ACTA IMEKO
ISSN: 2221-870X
June 2018, Volume 7, Number 2, 102-109

A Vision-based navigation system for landing procedure

Silvio Del Pizzo, Umberto Papa, Salvatore Gaglione, Salvatore Troisi, Giuseppe Del Core
Department of Science and Technology, University of Naples "Parthenope", Naples, Italy

Section: RESEARCH PAPER

Keywords: UAS; close range photogrammetry; camera images; flight mechanics; embedded electronic platform

Citation: Silvio Del Pizzo, Umberto Papa, Salvatore Gaglione, Salvatore Troisi, Giuseppe Del Core, A Vision-based navigation system for landing procedure, Acta IMEKO, vol. 7, no. 2, article 18, June 2018, identifier: IMEKO-ACTA-07 (2018)-02-18

Section Editor: Marcantonio Catelani, University of Florence, Italy

Received January 11, 2018; In final form March 26, 2018; Published June 2018

Copyright: © 2018 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This study was partially financed by a grant in the framework of the project "Studio e prototipazione di servizi innovativi Meteo-Marini in supporto alla Navigazione" of the University of Naples, Italy.

Corresponding author: Salvatore Troisi, e-mail: salvatore.troisi@uniparthenope.it

ABSTRACT
An autonomous vision-based landing system was designed, and its performance was analysed and measured on an Unmanned Aircraft System (UAS). The system relies on a single camera to determine the vehicle position and attitude with respect to a well-defined landing pattern. The developed procedure is based on the photogrammetric Space Resection Solution, which provides the camera position and attitude starting from at least three non-aligned reference control points whose coordinates can be measured in the image frame. Five coloured circular targets were placed on a specific landing pattern, and their 2D image coordinates were extracted through a dedicated algorithm. The aim of this work is to compute a precise UAS position and attitude from a single image, in order to ensure a good approach to the landing field. This procedure can be used in addition to, or as a replacement for, GPS tracking, and it can be applied when the landing field is movable or located on a moving platform; in this case the UAS follows the landing pattern until the landing phase is completed.

1. INTRODUCTION

During the last years, the market of drones has been growing continually. The FAA (Federal Aviation Administration) has recently proposed the term Unmanned Aircraft System (UAS) to define an aircraft without a human pilot on board, controlled by an operator on the ground. The term UAS includes the vehicle (UAV - Unmanned Aerial Vehicle) and a number of subsystems such as its payload, the control station, the communication system, the navigation system and the support system. In this work the attention is focused on the navigation system, and the performance of the proposed approach was investigated using a VTOL (Vertical Take-Off and Landing) UAS. In most of the typical applications the possibility of hovering over the surveyed area is very important. VTOL aircraft provide many advantages over Conventional Take-Off and Landing (CTOL) aircraft; the most notable ones are the capability of hovering in place and the small area required for take-off and landing. Among VTOL aircraft, such as conventional helicopters, tilt-rotor craft and fixed-wing aircraft with directed jet thrust, the quadrotor is very frequently chosen, especially in academic research on mini or micro-size UAS: it is an effective alternative to the high cost and complexity of conventional rotorcraft, thanks to its ability to hover and move without the complex system of linkages and blade elements present in a standard single-rotor vehicle [1]. In open sky, the UAS position can easily be obtained by GNSS receivers, which can assure the required accuracy; in harsh environments, however, the flight must be carefully scheduled so that at least four satellites are received for the whole session, and even this is sometimes not enough [2], [3]. In such conditions the use of stand-alone positioning systems, such as inertial or vision-based systems, is strongly recommended. Several vision-based techniques assisting the automatic landing phase have been proposed in the literature.
In [4] the authors use the Recursive Multi-Frame Planar Parallax algorithm to obtain accurate dense elevation and appearance models of the terrain, making use of a single camera placed on board an aerial platform; in theory, with perfectly registered imagery, such an algorithm can produce range data with an error expected to grow between linearly and with the square root of the range. In [5] a vision system able to provide the position and attitude of a UAS within 5 cm along each axis and 5 degrees around each axis of rotation is presented; such a system uses a planar landing target. Tang et al. presented an algorithm for estimating the position and attitude of a UAS relative to a runway by projecting arbitrary points from the runway into the image frame [6]. In [7] the vision-based tracking and landing approach exploits the enhancement of the red, green and blue (RGB) colour information of the landing target; this approach shows fast and robust performance under different lighting conditions. In [8] the authors describe a 3D vision system able to estimate the height over the ground. In [9] a forward-looking camera is used to guide the landing vehicle onto a runway. In [10] an approach for guidance and safe landing of a UAS, based on the concept of reusing local features extracted by the vision system, is proposed. In [11] the authors present an approach based on the visual SLAM (Simultaneous Localization and Mapping) algorithm that enables a MAV (Micro Aerial Vehicle) to autonomously determine its location and consequently stabilize itself; this approach does not require any a priori information about the environment or any known pattern. In [12] a Wii remote infrared (IR) camera is used as the main sensor, allowing robust tracking of a pattern of IR lights in the absence of direct sunlight; the position and orientation relative to the IR pattern are estimated at a frequency of approximately 50 Hz.
In this paper an alternative approach is proposed to estimate the location and orientation of the onboard camera (and consequently of the UAS carrying it), specifically for a quadcopter, through a vision-based methodology. In particular, the photogrammetric method of the Space Resection Solution (SRS) is adopted.
The proposed approach produces a continuous series of outputs that guide the UAS towards the landing area, on which a specific pattern is placed. The developed system provides, for the whole landing procedure, the deviation from the planned track, which could be used by an autopilot to correct the flight path. Lastly, it is important to specify that the whole system is embedded onboard, so that on the ground there is only a landing pattern containing specific coded targets.

2. VISUAL ESTIMATION PROCEDURE

The vision-based landing system is based on the processing of a single image. The goal is to determine the camera orientation parameters (position and attitude) in a fixed reference system. A specific landing pattern, composed of coloured circular targets (Figure 5), was designed in order to implement such a reference system. Indeed, the determination of the full orientation assumes the availability of enough information in object space. The mapping of a point in object space 𝐱 into a point in camera space 𝐱′ (expressed in homogeneous coordinates) is fully described by a projection matrix 𝐏 [13]:

𝐱′ = 𝐏𝐱   (1)

The projection matrix is a 3×4 matrix that describes the orientation of a pinhole camera. In photogrammetry the orientation is divided into two parts:
• the external orientation, which describes the position and the attitude of the camera;
• the internal orientation, which describes the internal camera parameters such as focal length, sensor size, etc.
Both sets of parameters are absorbed into the projection matrix. In photogrammetry two methods are known to compute the orientation of a single camera starting from object points and their homologous points in camera space: the Direct Linear Transform (DLT) and the Space Resection Solution. The former computes the projection matrix directly by solving a linear system, but it needs at least six corresponding image and object points; furthermore, the solution is not possible if the object points are coplanar [14]. The space resection method is not linear, but it assures a solution with only three non-aligned points; furthermore, the theoretical precision of the SRS is better than that of the DLT [13]. The vision system described in this paper uses a customized Space Resection based algorithm. The procedure for a correct landing consists of the following stages:
• camera calibration, to compute all internal orientation parameters and the lens distortion coefficients;
• extraction of the point coordinates in the image (image space);
• computation of the external orientation parameters (orientation phase using the SRS).
The internal orientation parameters are obtained from the camera calibration procedure, while the external orientation parameters are estimated by means of the space resection method. The following section explains the overall system setup and the initial procedure that must be performed for a correct data extraction.
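As an illustration of equation (1), the following minimal Python sketch builds a projection matrix from assumed internal and external orientation parameters (the numbers are illustrative, not the calibration values of Table 1) and projects an object point of the landing pattern into pixel coordinates.

```python
import numpy as np

# Minimal sketch of equation (1): x' = P X, with P = K [R | -R X0].
# All numerical values are illustrative, not the calibration results of the paper.

fx = fy = 1900.0                      # focal length in pixels (assumed)
cx, cy = 960.0, 540.0                 # principal point at the centre of a 1920x1080 frame

K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])          # internal orientation (calibration matrix)

R = np.diag([1., -1., -1.])           # attitude: nadir-looking camera (180 deg rotation about x)
X0 = np.array([0., 0., 1.5])          # perspective centre 1.5 m above the landing pattern
P = K @ np.hstack([R, (-R @ X0).reshape(3, 1)])   # 3x4 projection matrix

X = np.array([0.10, 0.05, 0.0, 1.0])  # a target of the landing pattern (homogeneous coordinates)
x_h = P @ X                           # homogeneous image point
u, v = x_h[:2] / x_h[2]               # de-homogenisation gives pixel coordinates
print(u, v)                           # approx. (1086.7, 476.7)
```

The decomposition P = K[R | −RX0] separates exactly the two parameter sets discussed above: K is estimated once by the calibration of Section 2.1, while R and X0 are estimated on every image by the space resection of Section 2.3.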
2.1. Camera Calibration

In order to obtain the external orientation parameters with good accuracy, it is mandatory to calibrate the camera. Calibration is the process that determines the internal orientation of a camera and its distortion coefficients, in particular:
• the principal point or image centre (x0′, y0′);
• the focal length (f);
• the radial distortion coefficients (K1, K2).
One of the most used analytical camera calibration techniques was originally developed by Brown [15]. This method is often used in close-range photogrammetry to obtain the internal parameters with high accuracy [16].
The Raspberry camera module allows the focal length to be controlled manually, so that this parameter can be set to a specific and constant value. Furthermore, it can acquire a photo every second at full resolution (about 5 Megapixel); to avoid overloading the buffer and the computation, and to obtain images with low brightness, the resolution was set to 1920×1080 (about 2 Megapixel), corresponding to an HD video frame. The calibration was carried out using a set of coded circular targets, in order to obtain sub-pixel precision and to automate the calibration procedure. Table 1 shows the results of the calibration procedure; specifically, the standard deviations provided by the self-calibration procedure are reported in the third column.

Table 1. Internal orientation parameters of the Raspberry camera module.

Parameter                              Value                              Standard deviation
Focal length [mm]                      5.9409                             0.003
Sensor size [mm]                       width: 6.0035, height: 3.3750      0.006, 0.005
Principal point [mm]                   x: 2.997, y: 1.6808                5.5·10⁻⁴
Lens radial distortion coefficients    K1: −1.9·10⁻³, K2: 1.6·10⁻⁴        1.3·10⁻⁴, 1.9·10⁻⁵

2.2. Points extraction

Object recognition, i.e. the identification of a specific object in an image, is an important task in computer vision. Both the SRS and the DLT need the positions of the ground control points in the image frame. Circular targets were designed both to facilitate automatic detection and to assure a good marking accuracy: indeed, it is well known that sub-pixel accuracy can be achieved [17]. All the measurements refer to the centre of the circular target. Several methods exist to automatically detect circular targets in an image and to find their centres. They can be divided into two categories:
• methods requiring no initial approximation, which provide the coordinates of the centre of a circular target in a given image without any initial guess;
• methods requiring an initial approximation, which start from an approximate position of the target centre and are able to detect the centre with high accuracy.
Of course, the coordinates of the target centre are not sufficient to identify the corresponding ground control point (GCP). The matching is performed using a unique code for every target; in this work the codification is based on the colour of the target.
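The extraction step can be sketched with the OpenCV functions the paper relies on (the Circle Hough Transform discussed in Section 4.2). The HSV colour ranges, radii and thresholds below are illustrative assumptions, since the actual onboard values are not reported.

```python
import cv2
import numpy as np

# Sketch of the colour-coded target extraction. The HSV ranges below are assumed
# for five pattern colours; the actual thresholds used onboard are not reported.
HSV_CODES = {
    "red":    ((0, 120, 80), (10, 255, 255)),
    "green":  ((45, 80, 80), (75, 255, 255)),
    "blue":   ((100, 80, 80), (130, 255, 255)),
    "yellow": ((20, 80, 80), (35, 255, 255)),
    "purple": ((135, 60, 60), (165, 255, 255)),
}

def extract_coded_targets(image_bgr, r_min=10, r_max=60):
    """Return {colour_code: (u, v)} image coordinates of the detected circular targets."""
    gray = cv2.medianBlur(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 5)
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Circle Hough Transform; the edge map is computed internally with the Canny detector
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=2 * r_min,
                               param1=120, param2=40, minRadius=r_min, maxRadius=r_max)
    targets = {}
    if circles is None:
        return targets
    for circ in circles[0]:
        u, v, r = [int(round(val)) for val in circ]
        # classify the circle by the dominant colour inside the disc
        mask = np.zeros(gray.shape, np.uint8)
        cv2.circle(mask, (u, v), max(1, int(0.8 * r)), 255, -1)
        for code, (lo, hi) in HSV_CODES.items():
            inside = cv2.inRange(hsv, np.array(lo), np.array(hi))
            if cv2.mean(inside, mask=mask)[0] > 128:   # more than half of the disc matches
                targets[code] = (float(u), float(v))
                break
    return targets
```

The approximate radius range (r_min, r_max) plays the role of the input parameter mentioned in Section 3.2, which depends on the flight altitude above the pattern.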
2.3. Orientation

The camera orientation is computed by resection with respect to the landing frame. The resection is based on a linearization of the collinearity equations (2), which describe the transformation of the object coordinates (X, Y, Z) into the corresponding image coordinates (x′, y′):

$$x' = x_0 + c\,\frac{r_{1,1}(X - X_0) + r_{1,2}(Y - Y_0) + r_{1,3}(Z - Z_0)}{r_{3,1}(X - X_0) + r_{3,2}(Y - Y_0) + r_{3,3}(Z - Z_0)} + \Delta x'$$

$$y' = y_0 + c\,\frac{r_{2,1}(X - X_0) + r_{2,2}(Y - Y_0) + r_{2,3}(Z - Z_0)}{r_{3,1}(X - X_0) + r_{3,2}(Y - Y_0) + r_{3,3}(Z - Z_0)} + \Delta y' \qquad (2)$$

where the r_{i,j} are the elements of the 3D rotation matrix R, which rotates the image camera frame into the object reference system, while (X0, Y0, Z0) are the coordinates of the camera perspective centre in the object reference frame (Figure 1):

$$\begin{pmatrix} x' \\ y' \\ c \end{pmatrix} = \mathbf{R}\begin{pmatrix} X - X_0 \\ Y - Y_0 \\ Z - Z_0 \end{pmatrix} \qquad (3)$$

Figure 1. Two reference systems (RS): the object RS (XYZ) and the camera RS (x′y′z′).

The collinearity equation system (2) can be rewritten as the following system of correction equations:

$$x' = x_0' + v_{x'}, \qquad y' = y_0' + v_{y'} \qquad (4)$$

where x0′, y0′ are the image coordinates computed using approximate orientation parameters, while v_{x′}, v_{y′} are the corrections needed to obtain the correct x′ and y′ image coordinates, equation (5). The corrections are obtained by linearizing equation (2) around the approximate orientation parameters used to compute x0′, y0′:

$$v_{x'} = \left(\frac{\partial x'}{\partial X_0}\right)_0 dX_0 + \left(\frac{\partial x'}{\partial Y_0}\right)_0 dY_0 + \left(\frac{\partial x'}{\partial Z_0}\right)_0 dZ_0 + \left(\frac{\partial x'}{\partial \omega}\right)_0 d\omega + \left(\frac{\partial x'}{\partial \varphi}\right)_0 d\varphi + \left(\frac{\partial x'}{\partial \kappa}\right)_0 d\kappa$$

$$v_{y'} = \left(\frac{\partial y'}{\partial X_0}\right)_0 dX_0 + \left(\frac{\partial y'}{\partial Y_0}\right)_0 dY_0 + \left(\frac{\partial y'}{\partial Z_0}\right)_0 dZ_0 + \left(\frac{\partial y'}{\partial \omega}\right)_0 d\omega + \left(\frac{\partial y'}{\partial \varphi}\right)_0 d\varphi + \left(\frac{\partial y'}{\partial \kappa}\right)_0 d\kappa \qquad (5)$$

where ω, φ, κ are the Euler angles between the camera reference system and the object one. Generally, the initial approximate camera position does not provide good results, therefore position and attitude are corrected through the linearization procedure: the distance between the image coordinates computed with the linearized model and the measured ones approaches zero only after several iterations. At the end of the iterative process, the method provides an estimate of the camera attitude and position [18]. In order to solve the system in equation (4), at least three non-aligned points are necessary; such observations provide six equations, which allow the six external orientation parameters to be estimated. Further observations can be considered to obtain an overdetermined measurement model; in this case the solution is estimated using the classical least-squares adjustment method.
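A compact numerical sketch of the iterative resection described above is given below: the corrections of equation (5) are estimated by least squares at each iteration, with the Jacobian built numerically rather than from the analytical partial derivatives. Function names, the rotation convention and the fixed number of iterations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Rotation matrix from the omega-phi-kappa Euler angles (illustrative convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, XYZ, c):
    """Collinearity equations (2), with principal point and distortion terms set to zero."""
    X0, Y0, Z0, om, ph, ka = params
    d = (XYZ - np.array([X0, Y0, Z0])) @ rotation(om, ph, ka).T   # points in the camera frame
    return np.column_stack((c * d[:, 0] / d[:, 2], c * d[:, 1] / d[:, 2]))

def space_resection(xy_meas, XYZ, c, params0, iterations=10):
    """Gauss-Newton refinement of (X0, Y0, Z0, omega, phi, kappa) from >= 3 targets."""
    params = np.asarray(params0, dtype=float)
    for _ in range(iterations):
        v = (xy_meas - project(params, XYZ, c)).ravel()          # residuals v_x', v_y'
        J = np.zeros((v.size, 6))
        for j in range(6):                                       # numerical partials of eq. (5)
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (project(params + dp, XYZ, c) -
                       project(params - dp, XYZ, c)).ravel() / 2e-6
        params = params + np.linalg.lstsq(J, v, rcond=None)[0]   # least-squares corrections
    return params
```

With the five coloured targets of the landing pattern, xy_meas would contain the image centres extracted in Section 2.2 (reduced to the principal point and corrected for distortion) and XYZ their known coordinates on the pattern; with more than three points the system is overdetermined and the least-squares step plays the role of the classical adjustment.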
3. TEST SETUP

3.1. System Overview

The acronym UAS covers all vehicles flying in the air with no person onboard controlling the aircraft. This term is commonly used in the computer science and artificial intelligence communities, but terms like Remotely Piloted Vehicle (RPV), Remotely Operated Aircraft (ROA), Remotely Controlled Helicopter (RC-Helicopter) and Unmanned Vehicle System (UVS) are also often used. In this section the main components of the UAS quadrotor, which include the basic quad-rotor frame and the avionic system with a Wireless/Bluetooth shield [19], [20], are introduced. A simple electric quadrotor model (Quadcopter ARF 450 by Conrad [21]), as far as possible pre-assembled, was adopted. In the professional field, such UAS are already used for various tasks (surveillance, 3D mapping, search and rescue, etc.). This high-quality "toy" is manually controlled by a remote control, while a microprocessor stabilizes it through position controls and acceleration sensors. Its size is reported in Table 2. The UAS employed in this work (shown in Figure 2) is a mini UAS, in accordance with the classification provided in [22], [23].

Figure 2. The adopted small quad-rotor (Conrad 450 ARF, 35 MHz).

Table 2. Size of the adopted UAS.

Weight          670 g
Max weight      1300 g
Main rotor Ø    260 mm
Height          165 mm
Length/Width    450×450 mm²

3.2. Landing System Design

The design of the proposed landing system is based on two main elements:
• an embedded system that contains all the hardware for image acquisition and data extrapolation;
• a landing pattern, composed of a metallic platform on which several circular targets are located.
The embedded system is made up of a Raspberry camera module and a Raspberry Pi. The sensors are placed at the bottom of the UAS structure, in order to ensure that the camera optical axis is close to the nadiral direction (Figure 3). The camera module is connected to the CSI port, located behind the Ethernet port, and the camera software is enabled. Figure 4 shows a scheme of the onboard landing system and the applied sensors. The system works in standalone mode, with an ARM11 processor running at 700 MHz which executes the algorithm.

Figure 3. The bottom of the UAS equipped with the designed system.

Figure 4. On-board vision system scheme.

An optional sensor suite for the landing platform consists of low-cost pressure, temperature and humidity sensors. For the evaluation of the current vertical coordinate, a barometric height measurement above MSL (Mean Sea Level) was used. Alternatively, it is possible to use an ultrasonic sonar sensor, as designed in previous works [24]. Furthermore, it is possible to add a GPS module to extract the altitude and position of the UAS for comparing and merging data. An ad-hoc software was loaded on this system, allowing the vehicle position and attitude to be computed starting from the extraction of the contour and centre of the circular targets present in the acquired image. Indeed, as mentioned above, the system is based on a specific landing field (shown in Figure 5), where several coloured circular targets are located. The software is able to recognize a circular target, coded by colour, and to extract its coordinates (in pixels). This is accomplished using a Circle Hough Transform function developed through the OpenCV libraries. The input parameter needed for a reliable detection of the circles in the image is the approximate circle radius expressed in pixels, which depends on the range of altitudes of the UAS. Figure 6 shows how the software detects the target centres on a generic image in real time. The landing pattern dimensions were designed for a range of altitudes between 20 and 150 cm, since this range is typical for the characterization of the UAS landing procedure.

Figure 5. Basic landing pattern.

Figure 6. Image target recognition and centre detection.
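Putting the pieces together, the onboard loop can be sketched as follows. It reuses the hypothetical extract_coded_targets and space_resection functions of the previous sketches; the pattern coordinates, focal length, camera device and initial guess are illustrative assumptions, not the actual system configuration.

```python
import numpy as np
import cv2

# Illustrative onboard loop: acquire a frame, extract the coloured targets, run the
# space resection and output the deviation from the planned track over the pattern.
# extract_coded_targets and space_resection are the sketches of Sections 2.2 and 2.3.
PATTERN_XYZ = {                       # assumed object coordinates of the coloured targets [m]
    "red":    (0.00, 0.00, 0.00),
    "green":  (0.30, 0.00, 0.00),
    "blue":   (0.00, 0.30, 0.00),
    "yellow": (0.30, 0.30, 0.00),
    "purple": (0.15, 0.15, 0.05),
}
C_PIX = 1900.0                        # focal length in pixels (assumed)
PRINCIPAL_POINT = np.array([960.0, 540.0])

def landing_guidance(frame, planned_position, params_guess):
    """Return the position/attitude estimate and the deviation from the planned track."""
    targets = extract_coded_targets(frame)                  # Section 2.2 sketch
    if len(targets) < 3:                                    # SRS needs >= 3 non-aligned points
        return None
    codes = sorted(targets)
    xy = np.array([targets[c] for c in codes]) - PRINCIPAL_POINT   # reduce to the image centre
    xyz = np.array([PATTERN_XYZ[c] for c in codes])
    params = space_resection(xy, xyz, C_PIX, params_guess)  # Section 2.3 sketch
    deviation = params[:3] - np.asarray(planned_position)   # input for the autopilot
    return params, deviation

cap = cv2.VideoCapture(0)             # Raspberry camera exposed as a video device (assumption)
ok, frame = cap.read()
if ok:
    print(landing_guidance(frame, planned_position=[0.15, 0.15, 1.0],
                           params_guess=[0.15, 0.15, 1.0, np.pi, 0.0, 0.0]))
```

The altitude of the barometric or sonar sensor mentioned above could be used to initialise the Z0 component of params_guess at each cycle.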
4. EXPERIMENTAL SET-UP

In order to evaluate the accuracy and precision achievable with the proposed methodology, a validation procedure was designed and then performed. The aim of this validation procedure is to assess the potential accuracy and precision; therefore, a comparison between the SRS and the Bundle Adjustment (BA) solutions was carried out. BA is a well-known numerical method used to compute the positions of multiple images using tie-points and ground control points; this technique assures high reliability and integrity of the solutions. The validation procedure was conducted in the following four steps:
1. design and building of the landing pattern, using two types of circular targets: ring-coded circular targets and coloured circular targets;
2. photogrammetric survey of the landing pattern, following the classical procedure employed in close-range photogrammetry;
3. computation of the camera positions and orientations by BA, using all the circular targets, and by SRS, using only the coloured circular targets;
4. comparison of position and orientation for each camera, according to the BA and SRS results.

4.1. Design of landing pattern

The landing pattern is composed of an aluminium plane on which at least five coloured circular targets were attached. In order to obtain high reliability and precision during the validation procedure, about 40 ring-coded circular targets were attached both on the plane face and on the three raised points (Figure 5). Generally, the precision in object space can reach the 1:10 000 to 1:50 000 range, using metric cameras and employing a sufficient number of well-spaced targets. Although the SRS is not affected, in terms of solution stability, by the planar arrangement of the targets, the BA provides more reliable and precise results if the targets are distributed in space rather than on a plane. A cross at the centre of each coded target (Figure 7) allows the distance between two points to be measured with a simple calliper. Indeed, the BA in free-network adjustment (solution without constraints) determines the positions and orientations of the cameras within an arbitrary reference system and up to a scale factor; the measured distance is then applied to the solution in order to fix the scale of the photogrammetric model.

Figure 7. Coded circular target.

4.2. Photogrammetric Survey

Photogrammetry is a technique for obtaining the 3D model of an object through a dataset of images. The operational stages of a photogrammetric survey can be summarized in three steps:
• image acquisition, performed in-field and based on several photos taken with a calibrated camera;
• object measurements, also performed in-field and necessary to scale the photogrammetric model;
• image measurement, performed in the laboratory: during this step the operator has to recognize homologous points on at least two images.
Image acquisition can be performed with a consumer camera, but a calibration procedure should be carried out before the acquisition in order to obtain reliable results. The acquisition was performed following the classical guidelines for a photogrammetric close-range survey: the 3D network of images should be optimized according to the precision, reliability and accuracy of the measurements [25]. The 3D network realized for this work is shown in Figure 8.

Figure 8. 3D network of the photogrammetric survey.

The crosses located at the centre of the coded targets allow precise measurements of the distances between the targets. A calliper was used to measure the distance between two targets (rather, between the associated crosses), in order to give a correct scale to the photogrammetric model. The image measurements are practically fully automated: they can be carried out employing coded targets or the natural texture of the object. Circular coded targets offer a sub-pixel image point measurement, which allows high accuracy to be obtained during the subsequent stages. Specifically, the detection accuracy of the centre of a circular target on the image can be up to 1:50 of a pixel, yielding typical measurement accuracies on the object in the range of 1:100 000 to 1:200 000 [17]; the first (the lower) value corresponds to an accuracy of 0.1 millimetres for an object of 10 metres.
In this work, the measurements of the image coordinates of the targets were carried out employing two different methods:
• LSM: Least Squares Matching is a powerful technique for all kinds of data matching problems [26]. Here, its application to image matching was used through the software PhotoModeler Scanner. Due to the perspective angle, the circular targets present on the scene may appear in the image as ellipses. The LSM is able to detect the correct centre because it applies an affine transformation to recognize the shape correctly. This approach achieves sub-pixel precision and is widely employed in industrial photogrammetry. Unfortunately, the LSM is not available in the OpenCV library.
• CHT: the Circle Hough Transform is a feature extraction technique for detecting circles in images. This method is based on the application of the Hough Transform [27], [28] to an edge map, obtained by applying the Canny edge detector [29] with automatic thresholding to a single image. This approach is suitable for automatic recognition in real time, although the detected centre of the circle is not always accurate, due to the perspective angle of view, as previously described. Furthermore, it is already available in the OpenCV library.

4.3. Camera orientation

The orientation is the fundamental step of the photogrammetric procedure; in this work two types of orientation were performed: Bundle Adjustment and Space Resection. The former is generally performed in the laboratory and takes as input a large image dataset and many object measurements; the output is the set of external orientation parameters of the entire dataset of images. In photogrammetry the term "orientation parameters" expresses both the position and the attitude of a camera. The bundle adjustment was performed using all the circular targets available on each image; this computation was carried out in post-processing with the commercial software PhotoModeler. On the other hand, the image orientation parameters were also determined with the SRS, employing only the coloured circular targets and the object measurements. Such a methodology is completely independent from the BA and works on a single image, without any relationship with other images. The SRS algorithm was initially developed in the Matlab environment. It is a hybrid procedure combining a direct solution and an iterative one. First, the algorithm searches, among the target centres visible in the generic image, the three targets forming the triangle with the greatest possible area. Such points are used to determine the external orientation parameters with a non-iterative solution [30], [31]. This method solves the position of the camera by Ferrari's solution of quartic equations, finding the roots of a quartic polynomial; among the four possible solutions, the algorithm rejects the complex ones and retains only the real ones. In order to resolve the remaining ambiguities, the iterative method is performed using the solutions obtained with the direct method as an estimate of the initial parameters. Both the convergence factor and a statistical analysis of the re-projection residuals provide a robust index to resolve the ambiguity. This procedure was completely developed by the authors of the paper and was then transferred to the embedded PC of the drone. The non-iterative method allows the convergence of the iterative approach to be sped up, obtaining a stable solution even with high attitude angles.
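The selection of the three targets spanning the largest triangle, used to seed the direct solution, can be sketched by a simple exhaustive search over the detected centres; the function and variable names below are illustrative.

```python
from itertools import combinations
import numpy as np

def largest_area_triple(centres):
    """Return the indices of the three image points forming the triangle of maximum area.

    centres: (N, 2) array of detected target centres, N >= 3. A near-zero maximum
    area would indicate (almost) aligned points, for which the resection is not solvable.
    """
    pts = np.asarray(centres, dtype=float)
    best, best_area = None, -1.0
    for i, j, k in combinations(range(len(pts)), 3):
        v1, v2 = pts[j] - pts[i], pts[k] - pts[i]
        area = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])   # half the 2D cross product
        if area > best_area:
            best, best_area = (i, j, k), area
    return best, best_area
```

With the five targets of the pattern the search involves only ten triples, so the exhaustive approach is negligible compared with the cost of the resection itself.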
4.4. Comparison

The comparison is the last and fundamental step to assess the achievable precision and accuracy. The results obtained with the Bundle Adjustment procedure are more robust than those given by the Space Resection one, because the redundancy of the Bundle Adjustment is much higher and the network is well designed. Therefore, in order to evaluate the accuracy of the results, the BA solution is taken as the reference.

5. RESULTS

As reported in the previous section, to estimate the achievable precision and accuracy, a comparison between the reference position and attitude, provided by the BA solution, and the values estimated by the SRS was carried out. The error analysis was performed in terms of error mean, standard deviation and RMSE (Root Mean Square Error) for each acquired image, considering two different methods of target detection: LSM and CHT. Of course, a thorough inspection of the photogrammetric survey is indispensable first. The survey was carried out using 14 images taken at 1.5 metres and 37 coded targets. The obtained network is shown in Figure 8 and assures an average intersection angle of 67 degrees among all rays; furthermore, each image covers at least 28 points. The target coordinates are estimated with high precision: indeed, the overall RMS vector length in 3D is 0.082 mm, while the precision values of each camera are reported in Table 3. Such values are estimated using a classical least-squares bundle block adjustment with all 37 coded targets.

Table 3. Precision of the camera position (expressed in mm) and camera attitude (expressed in degrees) after a classical photogrammetric Bundle Adjustment procedure.

Name   X [mm]   Y [mm]   Z [mm]   ω [deg]   φ [deg]   κ [deg]
C1     0.35     0.36     0.26     0.0239    0.0226    0.0092
C2     0.34     0.33     0.20     0.0236    0.0238    0.0086
C3     0.32     0.28     0.22     0.0212    0.0197    0.0099
C4     0.32     0.35     0.21     0.0250    0.0214    0.0090
C5     0.27     0.28     0.19     0.0219    0.0197    0.0087
C6     0.26     0.32     0.24     0.0235    0.0203    0.0098
C7     0.24     0.32     0.27     0.0230    0.0186    0.0110
C8     0.22     0.25     0.26     0.0211    0.0165    0.0120
C9     0.28     0.24     0.26     0.0190    0.0170    0.0110
C10    0.34     0.38     0.26     0.0250    0.0222    0.0105
C11    0.30     0.34     0.23     0.0231    0.0223    0.0086
C12    0.32     0.32     0.20     0.0231    0.0232    0.0090
C13    1.61     1.60     0.32     0.0584    0.0584    0.0092
C14    1.21     1.19     0.21     0.0504    0.0510    0.0090

The achieved high precision (millimetres and sub-millimetres for the position) allows this solution to be taken as the reference. Indeed, for each camera position an independent solution was computed using the developed SRS algorithm; an accuracy analysis was then performed on the differences between the reference solution (BA) and the SRS solutions. Specifically, such an analysis was conducted following two different approaches, in both cases considering only the 5 coloured circular targets. The first approach uses the LSM to determine the image coordinates of the targets, while the second one uses the CHT method for target detection. The former provides good results in almost every image, even when the image is strongly tilted, because it is able to determine the correct target centre even if the circular target appears in the image as an ellipse [32]. The comparison between the BA solution and both SRS solutions is reported in Table 4, Table 5 and Figure 9, where the absolute differences in the 3D positions of the cameras and the angle differences are reported.
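A minimal sketch of how such differences and their statistics (mean, standard deviation, RMSE) can be computed from the two sets of solutions is reported below; the assumption that the two arrays are ordered consistently, camera by camera, is an illustration choice.

```python
import numpy as np

def error_statistics(srs_positions, ba_positions):
    """Mean, standard deviation and RMSE of the 3D position differences (same units as input).

    srs_positions, ba_positions: (N, 3) arrays with the perspective centres estimated by
    the SRS and by the Bundle Adjustment reference, ordered camera by camera.
    """
    diff = np.linalg.norm(np.asarray(srs_positions) - np.asarray(ba_positions), axis=1)
    return diff.mean(), diff.std(), np.sqrt(np.mean(diff ** 2))
```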
The error analysis reported in Table 4 and Table 5 provides the potential accuracy of the designed landing system. Specifically, the LSM detection mostly achieves a sub-centimetre accuracy, while the accuracy reached by the CHT approach loses one order of magnitude. Such differences are caused by the more precise detection of the target centroid obtained with the LSM: the image coordinates entering equation (1) are estimated with higher accuracy than with the CHT approach. Consequently, the camera position obtained with the CHT approach is less accurate than the LSM one, as shown in Table 4, Table 5 and Figure 9.

Table 4. Accuracy of the camera position (expressed in mm) and of the camera attitude (expressed in degrees) using LSM detection; the last column reports the relative error.

Name   ω [deg]   φ [deg]   κ [deg]   3D error [mm]   RE [1:]
C1      0.002    -0.014    -0.034     0.24           7292
C2     -0.046     0.116     0.023     1.96            893
C3      0.080     0.052     0.029     1.70           1029
C4      0.020    -0.034    -0.009     0.75           2333
C5      0.009     0.110     0.014     1.57            115
C6      0.124    -0.124     0.064     2.71            646
C7      0.046    -0.037    -0.032     1.05           1667
C8     -0.059    -0.102     0.008     2.08            841
C9     -0.321    -0.083    -0.004     5.16            339
C10    -0.027     0.077     0.009     1.25           1400
C11     0.026     0.016     0.045     0.55           3182
C12    -0.157    -0.134    -0.057    10.69            164
C13    -0.008     0.046     0.082     1.31           1336
C14    -0.032     0.084     0.061     2.11            829

Table 5. Accuracy of the camera position (expressed in mm) and of the camera attitude (expressed in degrees) using CHT detection; the last column reports the relative error.

Name   ω [deg]   φ [deg]   κ [deg]   3D error [mm]   RE [1:]
C1      1.588    -2.615    -0.946    53.84           7292
C2      0.044    -1.190    -0.732    15.67            893
C3     -3.075    -0.331     0.365    43.25           1029
C4      0.404    -1.298     0.010    22.43           2333
C5     -1.287     0.536    -0.086    18.77           1115
C6      0.365    -1.063     0.110    16.95            646
C7      0.021     0.137     0.188     4.92           1667
C8      0.524     0.428     0.269    12.00            841
C9      0.591    -1.732    -0.189    32.98            339
C10    -0.885     1.275    -0.429    26.75           1400
C11    -0.171    -0.232     0.097     7.44           3182
C12     0.048     0.009     0.045     1.30            164
C13    -1.643    -0.342     0.593    38.24           1336
C14     0.269    -0.461    -0.940    35.66            829

Figure 9. Histogram showing the differences between the 3D positions of the perspective centres computed by BA and by SRS; the differences obtained with the CHT and LSM approaches are reported in green and blue, respectively.

6. CONCLUSIONS

Unmanned aircraft systems are very widely used by the civil and scientific communities. To ensure a suitable accuracy of the UAS navigation system, very expensive sensors could be used. The present study proposes a system, based on a commercial and cheap camera sensor, able to guide a UAS during the landing procedure. The performance of the vision-based navigation system was studied using real data. The system is part of an embedded platform for UAS landing path and obstacle detection, as reported in [33] and [34]. The analysis has highlighted that the proposed methodology is able to achieve high precision both in the position domain and in terms of attitude angles, not comparable with the performance of other commercial navigation systems (i.e. GNSS positioning systems). A further analysis was the comparison between the two photogrammetric methods, LSM and CHT, implemented to process the camera images. From such an analysis, it can be seen that the LSM approach provides more accurate results than the CHT one for all the considered figures of merit. The CHT approach provides a mean accuracy of 2.3 cm, against 0.23 cm for the LSM one, but the former approach is completely autonomous, while the latter needs an initialization procedure.

ACKNOWLEDGEMENT

This study was partially financed by a grant in the framework of the project "Studio e prototipazione di servizi innovativi Meteo-Marini in supporto alla Navigazione" of the University of Naples Parthenope.

REFERENCES

[1] Nonami, K., et al. Autonomous Flying Robots: Unmanned Aerial Vehicles and Micro Aerial Vehicles, Springer, Tokyo, Dordrecht, Heidelberg, London, New York, 2010.
[2] Ackermann, S., et al. Digital surface models for GNSS mission planning in critical environments. Journal of Surveying Engineering, 140.2 (2014): 04014001.
[3] Gaglione, S., Innac, A., Pastore Carbone, S., Troisi, S. Robust estimation methods applied to GPS in harsh environments. In Proceedings of the European Navigation Conference, Lausanne, 2017. DOI: 10.1109/EURONAV.2017.7954169
[4] Geyer, C., Templeton, T., Meingast, M., Sastry, S.S. The recursive multi-frame planar parallax algorithm. Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission, 3DPVT (2006), pp. 17-24. DOI: 10.1109/3DPVT.2006.135
[5] Sharp, C. S., Shakernia, O., Sastry, S. S. A vision system for landing an unmanned aerial vehicle. In Proceedings of the IEEE International Conference on Robotics & Automation, Seoul, 2011.
[6] Tang, D., Li, F., Shen, N., Guo, S. UAV attitude and position estimation for vision-based landing. Proceedings of the 2011 International Conference on Electronic and Mechanical Engineering and Information Technology, EMEIT 2011, pp. 4446-4450. DOI: 10.1109/EMEIT.2011.6023131
[7] Bi, Y., Duan, H. Implementation of autonomous visual tracking and landing for a low-cost quadrotor. Optik - International Journal for Light and Electron Optics, 2013, 124.18: 3296-3300.
[8] Yu, Z., Nonami, K., Shin, J., Celestino, D. 3D vision based landing control of a small scale autonomous helicopter. International Journal of Advanced Robotic Systems, March 2007, Volume 4, Issue 1, pp. 51-56.
[9] Miller, A., Shah, M., Harper, D. Landing a UAV on a runway using image registration. Proceedings of the IEEE International Conference on Robotics and Automation, 2008, pp. 182-187. DOI: 10.1109/ROBOT.2008.4543206
[10] Cesetti, A., et al. A vision-based guidance system for UAV navigation and safe landing using natural landmarks. Journal of Intelligent and Robotic Systems, 2010, 57.1-4: 233. DOI: 10.1007/s10846-009-9373-3
[11] Blösch, M., Weiss, S., Scaramuzza, D., Siegwart, R. Vision based MAV navigation in unknown and unstructured environments. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 21-28. IEEE.
[12] Wenzel, K.E., Masselli, A., Zell, A. Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle. Journal of Intelligent & Robotic Systems, 2011, 61.1: 221-238. DOI: 10.1007/s10846-010-9473-0
[13] McGlone, C., Mikhail, E., and Bethel, J. Manual of Photogrammetry. Bethesda, MD, USA: American Society for Photogrammetry and Remote Sensing, 2013.
[14] Faugeras, O. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993.
[15] Brown, D.C. Close-range camera calibration. PE&RS, 1971, Vol. 37(8), pp. 855-866.
[16] Fraser, C. S. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing 52.4 (1997): 149-159.
[17] Gutierrez, José A., and Armstrong, Brian S.R. Precision Landmark Location for Machine Vision and Photogrammetry: Finding and Achieving the Maximum Possible Accuracy. Springer Science & Business Media, 2007.
[18] Luhmann, T., Robson, S., Kyle, S., Harley, I.
Close Range Photogrammetry: Principles, Techniques and Applications, 2011.
[19] Papa, U., Del Core, G. Design and assembling of a low-cost UAV quadcopter, University of Naples "Parthenope", 2014.
[20] Timmins, H. Robot Integration Engineering a GPS Module with the Arduino. Practical Arduino Engineering, Springer, 2011.
[21] CONRAD, URL: www.conrad.com/ce/en/product/208000/QUADROCOPTER-450-ARF-35-MHz, accessed Jan. 2016.
[22] Eisenbeiss, H. A Mini Unmanned Aerial Vehicle (UAV): System Overview and Image Acquisition. International Workshop on "Processing and Visualization using High-Resolution Imagery", Institute for Geodesy and Photogrammetry, ETH-Hoenggerberg, Zurich, CH, 2004.
[23] Daponte, P., De Vito, L., Mazzilli, G., Picariello, F., Rapuano, S. "A height measurement uncertainty model for archaeological surveys by aerial photogrammetry", Journal of Measurement, Feb. 2017, vol. 98, pp. 192-198.
[24] Daponte, P., De Vito, L., Lamonaca, F., Picariello, F., Riccio, M., Rapuano, S., Pompetti, L. and Pompetti, M. "DronesBench: an innovative bench to test drones", IEEE Instrumentation & Measurement Magazine, vol. 20, no. 6, pp. 8-15, December 2017. DOI: 10.1109/MIM.2017.8121945
[25] Del Pizzo, S., and Troisi, S. Automatic orientation of image sequences in cultural heritage. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 38.5/W16, 2011.
[26] Grun, A. Adaptive least squares correlation: a powerful image matching technique. South African Journal of Photogrammetry, Remote Sensing and Cartography, 1985, 14(3): 175-187.
[27] Hough, P. V. C. Method and means for recognizing complex patterns, No. US 3069654, 1962.
[28] Duda, R. O., and Hart, P. E. Use of the Hough transformation to detect lines and curves in pictures, Commun. ACM, vol. 15, no. 1, pp. 11-15, 1972.
[29] Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Machine Intell., vol. 8, no. 6, pp. 679-714, 1986.
[30] Zeng, Z., and Wang, X. A general solution of a closed-form space resection. Photogrammetric Engineering and Remote Sensing 58.3 (1992): 327-338.
[31] Awange, J. L., and Grafarend, E. W. Explicit solution of the overdetermined three-dimensional resection problem. Journal of Geodesy 76.11-12 (2003): 605-616.
[32] Luhmann, T. Eccentricity in images of circular and spherical targets and its impact on spatial intersection. The Photogrammetric Record 29.148 (2014): 417-433. DOI: 10.1111/phor.12084
[33] Papa, U., Del Core, G., Giordano, G., & Ponte, S. Obstacle detection and ranging sensor integration for a small unmanned aircraft system. Metrology for AeroSpace (MetroAeroSpace), 2017 IEEE International Workshop (2017): 571-577.
[34] Papa, U., Del Core, G., & Picariello, F. Atmosphere effects on sonar sensor model for UAS applications. IEEE Aerospace and Electronic Systems Magazine (2016): 31(6), 34-40.