ACTA IMEKO, ISSN: 2221-870X, September 2016, Volume 5, Number 2, 64-70

Testing GoPro for 3D model reconstruction in narrow spaces

Fausta Fiorillo 1, Marco Limongiello 1, Belén Jiménez Fernández-Palacios 2
1 DICIV-Department of Civil Engineering, University of Salerno, Via Giovanni Paolo II, 132, 84084, Fisciano, Italy
2 Department of Cartographic and Terrain Engineering, University of Salamanca, Hornos Caleros, 50, 05003, Avila, Spain

ABSTRACT
The main objective of this paper is to analyse the potential as well as the limitations of an action camera (GoPro Hero 3 Black) in photogrammetric applications for architectural cultural heritage reconstruction. The investigations were carried out in a site of notable historical interest, "Villa di Giulia Felice" in Pompeii. In order to evaluate the work pipeline, the processing time and the accuracy of the output products obtained from fisheye camera images, three commercial image processing software packages were tested: Agisoft PhotoScan, Pix4Dmapper and 3DF Zephyr Aerial. Several comparisons among the final 3D models were carried out and the results are discussed. Despite the problems related to lens distortion and the small distance from the camera to the object (average distance ~80 cm), the test provided good results in terms of accuracy (average error of 2-3.5 cm) and reliability.

Section: RESEARCH PAPER
Keywords: photogrammetry; action camera; low-cost systems; archaeology
Citation: Fausta Fiorillo, Marco Limongiello, Belén Jiménez Fernández-Palacios, Testing GoPro for 3D model reconstruction in narrow spaces, Acta IMEKO, vol. 5, no. 2, article 9, September 2016, identifier: IMEKO-ACTA-05 (2016)-02-09
Section Editors: Sabrina Grassini, Politecnico di Torino, Italy; Alfonso Santoriello, Università di Salerno, Italy
Received: March 25, 2016; In final form: July 5, 2016; Published: September 2016
Copyright: © 2016 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Marco Limongiello, e-mail: mlimongiello@unisa.it

1. INTRODUCTION

In the archaeological field, 2D representations are still more widely used than 3D restitutions, especially for technical purposes. A photoplan is in fact often required to collect chromatic and metric information in a single file. However, due to intrinsic (e.g. non-planar surfaces) and/or extrinsic (e.g. reduced distance to the object) conditions, it is not always possible to produce one. The use of multi-image photogrammetry for close-range surveys is an alternative solution, especially in the Cultural Heritage field and under complex acquisition conditions [1], [2]. In recent years there have been important developments in close-range photogrammetry thanks to several well-known factors: (i) the application of computer vision algorithms that allow automatic camera calibration and the calculation of exterior orientation parameters; (ii) the improvement of photogrammetric software; and (iii) the increase in quality of low-cost digital cameras, which are now able to provide high-resolution images. The software packages developed by combining computer vision image-matching algorithms with photogrammetric principles allow accurate 3D reconstructions to be obtained and detailed orthoimages to be exported from the textured 3D models, following a well-defined workflow. The estimation of the geometric camera characteristics (focal length, centre of projection of the image, radial lens distortion coefficients) can be performed automatically within the software processing workflow. At the same time, these mathematical advances provide the flexibility to use fisheye cameras for photogrammetric applications as well. A fisheye lens has a wide coverage and helps to solve the problems related to narrow spaces and constrained camera stations: the wide field of view can reduce the number of shots needed to cover the entire scene and the duration of the acquisition phase, while the disadvantage is the extreme distortion of the images [3], [4].
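To make the calibration parameters mentioned above concrete, the snippet below sketches one common fisheye (equidistant) camera model with a polynomial radial distortion term, similar in spirit to the models that such software estimates internally. All numerical values are illustrative assumptions: the focal length in pixels is derived from the nominal 3 mm lens and 0.00155 mm pixel size quoted later in the paper, the principal point is placed at the image centre, and the distortion coefficients are left at zero; none of them are the parameters actually recovered for the GoPro in this test.

```python
import numpy as np

def project_fisheye(points_cam, f, cx, cy, k=(0.0, 0.0, 0.0, 0.0)):
    """Project 3D points (camera frame, z > 0) with an equidistant fisheye model.

    The ideal mapping is r = f * theta, where theta is the angle from the
    optical axis; the polynomial in theta models residual radial distortion.
    """
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    a, b = x / z, y / z
    r = np.hypot(a, b)
    theta = np.arctan(r)
    theta_d = theta * (1 + k[0]*theta**2 + k[1]*theta**4
                         + k[2]*theta**6 + k[3]*theta**8)
    scale = np.ones_like(r)
    nz = r > 1e-12
    scale[nz] = theta_d[nz] / r[nz]
    u = f * scale * a + cx
    v = f * scale * b + cy
    return np.column_stack([u, v])

# Illustrative use with assumed values (focal length ~ 3 mm / 0.00155 mm/px,
# principal point at the centre of a 4000 x 3000 frame).
pts = np.array([[0.2, -0.1, 0.8], [0.5, 0.3, 1.0]])   # metres, camera frame
print(project_fisheye(pts, f=1935.0, cx=2000.0, cy=1500.0))
```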
This paper presents the results of a test performed with a GoPro Hero 3 Black (Figure 1) used to survey an external wall surface in a very narrow space. The final aim is to obtain a detailed orthophoto from the feature-based 3D reconstruction of the wall surface. The photo processing was carried out with different photogrammetric software packages (Agisoft PhotoScan 1.2.3, Pix4Dmapper 1.4.46 and 3DF Zephyr Aerial 2.3061l), analysing the accuracy of the final 3D models.

Figure 1: Size of the GoPro Hero 3 Black.

2. DATA ACQUISITION

The area selected for the test is a perimetric wall on the east side of "Villa di Giulia Felice" (Figure 2) in the Archaeological Site of Pompeii, Italy. The Villa, located near Porta Sarno, was explored first between 1755 and 1757 and then in 1953. The survey campaign of "Villa di Giulia Felice" is part of the "Great Pompeii Project", whose aims are the protection, conservation, maintenance and restoration of the site. An orthophoto of the entire wall was required for archaeological studies. The use of a terrestrial laser scanner (TLS) was not considered because of the very narrow space: a corridor with a maximum distance between the two walls of 1.1 m. Given the camera-to-object distance of 0.3-1.1 m and the length and height of the wall (~20 m and ~4 m respectively, Figure 3), the wide-angle optics are an advantage in the image acquisition phase: this type of lens increases the field of view and thus decreases the number of shots to be taken. The project therefore involved the use of a GoPro Hero 3 Black with a nominal focal length of 3 mm and a pixel size of 0.00155 mm (sensor size 6.2×4.65 mm). The GoPro camera offers several image capture modes; in this research only the "Wide" mode was used, with the following specifications: image size 12 MP (4000×3000 pixels), vertical FOV 94.4° and horizontal FOV 122.6°.

Figure 2: Plan of "Villa di Giulia Felice" and the investigated perimetric wall (red).
Figure 3: The corridor and the surveyed perimetric wall of "Villa di Giulia Felice".

The advantages of the GoPro are the small size and low weight of the camera body (73 g for the Black edition), which facilitate photographic acquisition in narrow spaces. On the other hand, the disadvantage is that wide-angle lenses are more exposed to distortions (especially at the edges of the frame [5], [6]), which can generate a deformation known as the "bowl effect" [7]. Due to this factor, fisheye cameras are less commonly used for photogrammetric purposes, since they can lead to a loss of precision in the output [8]. In the present case study, the strong distortion effects are accentuated by the very small distances to the object.
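From the camera specifications and object distances given above, a rough ground sample distance (GSD) can be estimated with the usual pinhole relation GSD = pixel size × object distance / focal length. The sketch below uses the nominal values quoted in the text; the estimate is only indicative near the image centre, since the fisheye projection departs from the pinhole model towards the edges of the frame.

```python
# Approximate GSD of the GoPro "Wide" mode near the image centre,
# using the nominal specifications quoted in the text.
PIXEL_SIZE_MM = 0.00155   # physical pixel pitch
FOCAL_MM = 3.0            # nominal focal length

def gsd_mm(distance_m: float) -> float:
    """Pinhole approximation: GSD = pixel size * object distance / focal length."""
    return PIXEL_SIZE_MM * (distance_m * 1000.0) / FOCAL_MM

for d in (0.3, 0.8, 1.1):  # minimum, average and maximum camera-to-wall distance
    print(f"distance {d:.1f} m -> GSD ~ {gsd_mm(d):.2f} mm/pixel")
# At the ~0.8 m average distance this gives ~0.4 mm/pixel, of the same order
# as the ~0.5 mm average GSD reported for the survey.
```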
However, thanks to the implementation of specific camera calibration algorithms able to fit ultra-wide lens distortions, some commercial photogrammetric software can process photographs captured with wide-angle lenses directly, without the use of external software to create undistorted images.

The image acquisition plan included the use of a telescopic pole in order to photograph the entire height of the wall. We acquired 263 images in 11 horizontal strips, with an overlap of ~80 % (average baseline: 40 cm; minimum distance: 30 cm; maximum distance: 110 cm; Figure 4) and with parallel, horizontal camera axes. According to the sensor resolution and the acquisition distance, the average GSD (Ground Sample Distance) is 0.5 mm on the frontal area.

Figure 4: Camera-to-wall distances: the x-axis reports the distance from the object (m) and the ordinate the corresponding number of photos.

Using a total station (Leica TCR 705, with a nominal accuracy of 5 mm), 14 natural reference points uniformly distributed on the perimetric wall were measured (about one point per 5 m², Figure 5). The coordinates of the control points were expressed in the Gauss-Boaga reference system. The natural control points were chosen as features easily recognizable in the images. In this way the measured reference coordinates were marked on the photos and used as Ground Control Points (GCPs) to scale and correctly georeference the 3D reconstruction and to estimate the accuracy of the final model. All of these points were also used to gain better control of the error dispersion over the entire surveyed wall [9], [10].

Figure 5: Topographic points (GCPs).

3. DATA PROCESSING

In order to evaluate the work pipeline, the processing time and the accuracy of the output products obtained from action camera images, three commercial multi-image processing software packages were tested: Agisoft PhotoScan 1.2.3 (founded in 2006), Pix4Dmapper 2.0.104 and 3DF Zephyr Aerial (both founded in 2011). All of them rely on computer vision technology, using image-matching algorithms that combine the automation of computer vision techniques with photogrammetric principles. For each one, the whole 3D reconstruction process, from image orientation to textured 3D model creation up to orthophoto extraction, can be automated with minimal human intervention (e.g. setting of input parameters, check/control point input). The common steps in the standard workflow are: 1) calculation of the internal and external orientation parameters (with automated detection of key points and tie points) and creation of a sparse cloud; 2) extraction of a dense cloud; 3) construction of a polygonal model; 4) texture mapping; and 5) orthophoto production. Furthermore, in response to the increasing use of action cameras, the software packages are able to support fisheye lenses, implementing a specific camera model to fit ultra-wide lens distortions.

As a first step, the estimation of the camera orientations allows the alignment of the photographs and the reconstruction of a sparse, unscaled point cloud. The georeferencing process then applies a linear similarity transformation to the model, using three parameters for translation, three for rotation and one for scaling. At least three reference points (GCPs) must be known to calculate the transformation parameters.
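As an illustration of this seven-parameter (similarity) transformation, the sketch below estimates scale, rotation and translation from three or more GCP correspondences using the closed-form Umeyama/SVD solution. This is a generic sketch of the principle, not the specific algorithm implemented in the tested packages, and the GCP coordinates in the usage example are made up purely to exercise the function.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t such that dst ~ s * R @ src + t (Umeyama, SVD-based)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # keep a proper rotation (det = +1)
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical check: model coordinates scaled by 2 and shifted (metres).
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.5]])
dst = 2.0 * src + np.array([1000.0, 2000.0, 30.0])
s, R, t = similarity_transform(src, dst)
print(round(s, 3), np.round(t, 2))            # recovers scale 2 and the shift
```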
Using the GCPs it is also possible to optimize the estimated point cloud and the internal and external camera orientation parameters. The optimization procedure not only achieves higher accuracy in the calculation of the camera parameters but also corrects possible distortions (e.g. the bowl effect). During the optimization process, the sum of the re-projection errors and of the reference coordinate misalignment errors is minimized. The re-projection error is the difference between the position of a control point marked on the original image and its position estimated during the optimization process. To this end, the images were processed in a bundle adjustment incorporating metric information in the form of GCPs, which were inserted manually by assigning the coordinates measured with the Leica TCR 705 total station.

In order to develop a coherent comparison among the results obtained from the different programs, some precautions were taken: (i) in the three chosen programs the GCPs were marked on the same images; (ii) the whole process was performed on the same computer (a workstation with an i7-4790 CPU at 3.60 GHz, 32 GB of RAM and an NVIDIA GeForce GTX 970 with 2 GB); and (iii) the most similar settings were selected in the corresponding steps. Each software package produces a summary report with information on the whole process, such as the input data settings, the final camera calibration parameters, the re-projection error on the control points, etc.

In Agisoft PhotoScan, the data processing is based on 5 consecutive steps: 1) Align Photos (internal/external camera orientation and sparse cloud creation), 2) Build Dense Cloud, 3) Build Mesh, 4) Build Texture and 5) Export Orthophoto. In the first step, full-resolution images were used and the default parameter configuration was set, with a 40,000 key point limit and a 4,000 tie point limit; all the cameras were aligned (263/263). The camera calibration parameters were also optimized using the GCPs. The dense matching was run in "high quality"; this setting implies a reduction of the images to 1/4 of the pixels (a factor of two on each side). The dense point cloud has more than 19 million points and shows no information gaps.

The workflow of Pix4Dmapper is based on 3 steps: 1) Initial Processing (internal/external camera orientation and sparse cloud creation), 2) Point Cloud and Mesh (dense point cloud and textured polygonal model creation) and 3) DSM (Digital Surface Model), Orthomosaic and Index. In this last phase the orthophoto can be generated at a resolution requested by the user. Using the default configuration template for the parameter settings, the software could not align all the cameras. The number of automatically detected key points was therefore raised to 15,000 and, at the same time, the number of pairs for each image was increased (setting the maximum value of 12). With this configuration the software aligns all the images. The GCPs were also used to adjust the camera calibration, with an estimated mean RMS (Root Mean Square) error of 0.017 m. In the second step, for the construction of the dense cloud, the same settings as in Agisoft PhotoScan were used, i.e. an image scale factor of 1/4 (half resolution on each side). The final point cloud has more than 24 million points.
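Before moving to the 3DF Zephyr workflow, it is worth recapping the re-projection error that all three packages minimize during the bundle adjustment described at the beginning of this section. The sketch below computes the pixel residual of a single GCP in a single camera with a plain pinhole projection; the tested packages use their own (fisheye) camera models, so this is only a schematic illustration, and the rotation, translation, intrinsics and observed pixel coordinates are assumed values.

```python
import numpy as np

def reprojection_residual(X_world, uv_observed, R, t, f, cx, cy):
    """Pixel residual between an observed GCP image measurement and its
    re-projection through a simple pinhole camera (no distortion terms)."""
    X_cam = R @ X_world + t                 # world -> camera frame
    u = f * X_cam[0] / X_cam[2] + cx        # perspective projection
    v = f * X_cam[1] / X_cam[2] + cy
    return np.array([u, v]) - np.asarray(uv_observed)

# Illustrative (assumed) values: identity rotation, camera ~0.8 m from the wall.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.8])
res = reprojection_residual(X_world=np.array([0.10, -0.05, 0.0]),
                            uv_observed=np.array([2240.0, 1382.0]),
                            R=R, t=t, f=1935.0, cx=2000.0, cy=1500.0)
print("residual (pixels):", res, "norm:", np.linalg.norm(res))
# The bundle adjustment minimizes the sum of squared residuals of this kind
# over all cameras and points, together with the GCP misalignment terms.
```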
The data processing in 3DF Zephyr Aerial is subdivided into 5 consecutive steps: 1) Camera Orientation and Sparse Point Cloud generation (Structure from Motion); 2) Dense Point Cloud generation (Multi-View Stereo); 3) Mesh Extraction; 4) Texture Extraction; and 5) Orthophoto Extraction. In order to use similar processing settings, the parameters used were a limit of 15,000 key points per image in the first step and an image scale factor of 1/4 in the second step. The software aligned 257/263 images and the dense point cloud has more than 2 million points.

Table 1 schematises the workflow steps of the three software packages and Table 2 summarises the data processing outputs.

Table 1: Workflow steps.
         Agisoft PhotoScan   Pix4Dmapper                  3DF Zephyr
Step 1   Align Photos        Initial Processing           Sparse Point Cloud
Step 2   Build Dense Cloud   Point Cloud & Mesh           Dense Point Cloud
Step 3   Build Mesh          DSM, Orthomosaic and Index   Mesh
Step 4   Build Texture       /                            Textured mesh
Step 5   Export Orthophoto   /                            Orthophoto

Table 2: Output software comparison.
                       Agisoft PhotoScan   Pix4Dmapper   3DF Zephyr
Aligned cameras        263/263             263/263       258/263
Tie points extracted   230,508             460,922       378,435
Dense cloud points     19,507,347          24,817,288    2,580,530

Figures 6 to 8 show the final orthophotos obtained with Agisoft PhotoScan, Pix4Dmapper and 3DF Zephyr Aerial, respectively. It can be noted that: (i) there is no visible bowl effect (in the top view the wall profile is linear and shows no curvature); (ii) the 3DF Zephyr Aerial orthoimage (Figure 8) has problems and missing parts in the upper area of the wall.

Figure 6: Orthophoto and top view in Agisoft PhotoScan.
Figure 7: Orthophoto and top view in Pix4Dmapper.
Figure 8: Orthophoto and top view in 3DF Zephyr Aerial.

A first consideration concerns the processing time. The elaboration with 3DF Zephyr Aerial took longer than with the other two packages, whereas Agisoft PhotoScan and Pix4Dmapper completed the camera alignment and matching process in comparable times (in total about 6 hours: 3 for the internal/external calibration and 3 for the construction of the dense point cloud).

4. OUTPUT COMPARISONS

A first analysis was carried out on the spatial reconstruction of the position of each camera (external orientation). In the report released by each software package we find (i) the coordinates (X, Y, Z) of the projection centre of each aligned frame and (ii) the rotation angles about the x, y and z axes. For each software pair (Agisoft PhotoScan vs Pix4D, Agisoft PhotoScan vs 3DF Zephyr, Pix4D vs 3DF Zephyr), the differences between the XYZ coordinates and the rotation angles of the corresponding photos were calculated. Table 3 reports the average value, the standard deviation and the maximum value of the calculated differences. It can be noted that the largest deviations are obtained in the comparisons involving 3DF Zephyr Aerial. The camera positions and spatial orientations found by Agisoft and Pix4D during the alignment step are instead very similar: the comparison between Agisoft PhotoScan and Pix4Dmapper shows minimal residuals (in the order of centimetres) for the XYZ coordinates and differences lower than 1° for the rotations.

Table 3: Comparisons among the external camera orientation parameters (differences in position and rotation).
                        X (cm)   Y (cm)   Z (cm)   Rot. x (°)   Rot. y (°)   Rot. z (°)
Agisoft VS Pix4D
  MEAN                  0.76     1.26     0.62     0.23         0.11         0.22
  ST. DEV.              0.34     0.61     0.54     0.77         0.21         0.69
  MAX                   1.50     2.46     2.17     0.57         0.24         0.54
Agisoft VS 3DF Zephyr
  MEAN                  18.15    10.88    21.03    0.22         0.06         0.21
  ST. DEV.              11.18    7.51     3.73     0.14         0.06         0.14
  MAX                   48.26    37.95    44.82    9.60         2.86         9.42
Pix4D VS 3DF Zephyr
  MEAN                  17.66    11.29    21.61    0.17         0.11         0.17
  ST. DEV.              11.09    7.78     3.45     0.81         0.21         0.72
  MAX                   48.85    37.14    44.08    10.96        2.82         9.74

A second comparison was conducted by analysing the re-projection errors on the GCPs calculated by each software package. Table 4 reports, for each GCP, the corresponding re-projection error on the XYZ coordinates and the modulus of the error vector (Error 3D). The points were marked on the same images in each software package and by the same operator, in order to avoid introducing further uncertainties into the calculation. Agisoft PhotoScan presents the largest deviations (average error 3.5 cm, standard deviation 4 cm), while Pix4D has the lowest errors (average error 2.2 cm, standard deviation 2.6 cm). It can also be noted that points C06 and C10 show a very large deviation in every software package (about 7 cm and 10 cm respectively), probably due to errors in the topographical survey.

Table 4: GCP errors in the tested software.
            Error X (cm)               Error Y (cm)               Error Z (cm)               Error 3D (cm)
GCP     Agisoft  Pix4D   Zephyr    Agisoft  Pix4D   Zephyr    Agisoft  Pix4D   Zephyr    Agisoft  Pix4D   Zephyr
C01       1.91   -0.60   -1.04      -2.17    0.01    0.25      -1.06   -0.10   -0.39       3.08    0.61    1.14
C02       0.73    0.10    1.57      -1.28   -0.30   -0.05      -1.16    0.60   -0.37       1.88    0.68    1.61
C03       1.25   -0.10   -0.17      -1.33   -0.30   -0.02      -0.87    0.60   -0.08       2.02    0.68    0.19
C04      -0.03    0.70    0.74      -0.30   -0.80   -0.26      -0.51    1.00    0.15       0.59    1.46    0.80
C05      -7.27   -0.10    2.58      11.57    0.10    1.01      -0.56    1.00    0.80      13.67    1.01    2.89
C06      -0.60    1.00    1.64      -0.68   -0.50    0.16       7.16   -6.00   -7.21       7.22    6.10    7.40
C07      -0.22    0.60    0.95      -0.38   -0.60   -0.18       0.27    0.10    0.00       0.52    0.85    0.97
C08       0.97   -0.30    0.05      -1.92    1.20    1.50       1.05    0.60    0.65       2.39    1.37    1.64
C09       0.72    0.30    0.09      -1.82    0.90    0.99       0.47    0.50   -0.15       2.02    1.07    1.00
C10       3.38   -2.80   -2.81      -5.73    5.10    5.89      -7.57    7.70    8.44      10.08    9.65   10.67
C11       0.18    0.30    0.26       0.34   -0.80   -0.47       0.47   -0.40    0.04       0.61    0.94    0.53
C12      -1.36    1.30    0.91       2.21   -2.20   -2.27       1.32   -1.20    0.22       2.91    2.82    2.46
C13       0.23   -1.20   -0.23      -0.02   -0.40    0.15       0.55   -0.70    0.01       0.60    1.45    0.27
C14       0.11    0.01    1.30       1.51   -1.70    0.05       0.45   -0.50   -0.40       1.58    1.77    1.36
MEAN      1.35    0.67    1.02       2.23    1.07    0.95       1.68    1.50    1.35       3.51    2.18    2.35
ST.DEV.   2.38    1.02    1.32       3.82    1.71    1.78       2.99    2.78    3.10       3.99    2.58    3.00
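The "Error 3D" column of Table 4 is simply the Euclidean norm of the per-axis residuals. The short check below reproduces it, together with the mean and sample standard deviation of the column, for the Agisoft PhotoScan values taken from Table 4; it is only a sketch of how the summary statistics were obtained.

```python
import numpy as np

# Per-axis GCP errors (cm) for Agisoft PhotoScan, rows C01..C14 of Table 4.
err_xyz = np.array([
    [ 1.91, -2.17, -1.06], [ 0.73, -1.28, -1.16], [ 1.25, -1.33, -0.87],
    [-0.03, -0.30, -0.51], [-7.27, 11.57, -0.56], [-0.60, -0.68,  7.16],
    [-0.22, -0.38,  0.27], [ 0.97, -1.92,  1.05], [ 0.72, -1.82,  0.47],
    [ 3.38, -5.73, -7.57], [ 0.18,  0.34,  0.47], [-1.36,  2.21,  1.32],
    [ 0.23, -0.02,  0.55], [ 0.11,  1.51,  0.45],
])

error_3d = np.linalg.norm(err_xyz, axis=1)        # modulus of each error vector
print(np.round(error_3d, 2))                      # ~ [3.08 1.88 2.02 ... 1.58]
print("mean %.2f cm, st.dev. %.2f cm"
      % (error_3d.mean(), error_3d.std(ddof=1)))  # ~ 3.51 cm and ~3.99 cm
```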
A third analysis consisted of the statistical comparison of the georeferenced point clouds, after removing the points of the reconstructed 3D scene that do not belong to the investigated wall. A statistical analysis of the wall surface provides an estimate of the point cloud noise and of the deviations between the surfaces produced by the studied software packages. The left column of Figures 9 to 11 shows the comparison between pairs of georeferenced point clouds obtained in each software package (performed with CloudCompare). The deviation between the corresponding points of two point clouds is represented in false colour: the maximum deviations (red points) occur on the top of the wall and at the edges (e.g. doors, windows and roof). The point-to-point comparison results are plotted in the diagrams in the right column of Figures 9 to 11; the graphs report the deviation values (m) on the x-axis and the corresponding number of points on the y-axis.

Figure 9: Comparison between the photogrammetric point clouds of Agisoft PhotoScan and Pix4Dmapper (left) and the corresponding graph with the Weibull distribution (right); the x-axis reports the deviation values (m) and the y-axis the corresponding number of compared points of the two models.
Figure 10: Comparison between the photogrammetric point clouds of Agisoft PhotoScan and 3DF Zephyr Aerial (left) and the corresponding graph with the Weibull distribution (right); axes as in Figure 9.
Figure 11: Comparison between the photogrammetric point clouds of Pix4Dmapper and 3DF Zephyr Aerial (left) and the corresponding graph with the Weibull distribution (right); axes as in Figure 9.

The Agisoft PhotoScan vs Pix4D comparison presents the lowest deviations, whereas the Pix4D vs 3DF Zephyr comparison presents the largest. Consistently with the conclusions on the external camera parameters of Table 3, the 3DF Zephyr Aerial model shows greater deviations from the other two. The statistical analysis of these deviations shows that they follow a Weibull-type distribution, represented in the graphs by the grey continuous curve. The agreement between the fitted distribution and the frequency histograms is remarkably good (especially for the Agisoft vs Pix4D comparison). In the Weibull distribution, the peak of the curve identifies the deviation (on the x-axis) associated with the maximum number of compared points (on the y-axis). The peaks of the distributions correspond to deviation values of about 1.5 mm for the Agisoft vs Pix4D comparison, 3 mm for the Agisoft vs 3DF Zephyr comparison and 2 mm for the Pix4D vs 3DF Zephyr comparison. This means that the comparison between Agisoft PhotoScan and Pix4Dmapper yields a distribution showing that the two generated models are very similar, with a deviation of 1.5 mm for most of the compared points.
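The cloud-to-cloud deviations and the Weibull fit described above can, in principle, be reproduced with a nearest-neighbour point-to-point distance followed by a distribution fit. The sketch below shows this with SciPy on two hypothetical stand-in arrays (cloud_a, cloud_b), not the clouds actually exported in this work; it is only a schematic equivalent of the CloudCompare workflow used in the paper, and the printed Weibull mode plays the same role as the 1.5-3 mm peak deviations quoted above.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import weibull_min

def cloud_to_cloud_distances(cloud_a, cloud_b):
    """Point-to-point deviation: for every point of cloud_a, the distance to
    its nearest neighbour in cloud_b (a simple cloud-to-cloud metric)."""
    tree = cKDTree(cloud_b)
    distances, _ = tree.query(cloud_a, k=1)
    return distances

# Hypothetical stand-ins for two georeferenced wall point clouds (metres).
rng = np.random.default_rng(0)
cloud_a = rng.uniform([0.0, 0.0, 0.0], [20.0, 0.05, 4.0], size=(50_000, 3))
cloud_b = cloud_a + rng.normal(scale=0.002, size=cloud_a.shape)   # ~2 mm noise

dev = cloud_to_cloud_distances(cloud_a, cloud_b)
shape, loc, scale = weibull_min.fit(dev, floc=0.0)   # fit a Weibull curve
peak = scale * ((shape - 1) / shape) ** (1 / shape) if shape > 1 else 0.0
print(f"Weibull shape={shape:.2f}, scale={scale*1000:.2f} mm, "
      f"mode (peak) ~ {peak*1000:.2f} mm")
```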
Furthermore, the graphs of the Weibull distributions show higher noise in the comparisons of Agisoft PhotoScan and Pix4Dmapper with 3DF Zephyr Aerial. In Figure 12 the Weibull curves of the three comparisons are represented on the same scale. The diagram highlights that the distribution peak corresponds to the lowest deviation value ("Absolute distance" on the x-axis) for the Agisoft vs Pix4D comparison (orange curve) and to the highest value for Agisoft vs 3DF Zephyr (blue curve).

Figure 12: Weibull distributions of the absolute distances (the x-axis reports the deviation values and the y-axis the corresponding number of compared points).

A further statistical analysis was performed using Pearson's correlation. The Pearson product-moment correlation coefficient (or Pearson correlation coefficient, for short) is a measure of the strength of the linear association between two variables and is denoted by "r". Basically, a Pearson product-moment correlation attempts to draw a line of best fit through the data of the two variables. The Pearson correlation coefficient r can take values from +1 to -1. A value of 0 indicates that there is no linear relationship between the two variables; a value greater than 0 indicates a positive association (as the value of one variable increases, so does the value of the other), while a value less than 0 indicates a negative association. The standard method used to measure the significance of such an analysis is the p-value: a low p-value (e.g. below 0.05) is taken as evidence that the null hypothesis can be rejected (H0: ρ = 0, where the correlation coefficient ρ is the ratio between the covariance of the two variables and the product of their standard deviations), i.e. that the estimated correlation is highly significant. Figure 13 shows that in most cases (23 out of 35) the p-value is below 5 %; there is therefore a good correlation between the errors calculated by the various software packages.

Figure 13: Pearson correlation coefficient (r) and p-value (p-val).
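As a sketch of this last check, the Pearson coefficient and its p-value for a pair of error series can be computed with scipy.stats.pearsonr. The pairing chosen below (the per-GCP Error 3D values of Agisoft PhotoScan and Pix4D taken from Table 4) is only an illustrative assumption of the kind of comparison summarised in Figure 13.

```python
from scipy.stats import pearsonr

# Error 3D per GCP (cm), rows C01..C14 of Table 4.
agisoft = [3.08, 1.88, 2.02, 0.59, 13.67, 7.22, 0.52, 2.39, 2.02, 10.08,
           0.61, 2.91, 0.60, 1.58]
pix4d   = [0.61, 0.68, 0.68, 1.46, 1.01, 6.10, 0.85, 1.37, 1.07, 9.65,
           0.94, 2.82, 1.45, 1.77]

r, p_value = pearsonr(agisoft, pix4d)   # H0: no linear correlation (rho = 0)
print(f"r = {r:.2f}, p-value = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant correlation
# between the error series of the two packages.
```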
5. CONCLUSION

Although wide-angle lenses with less distortion exist, the GoPro camera was chosen for its size and lightness and for its control through a Wi-Fi connection, which facilitated and simplified the photographic acquisition. It was possible to use a simple and economical pole to take photos at different heights. Despite the narrow space, the horizontal and vertical coverage of the wall was obtained with low-cost equipment, a limited number of pictures and easy operations. The action camera is a low-cost instrument with many possible applications; it therefore seemed interesting to verify its behaviour and reliability for photogrammetric applications even in extreme situations such as this case study.

In a previous step of this research [11], an external program was used to correct the distortion of each image before importing it into the photogrammetric software. The initial studies and analyses were reported within the National Research Project PRIN 2010-2011 "Architectural Perspective: digital preservation, content access and analytics" (Research Unit of Salerno). These early studies demonstrated that an external pre-filtering of the images (for example with the "lens correction" tool of Adobe Photoshop) can be useful when the photogrammetric software has no internal algorithms to correct fisheye lens distortions. In fact, the first problem found during those studies was that Pix4Dmapper automatically read the camera parameters and processed the images by modelling the wide-angle lens distortions again, producing models with evident deformations. Agisoft PhotoScan, instead, allows the processing of images previously corrected for distortion with external software, but also in this case the model shows non-linear deformations. In conclusion, better results are obtained using the fisheye algorithms implemented in the photogrammetric software. The analyses developed here allow us to consider the accuracy achieved with a fisheye camera acceptable for photogrammetric applications (maximum average error of 3.5 cm, Table 4). The new versions of the photogrammetric software packages, which allow fisheye images to be used directly, guarantee better results and certainly simplify the elaboration.
The comparison between the tested programs shows that the results can be considered comparable: the maximum deviation, about 3 mm at the peak of the Weibull distribution, occurs between the Agisoft and 3DF Zephyr point clouds. The test allows us to conclude that the low-cost sensors of action cameras can be considered a useful tool for the survey of Cultural Heritage [12]. The main advantages of this technology are the cost of the equipment and the easy handling of the camera. Acquisitions with a fisheye camera are very useful to speed up and simplify the survey, above all for very close-range acquisitions. Finally, the survey approach tested in this case study proves to be efficient and successful.

REFERENCES

[1] F. Fiorillo, B. Jiménez Fernández-Palacios, F. Remondino, S. Barba, "3D surveying and modelling of the archaeological area of Paestum, Italy", VAR - Virtual Archaeology Review, vol. 4, no. 8 (2013), pp. 55-60.
[2] F. Fassi, L. Fregonese, S. Ackermann, V. De Troia, "Comparison between laser scanning and automated 3D modelling techniques to reconstruct complex and extensive cultural heritage areas", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W1 (2013), pp. 73-80.
[3] C. Strecha, R. Zoller, S. Rutishauser, B. Brot, K. Schneider-Zapp, V. Chovancova, M. Krull, L. Glassey, "Quality assessment of 3D reconstruction using fisheye and perspective sensors", ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-3/W4 (2015), pp. 215-222.
[4] J. Covas, V. Ferreira, L. Mateus, "3D Reconstruction with Fisheye Images - Strategies to Survey Complex Heritage Buildings", Proc. of the Digital Heritage International Congress, vol. 1 (2015), pp. 123-126.
[5] C. Balletti, F. Guerra, V. Tsioukas, P. Vernier, "Calibration of Action Cameras for Photogrammetric Purposes", Sensors, vol. 14, no. 9 (2014), pp. 17471-17490.
[6] J. Kim, M. Pyeon, Y. Eo, I. Jang, "An Experiment of Three-Dimensional Point Clouds Using GoPro", International Scholarly and Scientific Research & Innovation, vol. 8, no. 1 (2014), pp. 82-85.
[7] M. Bolognesi, A. Furini, V. Russo, A. Pellegrini, P. Russo, "Testing the low-cost RPAS potential in 3D Cultural Heritage reconstruction", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W4 (2015), pp. 229-235.
[8] F. He, A. Habib, "Target-based and feature-based calibration of low-cost digital cameras with large field-of-view", Proc. of the ASPRS 2015 Annual Conference, May 4-8, 2015, Tampa, Florida (2015), pp. 25-32.
[9] Y. Ninsalam, J. Rekittke, "Landscape architectural foot soldier operations", Sustainable Cities and Society, September (2015), pp. 158-167.
[10] A. Guarnieri, A. Vettore, S. El-Hakim, L. Gonzo, "Digital photogrammetry and laser scanning in cultural heritage survey", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXV, part B5 (2004), pp. 154-158.
[11] M. Limongiello, B. Jiménez Fernández-Palacios, "Action camera for metric archaeological documentation in narrow spaces", poster at the 1st International Conference on Metrology for Archaeology (2014).
[12] J. García Fernández, A. Alvaro Tordesillas, S. Barba, "An approach to 3D digital modeling of surfaces with poor texture by range imaging techniques. 'Shape from stereo' vs. 'shape from silhouette' in digitizing Jorge Oteiza's sculptures",
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W4 (2015), pp. 25-29.