METRIC ACCURACY EVALUATION OF DENSE MATCHING ALGORITHMS IN ARCHEOLOGICAL APPLICATIONS

C. RE (1), S. ROBSON (2), R. RONCELLA (3), M. HESS (2)
(1) CISAS, University of Padova, 35129 Padova (PD), Italy
(2) Department for Civil, Environmental and Geomatic Engineering, University College London, WC1E 6BT London, United Kingdom
(3) DICATeA, University of Parma, 43124 Parma (PR), Italy

Keywords: Cultural Heritage, Photogrammetry, Laser scanning, Scanner, Comparison, Accuracy

Abstract: In the cultural heritage field, the recording and documentation of small and medium size objects with very detailed Digital Surface Models (DSM) is readily possible through the use of high resolution and high precision triangulation laser scanners. 3D surface recording of archaeological objects can be easily achieved in museums; however, this type of record can be quite expensive. In many cases photogrammetry can provide a viable alternative for the generation of DSMs. The photogrammetric procedure has some benefits with respect to laser survey. The research described in this paper sets out to verify the reconstruction accuracy of DSMs of archaeological artifacts obtained by photogrammetric survey. The experimentation has been carried out on objects preserved in the Petrie Museum of Egyptian Archaeology at University College London (UCL). DSMs produced by two photogrammetric software packages are compared with the digital 3D model obtained by a state of the art triangulation colour laser scanner. Inter-comparison between the generated DSMs has allowed an evaluation of the metric accuracy of the photogrammetric approach applied to archaeological documentation and of the precision performance of the two software packages.

1. INTRODUCTION - BACKGROUND

The conservation of cultural heritage by the reconstruction of three-dimensional models continues to be an area of constant development and evolution by a wide variety of contributing scientific communities. Driving forces behind this great interest include: documentation in case of destruction or damage; creation of virtual museums and tourism; teaching and learning; conservation and restoration. Whilst ultimately the fields will merge, currently the most common 3D surface recording techniques can be divided into two main categories: photogrammetry and laser scanning. Whilst both techniques have their advantages, critical evaluation is often biased by the capabilities of the deployed systems and the individuals using them. In particular the use of image based techniques requires rigorous photogrammetric bundle adjustment, supported by metric survey, if both precision and accuracy are to be maintained [1].

Two key issues need to be considered when applying either technique. The first is optimizing capture to minimize occlusions, which give rise to holes in the data. In the case of a triangulation laser scanner both sensor and laser must "see" the same surface; hence the base separation and spot separation will dictate the variation in surface form that can be captured. In the photogrammetric case occlusions between images will similarly give rise to "holes" in the data, and variable image quality will contribute to geometrical error and "noise" in the reconstructed surfaces.
Optimization of the photogrammetric process to ensure good results commences with care in the preliminary stages (camera calibration and image orientation), combined with object selection to avoid failure cases where surface detail and contrasting texture are absent. Secondly, surface finish is a critical issue for all optical recording techniques. In both laser scanning and photogrammetry light must be reflected back from the surface to be recorded into a camera. In the case of scanning, the geometry and intensity of the light are defined by the scanning configuration and, in the better systems, by feedback based on the amount of light received at the detector. The photogrammetric technique will more typically deploy a photographic lighting setup, or even ambient light, in the recording process. In either case specular reflections, particularly from metallic or dark shiny stone finishes, will cause practical challenges.

The study described in this paper was carried out as part of activities for the creation of 3D models of archaeological finds of small and medium size, in order to investigate the performance of different photogrammetric software in comparison to a state of the art laser scanning system. Results have been drawn from 3D surveys of a number of archaeological finds preserved in the UCL Petrie Museum of Egyptian Archaeology. The objects examined were: the lid of a stone canopic jar (ca. 20 cm x 20 cm), a funerary cone (ca. 14 cm x 14 cm) and a cartonnage mask (45 cm x 30 cm x 9 cm). The first two objects were recorded with both laser scanning and photogrammetry, whilst the cartonnage was too fragile to manipulate and was imaged in-situ with photogrammetry only. A photogrammetric inter-software comparison was made between DSMs (Digital Surface Models) computed with a commercial system, BAE SocetSet, which is optimized to produce mapping products, and a research orientated system, Dense Matcher, developed by Parma University. For the two smaller objects, the resulting DSMs were compared to the dense point cloud generated by the high precision (ca. 20 µm depth uncertainty) Arius3D laser scanner housed at UCL to evaluate their metric accuracy.

2. OBJECT SURFACE RECORDING

2.1 3D Colour Laser Scanning

The funerary cone was 3D colour laser scanned using a recently upgraded Arius3D 'Foundation Model 150' Colour Scanner, unique in Europe, to create detailed object 'fingerprints' of a range of artefact types. The scanner, held in partnership between Arius3D and UCL, is able to deliver 3D coloured point data at a sampling interval of 0.1 mm (~250 dots per inch) with a range accuracy of better than 0.020 mm [2]. The scan head delivers XYZ, RGB and surface normal data along a 50 mm profile which is driven across the object surface by the controlled motion of a Coordinate Measurement Machine (CMM). The CMM scanning volume allows objects of up to 90 cm x 50 cm to be scanned. The scanner collects 3D geometry through a laser triangulation system, whilst colour is collected by analysis of the reflected light from red, green and blue lasers at 638 nm, 532 nm and 473 nm. These capabilities give the project the ability to produce state of the art 3D surface models with a level of geometric and colour standardization well suited to museum recording.
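To give a rough sense of the data volumes these specifications imply, the sampling interval can be converted to the quoted dots-per-inch figure and to an approximate point count with a few lines of arithmetic. The object footprint used below is an assumed value for illustration only.

```python
# Back-of-envelope check of the scanner sampling figures quoted above.
# The 200 mm x 200 mm footprint is an assumed value for illustration only.

sampling_interval_mm = 0.1       # point spacing quoted for the Arius3D scanner
footprint_mm = (200.0, 200.0)    # assumed flat object footprint

# 1 inch = 25.4 mm, so the equivalent sampling density in dots per inch:
dpi = 25.4 / sampling_interval_mm
print(f"Sampling density: {dpi:.0f} dpi")   # ~254 dpi, i.e. the ~250 dpi quoted above

# Points over a flat footprint at this spacing (a real, curved surface gives more):
points = (footprint_mm[0] / sampling_interval_mm) * (footprint_mm[1] / sampling_interval_mm)
print(f"Points over a flat {footprint_mm[0]:.0f} x {footprint_mm[1]:.0f} mm area: {points:,.0f}")
```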
Laser scanning, as with all optical techniques, is dependent on the interaction of the surface to be recorded with light. The highly directional nature of the laser illumination and the narrow acceptance angle of the detector in the triangulation geometry favour surfaces which reflect incident light in a diffuse or Lambertian way. Specular surfaces, particularly polished metals, present real difficulties that either saturate the sensor or result in the sensor receiving insufficient light from the surface to make a record. Such optical properties are not only dependent on the surface and imaging geometry but also on the wavelength of the light used to make the recording. State of the art systems such as those from Arius3D are highly capable of recording the geometry of many different surfaces, but require a combination of artistry and science for the successful reconstruction of colour captured from disparate views. Most scanning systems have a workflow that is designed to convert point cloud data into triangulation based models for subsequent visualisation and dissemination. However, work on the UCL led E-Curator project has demonstrated that heritage professional / digital object interaction can be efficiently delivered from coloured point cloud data where geometry, colour and point based surface normals are combined with splat based point rendering [3]. In our case this is achieved through the use of the Arius3D Pointstream software [4], but there are a growing number of point rendering alternatives.

3. PHOTOGRAMMETRIC SURVEY

Photogrammetric data capture follows a common methodology and is distinct in philosophy from most computer vision approaches in that the captured imagery and content are designed from a metric standpoint. First a geometry or network design is performed to ensure that the number and location of the images to be used are appropriate to produce accurate results. Several issues influence the quality of the final result. Key are: the availability of known control points and/or scale in the field of view of the imaging system, and the choice of image resolution and image scale to ensure that fine surface detail can be recorded for subsequent image matching and visualisation. If the network is well designed, it is this ability to record fine detail which allows photogrammetry to surpass the data that can be captured with the majority of laser scanning systems. This is because laser scanning systems are limited in spatial resolution both by their projected laser dot diameter, which is typically of the order of 50 to 250 microns, and by the capability of the motion system directing the scanner to capture in a spatially regular manner.
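Before committing to a network, these design choices can be roughed out numerically. The sketch below uses the standard normal-case relations for ground sample distance (GSD = p * Z / c) and depth precision (sigma_Z ~ Z^2 * sigma_image / (c * B)); all numeric values are illustrative assumptions rather than the parameters actually used in the surveys reported here.

```python
# Rough pre-survey estimate of ground sample distance (GSD) and depth precision
# for a normal-case stereo pair. All values below are illustrative assumptions.

def gsd(pixel_pitch_mm: float, focal_mm: float, distance_mm: float) -> float:
    """Object-space footprint of one pixel: GSD = p * Z / c."""
    return pixel_pitch_mm * distance_mm / focal_mm

def depth_precision(distance_mm: float, base_mm: float, focal_mm: float,
                    sigma_px: float, pixel_pitch_mm: float) -> float:
    """Normal-case depth precision: sigma_Z ~ (Z^2 / (c * B)) * sigma_image."""
    sigma_image_mm = sigma_px * pixel_pitch_mm
    return (distance_mm ** 2) / (focal_mm * base_mm) * sigma_image_mm

# Assumed survey parameters (not the configuration actually used in the paper):
pixel_pitch = 0.0085   # mm, full-frame DSLR class sensor
focal = 38.0           # mm, lens focal length
distance = 600.0       # mm, camera-to-object distance
base = 120.0           # mm, stereo base
sigma_px = 0.1         # px, optimistic image matching precision

print(f"GSD: {gsd(pixel_pitch, focal, distance) * 1000:.0f} micrometres")
print(f"Expected depth precision: "
      f"{depth_precision(distance, base, focal, sigma_px, pixel_pitch):.3f} mm")
```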
3.1 Orientation software description

To orient the image blocks and optionally carry out camera calibration, many different software tools are available from both commercial and research sources. For the work carried out in this paper the following tools were selected based on their availability in Padova, Parma and London.

PhotoModeler is a digital close-range photogrammetry program that allows 3D models to be constructed from digital images. The system is used for camera calibration and for the digital orientation and restitution of photographs during processing; PhotoModeler (PM) computes the orientation of each image by calculating the location and angle of the camera for each photo. PM was used principally because it is a well established solution; however, it should be noted that it also allows metric constraints, in the form of inter-distances between features, to be included in the calculation of the bundle adjustment. In our recording process circular targets were located around the object to be measured and their inter-distances measured with a digital calliper to accurately scale the resultant model and strengthen the photogrammetric adjustment.

Vision Measurement System (VMS) has been established for some 15 years as a tool for engineering photogrammetry, having been compared against many other metrology systems in both industrial and biological applications [5]. The software supports photogrammetric simulation, self calibrating bundle adjustment with both basic and extended parameter sets, fully automatic target image measurement and the ability to produce geometrically corrected images so that the subsequent matching process does not need to consider the geometric nuances of the best fitting camera calibration parameter sets. The software also supports the use of inter-distances and directions between targets and features as part of a rigorous bundle adjustment process.

EyeDEA has been developed over the last year at DICATeA. Unlike the previous software, EyeDEA is capable of automatically orientating a generic close-range image sequence using Structure from Motion (SFM) algorithms. A Graphical User Interface (GUI) allows the user to measure image points manually or semi-automatically (i.e. the user selects points on one image and the software automatically finds, through a matching procedure, the homologous points on the other images of the block); the user can perform a bundle adjustment of the whole image block or of just a part of it; the user can also define which images make up a sequence and then process them using our SFM code [6], which implements the SURF operator and SURF feature descriptors.

3.2 Matching software description

To generate the DSMs, both commercial and in-house matching systems were applied.

Dense Matcher is a software package developed by Parma University based on classical area-based stereo matching algorithms. The program detects homologies between a reference (master) image and its corresponding (slave) images, using different matching windows for the two images. The best results are obtained with the Least Squares Matching (LSM) method. In the case of multiple images, the Multi-Photo Geometrically Constrained Matching (MGCM) method introduced by [7] exploits the redundancy of the information present in more than two images.

SocetSet by BAE Systems is a digital mapping software application. The software works with digital airborne, satellite and terrestrial imagery and includes multi-sensor triangulation, several matching algorithms and the capability to generate a range of image based products [8]. For this research the software has been applied to generate a surface model from the photogrammetric images using its Next-Generation Automatic Terrain Extraction (NGATE) module. NGATE is an advanced tool for automatic DSM generation utilizing combined area and edge matching. The software is capable of matching each pixel in both forward and back matching processes and can deploy break lines to delineate major surface discontinuities.
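As an illustration of the area-based principle underlying these packages, the sketch below locates a master-image window in a slave image by normalized cross-correlation, the usual approximation that LSM then refines to sub-pixel accuracy with an affine window model. It is a simplified stand-in and does not reproduce the implementation of Dense Matcher, NGATE or MGCM.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grey-value windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_window(master: np.ndarray, slave: np.ndarray,
                 row: int, col: int, half: int = 7, search: int = 20):
    """Find the pixel in `slave` whose window best matches the `master` window
    centred at (row, col). Returns (best_row, best_col, best_score)."""
    template = master[row - half:row + half + 1, col - half:col + half + 1]
    best = (row, col, -1.0)
    for r in range(row - search, row + search + 1):
        for c in range(col - search, col + search + 1):
            window = slave[r - half:r + half + 1, c - half:c + half + 1]
            if window.shape != template.shape:
                continue                       # skip windows clipped by the image border
            score = ncc(template, window)
            if score > best[2]:
                best = (r, c, score)
    return best

# Tiny synthetic example: the slave image is the master shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
master = rng.random((100, 100))
slave = np.roll(master, shift=(3, 5), axis=(0, 1))
print(match_window(master, slave, row=50, col=50))   # expect (53, 55, ~1.0)
```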
3.3 The three case studies

All objects recorded in the following three case studies are from the UCL Petrie Museum of Egyptian Archaeology [9]. All object handling during the imaging campaigns was both approved and carried out by Petrie Museum specialists.

3.3.1 Canopic Jar Lid

The canopic jar lid (accession number UC30116) is an ancient Egyptian object dating back to the New Kingdom period (ca. 1200 BC). This small stone object (Figure 1) is ca. 20 cm x 20 cm x 20 cm. For photogrammetric recording a Nikon D700 (4357 x 2899 pixel) digital camera with a calibrated 38 mm lens was used. The sequence of 23 images with convergent camera attitudes follows a spiral path moving on an imaginary spherical surface centred on the object. Camera calibration was carried out by adopting the standard procedures provided within the PhotoModeler 5 software. The image orientations were performed with the PhotoModeler bundle adjustment by importing homologous points determined with EyeDEA. The scale of the photogrammetric model was defined by calliper measurements of the distances between targets applied to a base board on which the object was placed; these measured distances were introduced as constraints in the bundle adjustment. At the end of the block orientation process the reconstruction of the DSMs was carried out (Figure 2). Orientation data were imported into Dense Matcher together with the estimated ground points. To optimize the correlation process, image pairs meeting requirements on the base-to-distance ratio and on the convergence angles of the optical axes were selected as input to the matching process. In particular, features that appear only on very narrow base photographs have much lower accuracy than features on photographs with greater separation.

3.3.2 Funerary Cone

The funerary cone (accession number UC37585, Figure 3) is a small Egyptian object dating back to the New Kingdom (ca. 1200 BC). It is approximately 10 cm in diameter, nearly circular, moulded from clay and shows a relatively uniform texture. The survey of this object conveniently utilised a systematic set of images acquired under a 1030 mm hemispherical dome in use at UCL to study PTM (Polynomial Texture Mapping) [10]. The dome consists of a central camera mounting and 64 individual flash lights arranged in five tiers inside the dome. Placing the object in this hemisphere allows sequential image capture with highly controlled angular illumination. In this case images were taken with the lights of each of the five tiers fired simultaneously. A single Nikon D200 digital camera was mounted at the dome "north pole" above the object, which was placed on the horizontal baseboard (Figure 5). To facilitate stereo imaging, the object was translated systematically from left to right to create baselines from 4 cm up to a maximum of 12 cm. The object had been placed on a base board with coded targets, and the scale of the photogrammetric model was defined by calliper measurements made between the targets on the base board, used as constraints within the bundle adjustment. For this metric comparison test, the image pair chosen subtended the maximum baseline (12 cm) illuminated by the highest tier of lights.
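A minimal sketch of the scaling principle used for both of the smaller objects follows, in which a calliper-measured target separation fixes the absolute scale of an otherwise arbitrarily scaled model. In the surveys themselves the measured distances entered the bundle adjustment as constraints; all numbers below are made up for illustration.

```python
import numpy as np

# Post-hoc scaling of an arbitrarily scaled photogrammetric model using a known
# target-to-target distance measured with a digital calliper. The surveys in this
# paper applied such distances as constraints inside the bundle adjustment; this
# sketch shows the simpler equivalent idea with made-up numbers.

model_points = np.array([            # model coordinates in arbitrary units
    [0.000, 0.000, 0.000],           # target A
    [1.032, 0.005, 0.010],           # target B
    [0.498, 0.871, 0.021],           # an arbitrary surface point
])

calliper_ab_mm = 154.6               # assumed calliper distance between targets A and B

model_ab = np.linalg.norm(model_points[1] - model_points[0])
scale = calliper_ab_mm / model_ab    # mm per model unit
scaled_points_mm = model_points * scale

print(f"Scale factor: {scale:.3f} mm per model unit")
print(scaled_points_mm)
```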
Figure 1: Archival photograph of the Canopic Lid
Figure 2: Example DSM
Figure 3: Archival photograph of the funerary cone
Figure 4: DSM of the funerary cone
Figure 5: Schematic drawing of the PTM dome. Image courtesy of L. MacDonald

3.3.3 Cartonnage Mask

Figure 6: Archival photograph of the Cartonnage Mask
Figure 7: Detail of the resulting DSM from SocetSet, colour per vertex mesh, of the cartonnage head

The third case study records a mask (accession number UC45849, Figure 6) from the Ptolemaic Period (305 - 50 BC). The medium sized object (45 cm x 30 cm x 9 cm) is made from painted, plastered waste papyrus (cartonnage). Such head covers were laid upon the chest of mummified and wrapped bodies. In this case it was not possible to handle or lift the fragile object, so only a photogrammetric DSM was obtained. Despite being larger, the cartonnage was imaged using the same workflow as the smaller objects, the only difference being that a larger calibrated board was required. A Nikon D700 with a calibrated 35 mm lens was used to produce images from a systematic range of viewpoints under stable photographic lighting conditions with two indirect slave flashes. Images were orientated in VMS and subsequently matched with SocetSet. The final model was output in both point cloud and TIN (Triangulated Irregular Network) formats (Figure 7). The 3D colour model is currently part of an interactive museum exhibit at the British Library "Growing Knowledge" exhibition [11].

4. ANALYSIS AND COMPARISON

4.1 Technical and practical considerations

Whilst both laser scanning and photogrammetric solutions are capable of non-contact production of 3D colour models, which can be used for both scientific analysis and audience visualisation, the methods used to generate the models give rise to several key differences. Digital close range photogrammetry is a robust and established non-contact optical method for the documentation of museum artefacts. The equipment, consisting of a digital SLR camera (Nikon D700) and lighting equipment, is easily transportable to the museum and to fragile objects. It is capable of delivering high-resolution colour images ideal for documenting the current condition of, and damage to, the surface of the artefact, enabling the visualisation of details of the order of 50 µm. The use of an imaging dome can enhance imaging consistency and provide illumination control capable of supporting a range of RTI reconstruction techniques [12]; however the dimensions of the dome, or other illumination device, and the requirement for blackout are restrictive when compared to a simple photographic imaging configuration. Differences between the on-site time necessary to capture data and the off-line processing time are significant, with the laser scanner taking longer than the imaging techniques. Set against this is the immediacy of the scanned 3D model and the ability to check its completeness and quality in-situ. Such checking avoids the need to re-visit, but we note the continuing improvement in photogrammetric automation that will steadily erode this advantage. Visual inspection is a very important aspect for both museum professionals and audience engagement.
A key advantage of image based techniques is their higher resolution and colour fidelity, resulting in output that is more convincing to conservators and curators even though the underlying geometry might not be as detailed as a model sampled with laser scanning techniques. When scientifically captured, such images enable detailed inspection of damage and condition and have the necessary resolution to mimic the use of a low magnification hand lens. Within these tests, the cartonnage (section 3.3.3) provided an example of an extremely fragile object that could not be removed from its museum environment or even its supporting structure. In this case the imaging system needed to go to the object and be deployed in a manner that considered other users of the museum space.

4.2 DSM comparisons

To make the comparisons between datasets (photogrammetry / laser and photogrammetry / photogrammetry) 3D modelling software was used. After registration of the surfaces to be compared, the most significant statistical values of the distances between the two surfaces were calculated: mean, standard deviation and RMS (Root Mean Square) error (Table 1). To make the comparisons more readable, a colour deviation map was also produced.

4.2.1 Canopic Jar Lid

Table 1: Comparison table for the Canopic Jar Lid
  Mean (mm)   StdDev (mm)   RMS Error (mm)
  0.059       0.209         0.217

Figure 8: DSM deviation map for the Canopic Jar Lid

The 8 pairs of images with the best geometric configuration and producing the most complete DSMs were used; the DSMs were subsequently merged to produce a global triangulated mesh that was later re-scaled according to the reference length measured on the original object. The DSM obtained from the fusion of the different meshes has a certain degree of noise (~0.2 mm) and some incomplete areas due to occlusions, failure of the matching algorithm on parts of the lid with low image texture, and oblique viewing angles to the object surface. Most obvious is that the connection between the face and the neck of the canopic jar lid is unsatisfactory. The output DSM was compared with the Arius3D generated DSM as a reference. Since the two models were not generated in the same reference system an ICP alignment was required. Figure 8 shows the final DSM, which has been overlaid with a colour error map denoting the discrepancy from the laser scan reference data. Taking the scan model as correct, accuracy has been determined by projecting the mesh of the complete photogrammetric model onto the scanned model. Both the standard deviation and the RMS error are of the order of 0.2 mm.

4.2.2 Funerary Cone

The sequence in Figure 9 below shows three different types of comparison: the photogrammetric DSM generated by SocetSet against the reference scanning model (Figure 9.a), the Dense Matcher DSM against the SocetSet model (Figure 9.b) and the Dense Matcher model against the 3D laser scanner DSM (Figure 9.c). Each comparison is overlaid with a colour error map to show the discrepancies between the DSMs obtained by the different techniques. In this case study all models were first aligned using ICP algorithms.
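Once the models are registered, deviation statistics of the kind reported in Tables 1 to 3 can be computed from nearest-neighbour (cloud-to-cloud) distances between the test DSM and the reference. The paper's own comparisons were made in 3D modelling software against meshed surfaces; the sketch below is a simplified point-based version, with synthetic data standing in for real DSMs.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(test_points: np.ndarray, reference_points: np.ndarray):
    """Mean, standard deviation and RMS of nearest-neighbour distances from each
    test point to the reference cloud. Assumes the clouds are already registered
    (e.g. by ICP); signed surface distances would additionally need normals."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points, k=1)
    mean = distances.mean()
    std = distances.std()
    rms = np.sqrt((distances ** 2).mean())
    return mean, std, rms

# Synthetic stand-ins for a photogrammetric DSM and a laser scan reference:
rng = np.random.default_rng(1)
reference = rng.random((5000, 3)) * 100.0                   # "laser" points, mm
test = reference + rng.normal(0.0, 0.2, reference.shape)    # "photogrammetric" points, 0.2 mm noise per axis

mean, std, rms = deviation_stats(test, reference)
print(f"Mean {mean:.3f} mm, StdDev {std:.3f} mm, RMS {rms:.3f} mm")
```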
As can be seen, the discrepancies between the two photogrammetric models appear random, with deviations of the order of 0.1 mm that are attributable to differences between the matching algorithms (Figure 9.b). However, comparison between both photogrammetric models and the laser reference shows clear systematic departures. Given the parallel axes of the stereo images used to make the models, it is possible that uncorrected lens distortion has given rise to these differences, which represent an underlying curvature in both image based models [13]. This result highlights the need to produce check data to ensure the correctness of 3D reconstruction, since such an observed trend could be interpreted as a structural change in the object by a museum professional.

Table 2: Standard deviation of discrepancies after ICP (mm)
  DSM-SS/DSM-Laser   DSM-DM/DSM-SS   DSM-DM/DSM-Laser
  0.154              0.080           0.158

Figure 9: Deviation maps for the Funerary Cone: (a) SocetSet to laser scan, (b) Dense Matcher to SocetSet, (c) Dense Matcher to laser scan

Comparison across a small area in the centre of the DSMs, following local alignment, effectively removes the influence of the model curvature and highlights the degree of local discrepancy, which is in all cases of the order of 0.08 mm (Table 3).

Table 3: Standard deviation (mm) for the comparison of a small area of the DSMs in the centre of the Funerary Cone
  DSM-SS/DSM-Laser   DSM-DM/DSM-SS   DSM-DM/DSM-Laser
  0.078              0.072           0.081

4.2.3 Cartonnage Mask

In this case study the DSMs generated from single image pairs with SocetSet have been compared with the corresponding DSMs produced by Dense Matcher. Figure 10 shows the discrepancies between the two photogrammetric packages working on the same pair of images over the relatively flat lower portion of the mask under the same survey conditions. Both image matching software packages have produced encouraging results, demonstrating agreement of the order of 0.1 mm. Figure 11 relates to a second pair of images which include the more three-dimensional upper portion of the mask. Here agreement is lower (~0.25 mm) and clearly shows significant systematic deformations. These differences are attributable to the greater complexity of the surface and to the presence of occlusions and shadows around the nose and chin that make the image matching process challenging.

Figure 10: DSM deviation map for the Cartonnage Mask, lower part of the mask
Figure 11: DSM deviation map for the Cartonnage Mask, right side of the head

5. CONCLUSION

Comparison between the photogrammetric DSMs and those made with the 3D laser scanner demonstrates an overall agreement of the order of 0.2 mm. If systematic error in the photogrammetric data can be minimised, for example through the use of convergent axes, then internal precision estimates of the order of 0.02 mm should be achievable, even if this value is quite optimistic and the real value may be around 0.05 mm. Such data are entirely appropriate for the documentation of small and medium-sized archaeological finds. Most notable, however, is the ability of the digital imaging techniques to directly deliver compelling high resolution imagery at low cost which is readily accepted by museum professionals. The results highlight the dominant role played by photogrammetric orientation in the workflow if accurate DSMs are to be produced. In particular, comparison against the Arius3D DSM showed that, if control points are not properly designed and acquired, significant systematic and otherwise undetectable deformations can occur.
6. ACKNOWLEDGEMENTS

The authors would like to acknowledge the support of the UCL Petrie Museum and Prof. Lindsay MacDonald for support in imaging with the PTM dome.

7. REFERENCES

[1] Remondino, F. et al., 2008. Turning images into 3-D models. Developments and performance analysis of image matching for detailed surface reconstruction of heritage objects. IEEE Signal Processing Magazine, 25(4), pp. 55-65.
[2] Arius3D, 2011. Arius3D. Available at: http://www.arius3d.com/ [Accessed June 3, 2011].
[3] Robson, S. et al., 2008. Traceable storage and transmission of 3D colour scan data sets. In M. Ioannides & A. Addison, eds. Proceedings of the 14th International Conference on Virtual Systems and Multimedia, dedicated to Digital Heritage. Limassol, Cyprus: CIPA / Archaeolingua, Budapest, pp. 93-99.
[4] Arius3D, 2011. Pointstream Software. Available at: http://www.pointstream.net/ [Accessed June 3, 2011].
[5] Shortis, M. & Robson, S., 2001. Vision Measurement System - VMS. Available at: http://www.geomsoft.com/VMS/ [Accessed June 3, 2011].
[6] Roncella, R., Re, C. & Forlani, G., 2011. Comparison of two Structure and Motion strategies. In Proc. 4th ISPRS International Workshop 3D-ARCH: 3D Virtual Reconstruction and Visualization of Complex Architectures. Trento, Italy: ISPRS. Available at: http://www.isprs.org/proceedings/XXXVIII/5-W16/pdf/roncella_re_forlani.pdf.
[7] Gruen, A. & Baltsavias, E., 1988. Geometrically constrained multiphoto matching. Photogrammetric Engineering and Remote Sensing, 54(5), p. 633.
[8] BAE Systems, 2011. BAE Systems digital mapping software. Available at: http://www.socetgxp.com/content/products/socet-set.
[9] UCL Museums & Collections, 2011. The Petrie Museum of Egyptian Archaeology. Available at: http://www.ucl.ac.uk/museums/petrie [Accessed June 3, 2011].
[10] MacDonald, L. & Robson, S., 2010. Polynomial texture mapping and 3D representation. In International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Commission V Symposium, Newcastle upon Tyne, United Kingdom. Available at: http://www.isprs-newcastle2010.org/papers/159.pdf.
[11] British Library, Growing Knowledge exhibition. http://pressandpolicy.bl.uk/Press-Releases/Growing-Knowledge---Exhibition-Enters-a-Second-Phase-4aa.aspx.
[12] Malzbender, T., Gelb, D., Wolters, H. & Zuckerman, B., 2000. Enhancement of shape perception by surface reflectance transformation. HP Laboratories Technical Report HPL-2000-38, March 2000.
[13] Wackrow, R. & Chandler, J. H., 2011. Minimising systematic error surfaces in digital elevation models using oblique convergent imagery. The Photogrammetric Record, 26(133), pp. 16-31, March 2011.