Geoinformatics CTU FCE

NEW LOW-COST AUTOMATED PROCESSING OF DIGITAL PHOTOS FOR DOCUMENTATION AND VISUALISATION OF THE CULTURAL HERITAGE

Karel PAVELKA, Jan REZNICEK
Czech Technical University in Prague, Faculty of Civil Engineering, Laboratory of Photogrammetry,
Thakurova 7, Prague 6, 166 29, Czech Republic
Tel. +420 224 354 951, E-mail: pavelka@fsv.cvut.cz

Keywords: Cultural heritage, photogrammetry, point cloud, 3D modeling, camera calibration

Abstract: 3D scanning is nowadays a common and fast technique. A variety of 3D scanner types is available, differing in precision and intended use. From the ordinary user's point of view, all these instruments are very expensive and require special software for processing the measured data. Transportation of 3D scanners also poses a problem: duty or special taxes have to be paid for transport outside the EU, and there is a risk of damage to these very expensive instruments during dismantling and transport, after which recalibration is needed. For this reason, a simple and automated documentation technique using close range photogrammetry is very important. This paper describes our experience with software solutions for automatic image correlation techniques and their utilization in close range photogrammetry and the documentation of historical objects. A non-photogrammetric approach, which often gives very good outputs, is described in the last part of this contribution. Image correlation proceeds well only on appropriate parts of the documented objects and depends on the number of images, their overlap and configuration, the radiometric quality of the photos, and the surface texture.

1. INTRODUCTION

Documentation and visualization of historic monuments has long been one of the major components of heritage conservation. Traditionally, it was the domain of geodesy and photogrammetry in structural-historical research.
At the turn of the century, 3D scanning was added. The latter technology quickly became one of the major methods for documenting complex shapes. Generally, 3D scanning covers a large variety of techniques for creating data represented by point clouds. The best known method is laser scanning using ranging scanners, which can automatically collect data (object points in real 3D coordinates) with millimetre precision over a large area. Most present methods share several important characteristics: they are slow, difficult to operate, expensive, or all of these together. Such a system may not give accurate and complete results from the user's point of view. The requirements to be satisfied are a low-cost system, simplicity, easy transportation and, if possible, fully automatic data processing. Of course, the system must also provide the basic common functions for using the data. In this regard, photographic systems are suitable.

2. LASER SCANNING

2.1 Requirements and measurements

Laser scanning was used to measure non-selected data, which means we can only influence the density of measurements in a regular grid. The main aim of laser scanning is to create an accurate 3D surface model. Generally, laser scanners are very expensive and sensitive instruments; thus there can be problems with transportation and calibration at distant sites. Very sophisticated and expensive software is necessary for processing the measurements. The measured point clouds are processed by meshing functions into a mesh of irregular triangles. Current laser scanners are equipped with calibrated digital cameras for texturing the laser scanning model with photogrammetric images. Processing the measurements results in a rendered virtual model of the documented object. For object modeling, hundreds of millions of 3D points are typically used.
Figure 1: Complex documentation of the Charles Bridge vault in Prague obtained by the Callidus laser scanner

Figure 2: Laser scanning system Leica HDS3000: scanning of a baroque statue

2.2 Current photogrammetric systems

Photogrammetry in the classical approach offers selected measured data. Stereo-photogrammetry or intersection photogrammetry is used. Important object points can be defined in a model by mouse-clicking. Precise stereo-photogrammetric systems require special hardware and software and, generally, they are not intended for non-professional users. In aerial stereo-photogrammetry, a variety of automatic processes is available: automatic target matching for marked control points or fiducial finding, automatic tie-point finding for Automatic Aerial Triangulation (AAT), and image correlation techniques for deriving a Digital Surface Model (DSM) from stereo-pairs with known exterior orientation. All these procedures are part of special stereo-photogrammetric software or software for AAT and have been in use since the 1990s, initially on Unix workstations and later on today's common personal computers. Intersection photogrammetry was revived in the eighties, initially as a method using analogue photogrammetric images taken by a hand-held calibrated film camera, a tablet, and software. Later the software was adapted for digital non-professional cameras; it has always been necessary to calibrate the camera to obtain appropriate results. Certainly, the most popular software in this category is PhotoModeler, a low-cost solution for close range intersection photogrammetry.
At first, as low-cost software, it had only limited possibilities for image data processing. In recent years, the processing options have been significantly improved by the increasing accessibility of quality computer technology and related software development, as well as by the significant increase in digital camera quality and resolution. Interesting results based on image correlation applied to high-quality photos taken by calibrated cameras have appeared in the last two years.

3. NEW PHOTOGRAMMETRIC PROCESS

In the Laboratory of Photogrammetry of the Czech Technical University (CTU) in Prague, we have developed our own experimental system based only on a common digital camera and image correlation software. The optical correlation scanner (OKS) consists of one calibrated camera, a photo-base with a moving camera holder, a tripod, and software written in the Matlab language. OKS was designed to be universal; by changing the base length, it is possible to measure both short-distance and long-distance objects (the base is variable up to 1 metre). In order to obtain good correlation, the images are taken with a very short step (usually 5 or 10 cm). The basic idea of this approach is that each point should be matched on more than two images. Using two images only, the result cannot be verified; a third (or further) image point correspondence is necessary for validating the correctness of a point match, and the overall precision and reliability of the match increases with the number of image point correspondences. The output of this process is a textured point cloud, which can be meshed and processed using the software usually used for laser scanning.
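The multi-image verification idea can be illustrated with a minimal, self-contained sketch (illustrative Python, not the actual Matlab implementation of OKS): a candidate correspondence found by normalized cross-correlation (NCC) between two image patches is accepted only if a patch from a third image confirms it.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized grey-value patches
    (given as flattened lists). Returns a value in [-1, 1]; 1 is a perfect match."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [v - mean_a for v in patch_a]
    db = [v - mean_b for v in patch_b]
    denom = math.sqrt(sum(v * v for v in da)) * math.sqrt(sum(v * v for v in db))
    if denom == 0:
        return 0.0  # textureless patch: no reliable correlation
    return sum(x * y for x, y in zip(da, db)) / denom

def verified_match(patch_ref, patch_second, patch_third, threshold=0.8):
    """Accept a candidate correspondence only if the reference patch
    correlates with BOTH other views above the threshold."""
    return (ncc(patch_ref, patch_second) >= threshold
            and ncc(patch_ref, patch_third) >= threshold)
```

The threshold of 0.8 is an illustrative value; in practice it would be tuned to image noise and texture.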
Figure 3: Image sequence for improving the image correlation process

Figure 4: Optical Correlation System (OKS)

Figure 5: Final 3D model of one part of the relief with texture (Baroque-age relief near Velenice village in the Czech Republic)

3.1 PhotoModeler Scanner

In the PhotoModeler Scanner software, the photogrammetric method of multi-image intersection with bundle adjustment has been chosen. The intersection method demands a set of images captured around the object (or partially around it) from different positions, so that the measured points are visible on at least two images taken from different locations. For exterior orientation and transformation to a local or national coordinate reference system, Ground Control Points (GCP) are needed; alternatively, it is possible to enter a precisely defined distance for scaling only, which is often sufficient for the documentation of small objects. On the other hand, image correlation works well on images with parallel axes. It was therefore necessary to solve the problem of integrating images taken with parallel axes, which are unsuitable for intersection photogrammetry. The first functioning version was launched at the end of 2009, but it was still far from fully usable in practice: computing all suitable image combinations took long hours on a common new computer, the process of referencing all images was still manual, and the outputs were often heavily contaminated by noise. The new 2010 version is much better: the computing time is significantly reduced and the image orientation process (referencing) involves automatic procedures.
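The geometric core of the intersection method, computing a 3D point from two oriented image rays, can be sketched as follows. This is an illustrative midpoint triangulation under the usual closest-point formulation, not PhotoModeler's actual bundle adjustment:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Intersect two image rays (camera centre c, direction d, each a
    3-tuple) by taking the midpoint of the shortest segment between them."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def mul(a, s): return tuple(x * s for x in a)

    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel: no unique intersection")
    # parameters of the closest points on each ray
    t1 = (e * c - b * f) / denom
    t2 = (b * e - a * f) / denom
    p1 = add(c1, mul(d1, t1))
    p2 = add(c2, mul(d2, t2))
    return mul(add(p1, p2), 0.5)
```

With noisy real measurements the two rays never intersect exactly; the residual distance between p1 and p2 is a useful quality indicator for the match.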
It must be said that this photogrammetric system is based on the physical and mathematical foundations of photography, central projection, and exact camera calibration.

3.2 Case project: Sarcophagus

As a small case project, a set of 15 images taken by a calibrated 8-megapixel Canon 20D was used. For a simple documentation task, the decorated sarcophagus beside the Hercules Temple on Jabal al-Qal'a, also called the Amman Citadel, was selected. All images were taken hand-held within only two minutes. After manual referencing of all images, only those with approximately parallel axes were processed to a point cloud. Within a few minutes, about 500 thousand object points were captured from one decorated side. The created point cloud was rendered with quality textures from the images. Not all the images could be processed by automated image referencing: the overlap between images was not sufficient for creating a complete model, so only 8 images of the decorated part were processed. The PhotoModeler software always computes the point cloud from a single pair of images, without control or verification and with many errors (noise). The software options are limited, and it is clear that they will be enhanced.

Figure 6: Sarcophagus

3.3 Camera calibration

For image acquisition, a Canon 20D digital camera with a Canon 10-22 mm lens was used. The use of an ultra-wide lens required very precise calibration, for which several software solutions were used (PhotoModeler v6 and our own software written in the Matlab language). The parameters of focal length, principal point, radial and decentering distortion, and chromatic aberration were calibrated for the lenses used. The planar calibration field for the PhotoModeler software, extended with four points in space, was captured from 16 different positions.
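For illustration, the radial and decentering distortion mentioned above is conventionally modelled by the Brown distortion polynomial; a minimal sketch (the coefficient names k1, k2, p1, p2 are the conventional ones, not taken from our actual Matlab code):

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply the Brown radial (k1, k2) and decentering (p1, p2) distortion
    model to a normalized image point (x, y), measured from the principal
    point. Returns the distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Chromatic aberration can then be treated, as described below for the per-channel calibration, by estimating a separate coefficient set for each colour channel; image idealization inverts this mapping numerically.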
The images were first converted back to the original Bayer scheme with the Dcraw software and then separated into three independent greyscale images (R, G, B), each with a quarter of the original resolution. The calibration was then performed in PhotoModeler independently for each channel, and the resulting parameters were saved to a common input file for further computation in Matlab. Next, the parameters were balanced so that all channels share one common focal length and so that the whole image format remains usable after distortion removal (similar to the idealization process in PhotoModeler, which, however, does not handle chromatic aberration). Finally, calibration protocols were generated in Matlab and used for image optimization (idealization, i.e. the creation of a new photo set free of image distortion) of the photogrammetric images.

Figure 7: Output from the PhotoModeler software

4. NON-PHOTOGRAMMETRICAL PROCESS

4.1 Present state of the art

The idea of deriving 3D information from planar images is old and was used successfully in classical photogrammetry. There is, however, not only a photogrammetric view of the matter; the image can also be regarded as a signal, in which case processing is possible without knowing the details of the camera used. Automatic finding of correspondences between images (SIFT, the Scale Invariant Feature Transform) is used both for panorama stitching and for the automatic reconstruction of 3D scenes. These newly developed methods appeared first (around the year 2000) in image processing theory and later in low-cost photogrammetry. The new approach is based mainly on automatic referencing of images to a model and automatic extraction of geometric information. Processing a plain set of common images into a point cloud by image correlation techniques appeared after the year 2000.
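Correspondence candidates from SIFT-like descriptors are typically filtered with Lowe's ratio test: a match is accepted only if its nearest neighbour is clearly closer than the second nearest. A minimal sketch with synthetic descriptors (not an actual SIFT implementation):

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors (lists of equal-length float vectors) from
    image A to image B using Lowe's ratio test on squared distances."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, da in enumerate(desc_a):
        # distances from descriptor i to every descriptor in B, sorted
        dists = sorted((dist2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2:
            best, second = dists[0], dists[1]
            # compare squared distances against the squared ratio
            if best[0] < (ratio ** 2) * second[0]:
                matches.append((i, best[1]))
    return matches
```

The brute-force nearest-neighbour search shown here is O(n^2); real pipelines use approximate search trees, but the acceptance criterion is the same.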
Nowadays, it is possible to find free software solutions or web services such as Photosynth (from Microsoft), which creates panorama images or a virtual model using free client software; after the images are processed, the virtual model is published on a web site. Another system, the ARC 3D Webservice (Katholieke Universiteit Leuven), also builds a 3D model from an image sequence taken by an uncalibrated digital camera. This system consists of two modules: an Image Uploader for uploading images to the server and a Modelviewer for visualising the processed object and exporting it to other formats (for example VRML). These services are very easy to use, but they are usually not intended for professional processing; above all, they indicate the current possibilities of automatic photo processing.

4.2 Technical process for automatic processing

In the Laboratory of Photogrammetry of the CTU, a special workflow for easy and simple documentation of historical objects has been developed, the aim being to provide simple and cost-effective documentation and presentation of monuments. After the input of images (which can be taken by various unknown cameras), the Bundler software is used. Using the SIFT algorithm, key points are identified (about 2000), and for each image it is determined whether its key points appear in other images (matching). Automatic finding of correspondences between image pairs (SIFT) is one way to find the key points. The set of points arising from the key points in the images is referred to as a thin (sparse) point cloud. During the creation of the thin point cloud by SfM algorithms (Structure from Motion for unordered image collections), the interior and exterior camera parameters are computed.
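Bundler writes the recovered cameras and the sparse point cloud to a `bundle.out` file (v0.3 format: a header line, then the camera and point counts, then for each camera one `<f k1 k2>` line, three rotation-matrix rows, and one translation row). A minimal reader for the camera blocks might look like this; it is an illustrative sketch, not part of our workflow software:

```python
def read_bundle_cameras(text):
    """Parse camera blocks from the text of a Bundler bundle.out (v0.3) file.
    Returns a list of dicts with focal length, radial distortion, R and t."""
    lines = [l for l in text.splitlines() if l.strip() and not l.startswith("#")]
    num_cameras, num_points = map(int, lines[0].split())
    cameras, pos = [], 1
    for _ in range(num_cameras):
        f, k1, k2 = map(float, lines[pos].split())          # <f k1 k2>
        rot = [list(map(float, lines[pos + i].split())) for i in range(1, 4)]
        t = list(map(float, lines[pos + 4].split()))        # translation row
        cameras.append({"f": f, "k": (k1, k2), "R": rot, "t": t})
        pos += 5
    return cameras
```

The 3D points with their colours and view lists follow the camera blocks in the same file; a full reader would continue parsing from `pos`.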
PMVS (Patch-based Multi-View Stereo) then searches for the same places in different photos and adds points to the thin point cloud. A dense point cloud is created, on which the fine structure of the object is visible. The point cloud arising from this procedure was further edited in MeshLab, an open source, portable, and extensible system for processing and editing unstructured meshes. MeshLab supports the processing of unstructured models arising, for example, from 3D scanning, and provides a set of tools for editing, cleaning, healing, inspecting, and rendering this type of data. For further use it is necessary to reconstruct a surface from the points automatically. Another option is to load the points into a graphics studio (CAD or other special software) and trace out all the elements; however, such complicated programs are meant for experts and are usually expensive. Both options require additional software and knowledge of how to work with it, which strongly limits further use. In order to address professionals (archaeologists and experts in monument care), it is necessary to create a tool that can load the data (points) and handle them easily. We have created such easy-to-use and intuitive software, and this forms the final part of the process described above.

5. RESULTS

Both photogrammetrically and non-photogrammetrically oriented techniques for automated 3D model derivation were tested. The new 2010 version of PhotoModeler catches up with the recent trend towards automation. Its big disadvantage, however, is that it always creates the point cloud from two images without any control, which results in a considerable amount of noise and model distortion. This problem is solved by the OKS system developed at the CTU in Prague, which computes all points from all appropriate images, but which needs special equipment.
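The point cloud cleaning performed in MeshLab (section 4.2) can be imitated by a simple statistical outlier filter; a brute-force sketch of the idea (our own illustration, not MeshLab's actual filter):

```python
import statistics

def remove_outliers(points, k=3, std_factor=1.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean by more than std_factor standard deviations.
    O(n^2) brute force; fine for small demonstration clouds."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    mean_knn = []
    for p in points:
        d = sorted(dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    limit = mu + std_factor * sigma
    return [p for p, m in zip(points, mean_knn) if m <= limit]
```

Production tools use spatial indices (k-d trees) for the neighbour search, but the acceptance rule is essentially this one.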
The non-photogrammetric approach also computes the point cloud from all appropriate images, and it requires neither special equipment nor camera parameters. The system does, however, need a large number of images with big overlap. In our tests it gave the best and most fully automated results. On the other hand, it is not one complete system; it consists of several independent steps.

6. DISCUSSION

Using non-metric digital cameras for photogrammetry has revealed many problems in data processing, above all in precise camera calibration, the large number of images, and the data processing itself. Using photogrammetric images for texturing the final 3D model is very important for visualization, but there are no special functions for the radiometric correction of photos, which are common e.g. when creating an orthophoto from aerial images. Scanning large and complicated objects brings problems with inaccessible parts and details lost in the data noise. Better detail resolution can be reached with better quality images, a better configuration of images taken in good illumination (very important), and, of course, appropriate software, which is progressing quickly nowadays; with modern computers, today's techniques and computation speeds will soon be out of date. Both automatic photogrammetry-based and non-photogrammetry-based techniques can be used for simple documentation and the creation of a 3D model. The advantage is that the whole procedure is based only on images taken by a common camera (from the photogrammetric point of view, by a calibrated camera). This seems very simple, but many issues remain to be addressed; thus only elementary objects with good texture can currently be processed easily into a 3D model.

Figure 8: Scheme of the non-photogrammetrical system for 3D modelling

7.
CONCLUSION

The main result of the process described above is an accurate triangular surface model. The high resolution views and 3D models were generated from digital photos only. The process based on classical intersection photogrammetry (represented here by PhotoModeler) has recently been supplemented with functions for fully automated image referencing and automated point cloud generation. In recent years, another, non-photogrammetric, approach has been applied: an automatic image processing method has been created, making use of signal processing, computer vision, and image processing. The main goal of this paper was to describe and test these new documentation possibilities based on automated image processing. The use of the new automated technologies has shown that they are useful for the documentation of small historical objects or in archaeology, and that they can also be used by non-professionals; however, there are still many problems with noise, model texturing, and data joining.

8. ACKNOWLEDGEMENT

This project is sponsored by the Czech Ministry of Education Research Scheme MSM6840770040 (CTU Nr. 34-07401).

(Scheme of Figure 8: input of images → list of images + EXIF data → Bundler → point cloud + camera parameters → PMVS → dense point cloud → MeshLab: meshing, geometry → Output 1: software for transformation to VRML; Output 2: software for purging and coloured geometry; Output 3: software for visualisation and data usage.)

Figure 9: Point cloud created from images

Figure 10: Meshed point cloud rendered from original images

9. REFERENCES

[1] Lowe, D.G.: Object recognition from local scale-invariant features. International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157,
1999.
[2] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. http://www.cs.ubc.ca/~lowe/keypoints/
[3] Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd Edition. Cambridge University Press, 2004.
[4] Brown, D., Lowe, D.G.: Unsupervised 3D Object Recognition and Reconstruction in Unordered Datasets. http://research.microsoft.com/~brown/papers/3dim05.pdf
[5] Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust Wide Baseline Stereo from Maximally Stable Extremal Regions. British Machine Vision Conference, 2002.
[6] Mach, L.: SIFT: Scale Invariant Feature Transform - automatic finding of correspondences between a pair of images. http://mach.matfyz.cz/sift
[7] Pavelka, K., Reznicek, J.: Culture Heritage Preservation With Optical Correlation Scanner. 22nd CIPA Symposium, October 11-15, 2009, Kyoto, Japan.
[8] Reznicek, J., Pavelka, K.: New Low-cost 3D Scanning Techniques for Cultural Heritage Documentation. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B5, Beijing, 2008.
[9] Vergauwen, M.: ARC 3D Webservice [online]. 2009.
[10] Snavely, N., Seitz, S.M., Szeliski, R.: Photo Tourism: Exploring image collections in 3D. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2006), 2006.
[11] Snavely, N., Seitz, S.M., Szeliski, R.: Modeling the World from Internet Photo Collections. International Journal of Computer Vision, 2007.
[12] Dellaert, F., Seitz, S., Thorpe, C., Thrun, S.: Structure from Motion without Correspondence. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2000.
[13] de Berg, M. et al.: Computational Geometry: Algorithms and Applications. 2nd edition. Berlin: Springer, 2000. 367 p. ISBN 3-540-65620-0.
[14] Pavelka, K., Řezníček, J., Hanzalová, K.,
Prunarová, L.: Non Expensive 3D Documentation and Modelling of Historical Objects and Archaeological Artefacts by Using Close Range Photogrammetry. Workshop on Documentation and Conservation of Stone Deterioration in Heritage Places 2010 [CD-ROM]. Amman: CulTech for Archeology and Conservation, 2010.
[15] Koska, B.: Using Unusual Technologies Combination for Madonna Statue Replication. Proceedings of the 23rd CIPA Symposium [CD-ROM]. Prague: CTU, Faculty of Civil Engineering, Department of Mapping and Cartography, 2011, pp. 59-66. ISBN 978-80-01-04885-6.
[16] Pavelka, K., Řezníček, J., Koska, B.: Complex documentation of the bronze equestrian statue of Jan Zizka by using photogrammetry and laser scanning. Workshop on Documentation and Conservation of Stone Deterioration in Heritage Places 2010 [CD-ROM]. Amman: CulTech for Archeology and Conservation, 2010.
[17] Křemen, T., Koska, B., Pospíšil, J.: Verification of Laser Scanning Systems Quality. XXIII International FIG Congress "Shaping the Change" [CD-ROM]. Munich: FIG, 2006. ISBN 87-90907-52-3.
[18] Koska, B., Křemen, T., Štroner, M., Pospíšil, J., Kašpar, M.: Development of Rotation Scanner, Testing of Laser Scanners. Ingeo 2004 [CD-ROM]. Bratislava: Slovak University of Technology, Faculty of Civil Engineering, 2004. ISBN 87-90907-34-5.
[19] Svatušková, J.: Possibilities of new methods for documentation and presentation of historic objects. PhD thesis, CTU Prague, Faculty of Civil Engineering, 2011.