A metrological approach for multispectral photogrammetry

ACTA IMEKO
ISSN: 2221-870X
December 2021, Volume 10, Number 4, 111 - 116

Leila Es Sebar1, Luca Lombardo2, Marco Parvis2, Emma Angelini1, Alessandro Re3,4, Sabrina Grassini1

1 Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
2 Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
3 Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, 10125, Turin, Italy
4 INFN, Sezione di Torino, via Pietro Giuria 1, 10125, Turin, Italy

Section: RESEARCH PAPER

Keywords: Photogrammetry; multispectral imaging; reference object; metrology; cultural heritage

Citation: Leila Es Sebar, Luca Lombardo, Marco Parvis, Emma Angelini, Alessandro Re, Sabrina Grassini, A metrological approach for multispectral photogrammetry, Acta IMEKO, vol. 10, no. 4, article 19, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-19

Section Editors: Umberto Cesaro and Pasquale Arpaia, University of Naples Federico II, Italy

Received November 4, 2021; In final form December 6, 2021; Published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Leila Es Sebar, e-mail: leila.essebar@polito.it

ABSTRACT

This paper presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, for application in the cultural heritage field. The reference object was 3D printed, with a nominal manufacturing uncertainty of the order of 0.01 mm. The object was realized as a dodecahedron, and a different pictorial preparation was inserted in each face. The preparations include several pigments, binders, and varnishes, chosen to be representative of the materials and techniques used historically by artists. Since the shape, size, and uncertainty of the reference object are known, it can be used as a reference to evaluate the quality of a 3D model from the metric point of view. In particular, dimensional precision and accuracy are verified using the standard deviation of measurements acquired on the reference object and on the final 3D model. In addition, since the materials employed are UV-fluorescent, the object can also be used as a reference for UV-induced Visible Luminescence (UVL) acquisition. Results obtained with visible-reflected and UVL images are presented and discussed.

1. INTRODUCTION

In the last few years, digitalization techniques and related 3D imaging systems have acquired major importance in several fields, such as industry, medicine, civil engineering, architecture, and cultural heritage. In the cultural heritage field, in particular, such technologies can contribute to conservation, data archiving, enhancement, and web sharing [1]-[3]. The existing three-dimensional imaging systems, which acquire measurements through light waves, can be classified on the basis of the ranging principle employed [4]. Among the several techniques, photogrammetry is a remote, image-based technique that has become widely diffused. It allows the collection of reliable 3D data about an object's surface (color and texture) and geometry without requiring any mechanical interaction with the object itself [5]. Indeed, a 3D model is constructed starting from digital images of the object, leading to the creation of its virtual replica. With the increasing diffusion of digitalization techniques and the growing number of users aiming to create 3D models, several concerns have been raised about the results that can be achieved. Even though digitalization practices are widely diffused and can provide realistic replicas of an object, many factors affect the uncertainty of the final 3D models, and they must be further investigated. Some authors have summarized the most important factors that affect the uncertainty in 3D imaging and modeling systems [4], [6]. Nevertheless, the evaluation of the precision and accuracy of 3D models is not yet supported by internationally recognized standards, which are of major importance to avoid archiving and sharing wrong information [7].
Some publications have presented different test artifacts or new systems that can be used to test the performance of the photogrammetric approach [6], [8]. In some cases, the accuracy of a final model is determined by comparing the results with reference data acquired with active systems such as laser scanners [8]-[10]. In other cases, the results are evaluated on the basis of statistical parameters generated by the reconstruction software [11]. Nevertheless, there is no unique and recognized way to define the quality of a reconstructed model.
This paper presents preliminary but promising results, achieved through the design and realization of a low-cost 3D printed reference object specifically designed for assessing the position accuracy and dimensional uncertainty of photogrammetric surveys. The reference object, realized in collaboration with the "Centro Conservazione e Restauro La Venaria Reale", was created with special insets hosting different pictorial preparations, chosen to be representative of the materials and techniques used historically by artists. The object can therefore also serve as a reference sample for multispectral imaging applications, a widely diffused 2D technique for the characterization and identification of historic-artistic materials [12], [13]. Generally, photogrammetry and multispectral imaging are applied as separate techniques, but their combined application is becoming more and more frequent in the cultural heritage field. Indeed, this approach exploits the benefit of mapping multispectral imaging data onto 3D models for a complete documentation of the conservation state of an object [14], [15]. Even though several references can be used in different fields [16]-[18], in this case a specific reference object has to be employed.
The reference object was tested using a photogrammetric measuring system that allows the acquisition of both Visible-reflected (VIS) and UV-induced Luminescence (UVL) images. In particular, the experimental setup is composed of an ad-hoc modified digital camera capable of working in a wide spectral range (350-1100 nm), several different lighting sources and filters, and an automatic rotating platform.
Meshroom [19], an open-source software, was used to perform the photogrammetric reconstruction of the reference object. The obtained results were then compared with the physical 3D object in order to estimate the accuracy of the final 3D replica. In addition, a comparison of two different approaches for the realization of the UVL model is presented.

2. 3D REFERENCE OBJECT

The reference object consists of a 3D printed polymeric dodecahedron. Figure 1 shows the prototype, which was designed with the Wings 3D [20] software. The reference object was then realized with a ProJet 2500 Plus (3D Systems) printer, employing the VisiJet® M2R-GRY resin. The printer can produce objects with a nominal uncertainty of the order of 0.01 mm. The reference object was designed to be suitable for photogrammetric surveys; its shape was specifically conceived to provide information on the geometrical accuracy of the reconstruction. Furthermore, the object was designed with twelve pentagonal slots, in which different pigment preparations can be inserted. In particular, twelve different pigments were chosen to be representative of the principal artistic materials. All the pigments employed were provided by Kremer Pigmente GmbH & Co. KG [21]: lead white, white barium sulfate, bone black, magnetite black, raw sienna Italian, lead-tin yellow, minium, lac dye, azurite, malachite, verdigris, and lapis lazuli.
The twelve painting preparations were realized in order to reproduce the techniques employed in real historical artifacts. Therefore, each one consists of several consecutive layers: support, preparation layer, underdrawings, pictorial layer, and varnishes. The preparation layer is made of stucco, prepared by adding gypsum until saturation to a solution of water and animal glue (14:1 weight ratio). Then, three different underdrawings were applied directly on top of the stucco layer, using charcoal, sanguine, and iron gall ink. These first two layers, namely the stucco preparation and the underdrawings, are the same for all twelve mock-ups. Each of the twelve sections hosts a single pigment/dye in nine different combinations: each section is divided into three subsections according to the binder employed (Arabic gum, egg tempera, and linseed oil), and each of these subsections is further divided into three areas, one coated with a historical varnish (i.e. mastic), one with a modern varnish, and one left unprotected. This choice is of particular interest for UVL imaging because it allows discriminating between the fluorescence of the different pictorial preparations with and without varnish. Figure 2 shows the proposed reference object and a scheme of the pictorial preparation.

Figure 1. Drawing of the reference object realized by means of Wings 3D software.

Figure 2. On the left: top view and scheme of the pictorial preparation. On the right: the proposed reference object.
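To make the face layout explicit (one pigment per face, three binders, each with a mastic-varnished, modern-varnished, and unprotected area), the following minimal Python sketch enumerates the nine combinations available on each face. The lists and the dictionary structure are only an illustrative bookkeeping aid, not part of the measurement procedure.

```python
from itertools import product

# Pigments, binders, and coating conditions as described for the reference object.
pigments = [
    "lead white", "white barium sulfate", "bone black", "magnetite black",
    "raw sienna Italian", "lead-tin yellow", "minium", "lac dye",
    "azurite", "malachite", "verdigris", "lapis lazuli",
]
binders = ["Arabic gum", "egg tempera", "linseed oil"]
coatings = ["mastic varnish", "modern varnish", "unprotected"]

# Each dodecahedron face hosts one pigment in 3 binders x 3 coatings = 9 areas.
face_layout = {
    pigment: list(product(binders, coatings))  # nine (binder, coating) pairs
    for pigment in pigments
}

assert all(len(areas) == 9 for areas in face_layout.values())
print(f"{len(face_layout)} faces, {sum(map(len, face_layout.values()))} painted areas in total")
```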
3. 3D ACQUISITION SYSTEM

The system employed to acquire the images of the reference object is composed of a modified digital camera, a set of suitable light sources, and an automatic rotating platform.

3.1. Acquisition setup

The images were acquired with a Fujifilm X-T30 digital camera coupled with a Minolta MC Rokkor-PF 50 mm f/1.7 lens. The camera was modified to be suitable for ultraviolet-visible-infrared photography, in the range from 350 nm to 1100 nm. For both VIS and UVL measurements the camera was equipped with a Hoya IR UV-cut filter and a Schott BG40 filter. The image acquisition was performed in a room that could be made completely dark, in order to avoid any interference from unwanted light sources. Filtered UV-A LED sources (365 nm) were employed for the acquisition of the UVL images, while standard halogen lamps were used for the VIS images. Table 1 reports the main parameters employed in the image acquisition and Figure 3 shows the complete acquisition setup [22].

Figure 3. Photogrammetry system for multispectral image acquisition.

Table 1. The employed acquisition parameters.
Parameter          Value
Image size         6240 × 4160
Sensor size/type   23.5 mm × 15.6 mm (APS-C) / X-Trans CMOS
Effective pixels   26 Megapixels
Image format       .RAF
ISO                200
Focal length       50 mm
Aperture           f/16
Shutter speed      2.0 s
Acquired images    72 (3 revolutions of 24 images)

The acquisition system is completed by the rotating platform, which allows images of the object to be taken automatically at specified rotation angles. The platform is composed of a circular rotating plate that hosts the object. The plate is connected to a stepper motor (type NEMA 17) through a suitable gearbox (1:18 ratio) in order to increase the torque and the angular resolution and to reduce the rotation speed. A stepper motor driver chip (A4988, Allegro MicroSystems) is used to drive the motor and to move the platform to specified angular positions with a resolution of 0.1°. The platform is controlled by an Arduino Uno development board connected to a computer, where a dedicated application allows the user to set all the acquisition parameters (such as image number, angle, and speed) and to carry out some basic pre-processing on the acquired images. Furthermore, the platform features a shot-trigger output which, when connected to the camera, automatically triggers a shot at each of the specified object positions. This greatly simplifies the image acquisition procedure and dramatically reduces manual intervention.
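To make the acquisition geometry concrete, the snippet below is a minimal Python sketch of possible host-side logic for the rotating platform: it converts an angular increment into motor steps and generates one revolution of 24 stops at 15° increments, triggering a shot at each stop. The 1.8° full-step angle is the typical NEMA 17 value and is an assumption, as are the serial port name and the command strings, since the actual firmware protocol is not described in the paper; only the 1:18 gear ratio and the resulting 0.1° resolution come from the text.

```python
import serial  # pySerial; port name and command protocol below are assumptions

MOTOR_STEP_DEG = 1.8      # typical NEMA 17 full-step angle (assumed)
GEAR_RATIO = 18           # 1:18 gearbox described in the paper
PLATFORM_RES_DEG = MOTOR_STEP_DEG / GEAR_RATIO  # = 0.1 deg per full step

def steps_for_angle(angle_deg: float) -> int:
    """Number of motor full steps needed to rotate the platform by angle_deg."""
    return round(angle_deg / PLATFORM_RES_DEG)

def acquire_revolution(port: str = "/dev/ttyACM0", step_deg: float = 15.0,
                       n_images: int = 24) -> None:
    """Rotate the platform in n_images increments and trigger the camera at each stop."""
    with serial.Serial(port, 115200, timeout=2) as link:
        for _ in range(n_images):
            # Hypothetical firmware commands: 'M<steps>' moves, 'T' fires the shot trigger.
            link.write(f"M{steps_for_angle(step_deg)}\n".encode())
            link.readline()            # wait for the move-done acknowledgement
            link.write(b"T\n")         # fire the camera shot-trigger output
            link.readline()            # wait for the shot acknowledgement

if __name__ == "__main__":
    print(f"Platform resolution: {PLATFORM_RES_DEG:.1f} deg/step, "
          f"{steps_for_angle(15.0)} steps per 15 deg increment")
```

With these assumptions the numbers are self-consistent: 1.8°/18 = 0.1° per full step, so a 15° increment corresponds to 150 motor steps and 24 increments complete one revolution.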
3.2. Data processing

The reconstruction of the 3D models was performed by means of Meshroom (version 2021.1.0). This software includes a feature called Live Reconstruction, which allows images to be imported directly as they are acquired and augments the previous Structure-from-Motion coverage in an iterative process. In this study, the images were added iteratively in groups of four at each step. The first block of images was acquired frontally with respect to the artifact, with an angular step of 15°. Subsequently, two additional sets of images were acquired after flipping the artifact on different sides, in order to improve the reconstruction of all the faces and their details. Hence, a total of 72 reflected VIS images plus 72 UVL images were collected and processed with a standard pipeline in Meshroom. In particular, the following steps were carried out: Camera Initialization, Natural Feature Extraction, Image Matching, Features Matching, Structure from Motion (SfM) Creation, Prepare Dense Scene, Depth Map Estimation, Depth Map Filtering, Meshing, Mesh Filtering, Texturing. The reconstruction parameters were all set to their default values.
The aforementioned procedure was applied to reconstruct models from both VIS and UVL images; therefore, two different models were obtained. Nevertheless, Meshroom also allows a 3D model to be textured with a set of images different from the one used to generate the point cloud and the mesh. Indeed, it is possible to duplicate the computed dense scene node and import a new folder of images, so that the software generates a texture for the model from the new image set. In order to test this feature, the procedure was applied to the VIS model to re-texture it with the UVL images. The three textured 3D models were exported in OBJ format and properly scaled using the open-source software Wings 3D. To scale each model, three different dimensions of the object were measured, and the mean scaling factor was computed. Figure 4 shows an image of the final models.

Figure 4. From left to right: 3D model from reflected VIS images; 3D model obtained after re-texturing of the reflected-VIS model with UVL images; and 3D model obtained using UVL images directly.

4. EXPERIMENTAL VALIDATION

In order to assess the uncertainty of the 3D models reconstructed from reflected VIS images, several distances on the real artifact were measured with a caliper and compared with the same distances on the 3D models. Figure 5 shows the measured distances, which are distributed all around the artifact. The distances were chosen along the edges and between opposite faces of the artifact, since they can be easily measured. The measurements on the artifact were collected with a 1/20 mm caliper, whereas the software Wings 3D was employed to measure the corresponding distances on the virtual replica. The model uncertainty was estimated according to the difference δ reported in (1):

$\delta = |D_r - D_m|$ ,  (1)

where $D_r$ is the distance on the reference object and $D_m$ is the corresponding distance on the 3D model. In addition, the relative uncertainty was calculated as shown in (2):

$\varepsilon = \dfrac{|D_r - D_m|}{D_r} \times 100\,\%$ .  (2)

Finally, the overall standard deviation was evaluated as in (3):

$\sigma = \sqrt{\dfrac{1}{N-1} \sum_{i=1}^{N} \delta_i^2}$ .  (3)

Table 2. Uncertainty estimation of the 3D models reconstructed from VIS and UVL images. The measurements on the reference object, on the VIS model, and on the UVL model are reported. δ indicates the difference between the measurements (Equation (1)), and ε is the relative uncertainty (Equation (2)).
Dimension   Reference object (mm)   VIS 3D model (mm)   δ (mm)   ε (%)   UVL 3D model (mm)   δ (mm)   ε (%)
a           44.2                    44.0                0.23     0.52    44.1                0.13     0.30
b           44.3                    44.5                0.19     0.43    44.8                0.31     0.70
c           44.1                    44.2                0.11     0.25    44.7                0.49     1.11
d           99.0                    98.7                0.31     0.31    100.6               1.91     1.94
e           71.4                    71.2                0.25     0.35    71.6                0.45     0.63
f           44.3                    44.0                0.35     0.79    44.8                0.85     1.93
g           44.3                    44.6                0.33     0.74    44.6                0.03     0.07
h           44.4                    44.5                0.08     0.18    44.8                0.32     0.72
i           121.6                   122.2               0.55     0.45    125.3               3.15     2.58
l           71.4                    71.4                0.05     0.07    72.0                0.65     0.91
m           71.2                    71.3                0.14     0.20    71.4                0.06     0.08
n           98.8                    98.9                0.12     0.12    100.8               1.88     1.90
o           115.0                   114.9               0.14     0.12    116.7               1.84     1.60
p           71.2                    70.5                0.67     0.94    71.2                0.67     0.95
q           114.5                   114.8               0.34     0.30    116.6               1.76     1.53
r           70.8                    70.9                0.19     0.27    71.8                0.86     1.21
s           70.8                    70.5                0.27     0.38    71.9                1.37     1.94
t           70.9                    70.9                0.03     0.04    71.7                0.77     1.09
u           44.2                    44.2                0.06     0.14    44.5                0.29     0.66
v           70.9                    70.8                0.02     0.03    71.9                1.07     1.51

Figure 5. Original VIS images of the reference object with the validation measurement distances.
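As a worked illustration of Equations (1)-(3), the short Python sketch below computes the distance difference δ, the relative uncertainty ε, and the overall standard deviation σ for a set of (reference, model) distance pairs. The numeric pairs used in the example are made-up placeholders, not the values of Table 2.

```python
import math

def uncertainty_metrics(pairs):
    """Compute per-distance |Dr - Dm|, relative uncertainty in %, and overall sigma.

    pairs: iterable of (D_r, D_m) tuples in mm, following Equations (1)-(3).
    """
    deltas = [abs(dr - dm) for dr, dm in pairs]                  # Equation (1)
    epsilons = [abs(dr - dm) / dr * 100.0 for dr, dm in pairs]   # Equation (2)
    n = len(deltas)
    sigma = math.sqrt(sum(d ** 2 for d in deltas) / (n - 1))     # Equation (3)
    return deltas, epsilons, sigma

# Illustrative caliper/model distance pairs in mm (placeholder values).
example_pairs = [(44.20, 44.05), (99.00, 98.70), (121.60, 122.15)]
deltas, epsilons, sigma = uncertainty_metrics(example_pairs)
print("delta (mm):  ", [f"{d:.2f}" for d in deltas])
print("epsilon (%): ", [f"{e:.2f}" for e in epsilons])
print(f"sigma (mm):   {sigma:.3f}")
```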
Regarding the multispectral reconstruction, two different approaches were tested. One UVL model was obtained by re-texturing the mesh already computed for the VIS model. In this case, the coordinates of the measured points do not change and the UVL model is perfectly superimposable on the VIS model; therefore, there is no need to perform a separate metric evaluation of this model. The second UVL model was instead reconstructed directly from the UVL images. Its quality was estimated with the procedure presented above: the differences between the distances measured on the UVL 3D model and the ones measured on the reference object were computed. Table 2 reports the distance difference δ in mm and the corresponding relative uncertainty ε.
On the basis of these results, the reconstructed models are quite reliable, with maximum dimensional uncertainties lower than 1 % for the visible model and lower than 2 % for the UV model. The average uncertainties are lower, about 0.33 % and 1.17 % for the visible and the UV models, respectively, while the standard deviations σ of the differences between the real object and the reconstructed models are about 170 µm and 690 µm, respectively. The higher uncertainty obtained for the UV model is probably due to the higher color uniformity of the acquired images, which affected the reconstruction process. The reconstruction accuracy can therefore probably be improved by tuning the image acquisition procedure and the reconstruction parameters.

5. CONCLUSIONS

This paper presented the design and development of an artifact that can be used as a metric reference object for the assessment of the accuracy and dimensional uncertainty of 3D models obtained through photogrammetry. The object is 3D printed and has twelve insets in which several pictorial preparations were inserted; it is therefore also suitable as a reference for multispectral imaging.
The reference object was used to test the photogrammetric measurement system from a metric point of view. A comparison between several distances acquired both on the mechanical reference object and on the reconstructed VIS 3D model was carried out. The maximum dimensional uncertainty is lower than 1 % and the average uncertainty is about 0.33 %. Moreover, the reconstruction of the model from UVL images was performed using two different approaches, and the obtained results were compared with the real object. The comparison shows that the approach based on the creation of a VIS model and its subsequent re-texturing with UVL images achieves the best results.

ACKNOWLEDGEMENT

The authors would like to acknowledge Dr. Paola Buscaglia from Centro Conservazione e Restauro "La Venaria Reale" for the support related to the realization of the pictorial preparations.

REFERENCES

[1] M. Russo, F. Remondino, G. Guidi, Principali tecniche e strumenti per il rilievo tridimensionale in ambito archeologico, Archeologia e Calcolatori, 22 (2011), pp. 169-198 (in Italian), ISSN 1120-6861.
[2] L. Es Sebar, L. Iannucci, C. Gori, A. Re, M. Parvis, E. Angelini, S. Grassini, In-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors, Acta IMEKO 10(1) (2021) pp. 241-249. DOI: 10.21014/acta_imeko.v10i1.894
[3] I. M. E. Zaragoza, G. Caroti, A. Piemonte, The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis, Acta IMEKO 10(1) (2021) pp. 114-121. DOI: 10.21014/acta_imeko.v10i1.847
[4] J. A. Beraldin, M. Rioux, L. Cournoyer, F. Blais, M. Picard, J. Pekelsky, Traceable 3D imaging metrology, Proc. SPIE Videometrics IX 6491 (2007). DOI: 10.1117/12.698381
[5] T. Schenk, Introduction to photogrammetry, The Ohio State University, Columbus, 2005, 106. Online [Accessed 3 December 2021] https://www.mat.uc.pt/~gil/downloads/IntroPhoto.pdf
[6] J. A. Beraldin, F. Blais, S. El-Hakim, L. Cournoyer, M. Picard, Traceable 3D imaging metrology: Evaluation of 3D digitizing techniques in a dedicated metrology laboratory, Proceedings of the 8th Conference on Optical 3-D Measurement Techniques, Zurich, Switzerland, 9-12 July 2007, pp. 310-318.
[7] I. Toschi, A. Capra, L. De Luca, J. A. Beraldin, On the evaluation of photogrammetric methods for dense 3D surface reconstruction in a metrological context, ISPRS Technical Commission V Symposium, WG1 2(5) (2014) pp. 371-378.
[8] G. J. Higinio, B. Riveiro, J. Armesto, P. Arias, Verification artifact for photogrammetric measurement systems, Optical Engineering 50(7) (2011), art. 073603. DOI: 10.1117/1.3598868
[9] C. Buzi, I. Micarelli, A. Profico, J. Conti, R. Grassetti, W. Cristiano, F. Di Vincenzo, M. A. Tafuri, G. Manzi, Measuring the shape: performance evaluation of a photogrammetry improvement applied to the Neanderthal skull Saccopastore 1, Acta IMEKO 7(3) (2018). DOI: 10.21014/acta_imeko.v7i3.597
[10] A. Koutsoudis, B. Vidmar, G. Ioannakis, F. Arnaoutoglou, G. Pavlidis, C. Chamzas, Multi-image 3D reconstruction data evaluation, Journal of Cultural Heritage 15(1) (2014) pp. 73-79. DOI: 10.1016/j.culher.2012.12.003
[11] A. Calantropio, M. P. Deseilligny, F. Rinaudo, E. Rupnik, Evaluation of photogrammetric block orientation using quality descriptors from statistically filtered tie points, International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42(2) (2018).
[12] J. Dyer, G. Verri, J. Cupitt, Multispectral Imaging in Reflectance and Photo-induced Luminescence Modes: A User Manual, British Museum, 2013.
[13] A. Cosentino, Identification of pigments by multispectral imaging; a flowchart method, Heritage Science 2(8) (2014). DOI: 10.1186/2050-7445-2-8
[14] S. B. Hedeaard, C. Brøns, I. Drug, P. Saulins, C. Bercu, A. Jakovlev, L. Kjær, Multispectral Photogrammetry: 3D models highlighting traces of paint on ancient sculptures, DHN (2019), pp. 181-189.
[15] E. Nocerino, D. H. Rieke-Zapp, E. Trinkl, R. Rosenbauer, E. M. Farella, D. Morabito, F. Remondino, Mapping VIS and UVL imagery on 3D geometry for non-invasive, non-contact analysis of a vase, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 42(2) (2018) pp. 773-780. DOI: 10.5194/isprs-archives-XLII-2-773-2018
[16] M. Parvis, S. Corbellini, L. Lombardo, L. Iannucci, S. Grassini, E. Angelini, Inertial measurement system for swimming rehabilitation, 2017 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rochester, MN, USA, 8-10 May 2017, pp. 361-366. DOI: 10.1109/MeMeA.2017.7985903
[17] A. Gullino, M. Parvis, L. Lombardo, S. Grassini, N. Donato, K. Moulaee, G. Neri, Employment of Nb2O5 thin-films for ethanol sensing, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, May 25-28, 2020, pp. 1-6. DOI: 10.1109/I2MTC43012.2020.9128457
[18] L. Iannucci, L. Lombardo, M. Parvis, P. Cristiani, R. Basseguy, E. Angelini, S. Grassini, An imaging system for microbial corrosion analysis, 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, May 20-23, 2019, pp. 1-6. DOI: 10.1109/I2MTC.2019.8826965
[19] AliceVision, Meshroom - 3D Reconstruction Software. Online [Accessed 3 December 2021] https://alicevision.org/#meshroom
[20] Wings 3D. Online [Accessed 3 December 2021] http://www.wings3d.com
[21] Kremer Pigmente. Online [Accessed 3 December 2021] https://www.kremer-pigmente.com/en
[22] L. Es Sebar, S. Grassini, M. Parvis, L. Lombardo, A low-cost automatic acquisition system for photogrammetry, 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2021, pp. 1-6. DOI: 10.1109/I2MTC50364.2021.9459991