Journal Contacts
Acta IMEKO, ISSN: 2221-870X, March 2021, Volume 10, Number 1

About the journal
Acta IMEKO is an e-journal reporting contributions on the state and progress of the science and technology of measurement. The articles are mainly based on presentations given at IMEKO workshops, symposia and congresses. The journal is published by IMEKO, the International Measurement Confederation. The ISSN, the international identifier for serials, is 2221-870X.

Editor-in-Chief
Francesco Lamonaca, Italy

Founding Editor-in-Chief
Paul P. L. Regtien, Netherlands

Associate Editor
Dirk Röske, Germany

Editorial Board
Leopoldo Angrisani, Italy; Filippo Attivissimo, Italy; Eulalia Balestrieri, Italy; Eric Benoit, France; Paolo Carbone, Italy; Lorenzo Ciani, Italy; Catalin Damian, Romania; Pasquale Daponte, Italy; Luca De Vito, Italy; Luigi Ferrigno, Italy; Edoardo Fiorucci, Italy; Alistair Forbes, United Kingdom; Helena Geirinhas Ramos, Portugal; Sabrina Grassini, Italy; Fernando Janeiro, Portugal; Konrad Jedrzejewski, Poland; Andy Knott, United Kingdom; Francesco Lamonaca, Italy; Massimo Lazzaroni, Italy; Fabio Leccese, Italy; Rosario Morello, Italy; Michele Norgia, Italy; Pedro Miguel Pinto Ramos, Portugal; Nicola Pompeo, Italy; Sergio Rapuano, Italy; Dirk Röske, Germany; Alexandru Salceanu, Romania; Constantin Sarmasanu, Romania; Lorenzo Scalise, Italy; Emiliano Schena, Italy; Enrico Silva, Italy; Krzysztof Stepien, Poland; Marco Tarabini, Italy

Section Editors (Vols. 7-10)
Yvan Baudoin, Belgium; Francesco Bonavolonta, Italy; Marcantonio Catelani, Italy; Carlo Carobbi, Italy; Mauro D'Arco, Italy; Egidio De Benedetto, Italy; Alessandro Depari, Italy; Alessandro Germak, Italy; Min-Seok Kim, Korea; Momoko Kojima, Japan; Koji Ogushi, Japan; Vilmos Palfi, Hungary; Franco Pavese, Italy; Jeerasak Pitakarnnop, Thailand; Jan Saliga, Slovakia; Emiliano Sisinni, Italy; Oscar Tamburis, Italy; Jorge C. Torres-Guzman, Mexico; Ioan Tudosa, Italy; Ian Veldman, South Africa; Rugkanawan Wongpithayadisai, Thailand; Claudia Zoani, Italy

About IMEKO
The International Measurement Confederation, IMEKO, is an international federation of currently 42 national member organisations individually concerned with the advancement of measurement technology. Its fundamental objectives are the promotion of the international exchange of scientific and technical information in the field of measurement and the enhancement of international co-operation among scientists and engineers from research and industry.

Addresses
Principal contact: Prof. Francesco Lamonaca, University of Calabria, Department of Computer Science, Modelling, Electronics and Systems Science, Via P. Bucci, 41C, VI floor, Arcavacata di Rende, 87036 (CS), Italy. E-mail: f.lamonaca@dimes.unical.it

Acta IMEKO: Eulalia Balestrieri, e-mail: balestrieri@unisannio.it; Carlo Carobbi, e-mail: carlo.carobbi@unifi.it; Ioan Tudosa, e-mail: itudosa@unisannio.it; Koji Ogushi, e-mail: kji.ogushi@aist.go.jp; Momoko Kojima, e-mail: m.kojima@aist.go.jp

Support contact: Dr. Dirk Röske, Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany. E-mail: dirk.roeske@ptb.de

Journal Contacts
Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2
Addresses
Principal contact: Prof. Francesco Lamonaca, University of Calabria, Department of Computer Science, Modelling, Electronics and Systems Science, Via P. Bucci, 41C, VI floor, Arcavacata di Rende, 87036 (CS), Italy. E-mail: editorinchief.actaimeko@hunmeko.org

Support contact: Dr. Dirk Röske, Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany. E-mail: dirk.roeske@ptb.de

Editor-in-Chief
Francesco Lamonaca, Italy

Founding Editor-in-Chief
Paul P. L. Regtien, Netherlands

Associate Editor
Dirk Röske, Germany

Copy Editors
Egidio De Benedetto, Italy; Silvia Sangiovanni, Italy

Layout Editors
Dirk Röske, Germany; Leonardo Iannucci, Italy; Domenico Luca Carnì, Italy

Editorial Board
Leopoldo Angrisani, Italy; Filippo Attivissimo, Italy; Eulalia Balestrieri, Italy; Eric Benoit, France; Paolo Carbone, Italy; Lorenzo Ciani, Italy; Catalin Damian, Romania; Pasquale Daponte, Italy; Luca De Vito, Italy; Sascha Eichstaedt, Germany; Ravi Fernandez, Germany; Luigi Ferrigno, Italy; Edoardo Fiorucci, Italy; Alistair Forbes, United Kingdom; Helena Geirinhas Ramos, Portugal; Sabrina Grassini, Italy; Leonardo Iannucci, Italy; Fernando Janeiro, Portugal; Konrad Jedrzejewski, Poland; Andy Knott, United Kingdom; Yasuharu Koike, Japan; Dan Kytyr, Czechia; Francesco Lamonaca, Italy; Aime Lay Ekuakille, Italy; Massimo Lazzaroni, Italy; Fabio Leccese, Italy; Rosario Morello, Italy; Michele Norgia, Italy; Franco Pavese, Italy; Pedro Miguel Pinto Ramos, Portugal; Nicola Pompeo, Italy; Sergio Rapuano, Italy; Renato Reis Machado, Brazil; Álvaro
Ribeiro, Portugal; Gustavo Ripper, Brazil; Dirk Röske, Germany; Maik Rosenberger, Germany; Alexandru Salceanu, Romania; Constantin Sarmasanu, Romania; Lorenzo Scalise, Italy; Emiliano Schena, Italy; Michela Sega, Italy; Enrico Silva, Italy; Pier Giorgio Spazzini, Italy; Krzysztof Stepien, Poland; Ronald Summers, UK; Marco Tarabini, Italy; Tatjana Tomić, Croatia; Joris Van Loco, Belgium; Zsolt Viharos, Hungary; Bernhard Zagar, Austria; Davor Zvizdic, Croatia

Section Editors (Vols. 7-11)
Yvan Baudoin, Belgium; Piotr Bilski, Poland; Francesco Bonavolonta, Italy; Giuseppe Caravello, Italy; Carlo Carobbi, Italy; Marcantonio Catelani, Italy; Mauro D'Arco, Italy; Egidio De Benedetto, Italy; Alessandro Depari, Italy; Alessandro Germak, Italy; István Harmati, Hungary; Min-Seok Kim, Korea; Bálint Kiss, Hungary; Momoko Kojima, Japan; Koji Ogushi, Japan; Vilmos Palfi, Hungary; Jeerasak Pitakarnnop, Thailand; Md Zia Ur Rahman, India; Fabio Santaniello, Italy; Jan Saliga, Slovakia; Emiliano Sisinni, Italy; Ciro Spataro, Italy; Oscar Tamburis, Italy; Jorge C. Torres-Guzman, Mexico; Ioan Tudosa, Italy; Ian Veldman, South Africa; Rugkanawan Wongpithayadisai, Thailand; Claudia Zoani, Italy

Introductory Notes for the Acta IMEKO First Issue 2022
Acta IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1

Francesco Lamonaca1
1 Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Ponte P. Bucci, 87036, Arcavacata di Rende, Italy

Section: Editorial
Citation: Francesco Lamonaca, Introductory Notes for the Acta IMEKO First Issue 2022, Acta IMEKO, vol. 11, no.
1, article 1, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-01
Received March 30, 2022; in final form March 30, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Francesco Lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

Dear Readers,
The first issue of a new year is the time for taking stock. Over the last year, Acta IMEKO has made great strides towards speeding up publication and attracting high-value papers. These improvements were also confirmed by the fact that Acta IMEKO is now indexed by the Directory of Open Access Journals (DOAJ). DOAJ is one of the most important community-driven open-access services in the world and has a reputation for advocating best practices and standards in open-access publishing. It indexes and provides access only to high-quality, open-access, peer-reviewed journals. DOAJ's basic criteria for inclusion have become the accepted way of measuring an open-access journal's adherence to standards in scholarly publishing, especially as concerns ethical and quality standards. This was achieved thanks to the invaluable work of the Editorial Board, the Section Editors, the reviewers and, last but not least, all the authors who have chosen Acta IMEKO for sharing their research with the scientific community.

This issue includes the special issue on the 'IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage', guest edited by Fabio Santaniello, Michele Fedel and Annaluisa Pedrotti, and the special issue 'Innovative Signal Processing and Communication Techniques for Measurement and Sensing Systems', guest edited by Zia Ur Rahman. The first special issue collects papers on a hot topic: archaeology and cultural heritage studied from a metrological point of view.
That the topic is a hot one is evident: even in the middle of the pandemic, the conference was a remarkable success, with 158 initial submissions, 126 accepted papers, 431 authors from 19 countries, 4 invited plenary speakers, 13 special sessions, 3 tutorial sessions and 11 patronages. The presented papers highlighted the natural need for an exchange of knowledge and expertise between the 'human sciences' and the 'hard sciences'. This cross-fertilisation is also evident in the extended papers published in this issue.

The special issue on 'Innovative Signal Processing and Communication Techniques for Measurement and Sensing Systems' consists of fifteen papers identifying new perspectives and highlighting potential research issues and challenges in the context of measurement and sensing. Specifically, this special issue demonstrates how emerging technologies could be used in future smart sensing systems. The topics are heterogeneous and include measurements for and by antennas, artificial intelligence, beam-forming techniques, body area networks, embedded processors, image sensors and processing, the Internet of Things, knowledge-based systems, machine learning algorithms, medical signal analysis, sensor data processing, VLSI architectures and many more.

Many novelties are foreseen in this new year. For example, the articles of the next issue will go online from the end of the month, and the issue will be closed at the end of June as planned. This new publication policy will strongly reduce the publication time of submitted papers: they will be available online and indexed by Scopus as soon as they are ready. In order to keep the articles of different special issues close together in the table of contents, starting with Volume 11 we will no longer use consecutive page numbering throughout an issue. The page numbers of every single article start with one and end with the number of the article's pages.
We hope that you will enjoy your reading and that you will confirm Acta IMEKO as your main source of new solutions and ideas and a valuable resource for spreading your results.

Francesco Lamonaca, Editor-in-Chief

Machine Integrated Telecentric Surface Metrology in Laser Structuring Systems
Acta IMEKO, December 2013, Volume 2, Number 2, 73-77, www.imeko.org

Robert Schmitt1,2, Tilo Pfeifer1,2, Guilherme Mallmann2
1 Laboratory for Machine Tools and Production Engineering WZL, RWTH Aachen University, Aachen, Germany
2 Fraunhofer Institute for Production Technology (IPT), Department of Production Metrology, Aachen, Germany

Section: Research paper
Keywords: inline metrology; frequency-domain optical coherence tomography; surface inspection; laser structuring
Citation: Robert Schmitt, Tilo Pfeifer, Guilherme Mallmann, Machine integrated telecentric surface metrology in laser structuring systems, Acta IMEKO, vol. 2, no. 2, article 13, December 2013, identifier: IMEKO-ACTA-02 (2013)-02-13
Editor: Paolo Carbone, University of Perugia
Received April 15, 2013; in final form October 8, 2013; published December 2013
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the German Federal Ministry of Education and Research (BMBF), Germany
Corresponding author: Guilherme Mallmann, e-mail: guilherme.mallmann@ipt.fraunhofer.de

1. Introduction
The functionalisation of surfaces by laser micro-machining is an innovative technology used in a broad spectrum of industrial branches. Its main advantage over other processes is its high machining flexibility.
This technique permits the structuring of different workpieces (of different form complexity and materials) with the same machine tool. Examples can be found in the minimisation of air resistance, operating noise and friction losses [1] as well as in the surface structuring of tools and moulds [2].

There is, however, a trend towards improved overall process quality and automation as well as smaller microstructures in laser micro-machining. The fulfilment of these demands is at present limited by the lack of a sufficiently descriptive process model and by the absence of a robust and accurate inline process monitoring technique. On the one hand, the missing process model leads to a time-consuming effort to initialise the laser structuring of new products. This procedure depends, e.g., on the applied material composition, product form and surface roughness. If the process behaviour is unknown for a given workpiece, laser parameters and suitable machining strategies have to be identified in trial-and-error tests before the real machining process can start. In this context, reference geometries need to be structured and analysed outside the machine tool until a suitable parameter set is found. On the other hand, the absence of a process control based on the actually machined surface causes a high degree of inefficiency. This is explained by the inability of the machine to identify process defects during the machining procedure, leading to an increased likelihood of rejected parts with a high degree of added value.

To solve this task with a high level of compatibility and integration, an optical distance measurement system based on frequency-domain optical coherence tomography (FD-OCT) was developed. The described telecentric measurement through the laser machining optics enables fast and highly accurate surface inspection in machine coordinates before, during and after the structuring process.
Based on this process monitoring, a machining control can be set up, leading to a fully automated process adjustment and manufacturing procedure.

Abstract
Laser structuring is an innovative technology used in a broad spectrum of industrial branches. There is, however, a market trend towards smaller and more accurate microstructures, which demands a higher level of precision and efficiency in this process. Inline inspection is therefore necessary in order to improve the process through closed-loop control and early defect detection. Within this paper an optical measurement system for the inline inspection of micro and macro surface structures is described. Measurements on standards and on laser-structured surfaces are presented, which underline the potential of this technique for the inline surface inspection of laser-structured surfaces.

2. State of the Art
2.1. Laser structuring process
The laser structuring technology utilises thermal mechanisms to machine a workpiece, induced through the absorption of high amounts of energy. The physical interaction of laser radiation and matter is therefore a crucial point in this process. Its efficiency is associated with laser and material properties such as the applied wavelength, focal radius, angle of incidence, material light absorption, surface roughness, metal temperature, laser pulse repetition frequency and pulse energy [1]. Typical working wavelengths for the machining of metals and their alloys are 1064 nm (infrared light) or 532 nm (green light). In most systems the laser beam is guided to the part's surface using a rastering system, or laser scanner [3]. This configuration uses computer numerically controlled galvanometer mirrors to deflect the laser beam and an f-theta lens to focus it over a working area.
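The angle-to-position mapping provided by an f-theta lens can be sketched numerically. The following comparison is illustrative only (it is not taken from the paper's software): it contrasts an ideal f-theta mapping with an ordinary focusing lens, using the 80 mm focal length quoted later for the prototype.

```python
import math

def spot_position_ftheta(f_mm: float, theta_rad: float) -> float:
    """Ideal f-theta lens: the spot displacement on the focal plane
    is directly proportional to the scan angle, x = f * theta."""
    return f_mm * theta_rad

def spot_position_standard(f_mm: float, theta_rad: float) -> float:
    """Ordinary focusing lens: x = f * tan(theta), i.e. the mapping
    is non-linear in the scan angle."""
    return f_mm * math.tan(theta_rad)

f = 80.0  # focal length in mm (the prototype in section 3.5 uses f = 80 mm)
for theta_deg in (2.0, 5.0, 10.0):
    th = math.radians(theta_deg)
    x_ft = spot_position_ftheta(f, th)
    x_std = spot_position_standard(f, th)
    # the gap between the two is the distortion the f-theta design removes
    print(f"{theta_deg:4.1f} deg: f-theta {x_ft:6.3f} mm, tan-lens {x_std:6.3f} mm")
```

The growing gap at larger angles is exactly the non-linearity the targeted distortion of the f-theta lens compensates, so the scanner controller can treat angle and position as linearly related.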
This lens is wavelength-optimised to focus the chief ray of the laser beam normal to the scanning field regardless of the scan angle, and to make the travelled distance of the laser spot on the focal plane directly proportional to the scan angle [4].

2.2. Laser structuring process monitoring
Inline process monitoring solutions for laser-based structuring systems currently being developed in academia and industry show technical limitations. The available technologies in part offer low accuracy, provide no depth information or cannot measure directly in machine coordinates. One approach, for example, uses conoscopic holography. This technique is not able to measure directly in machine coordinates, as it uses a different optical path from the process laser, leading to complex calibration steps and introducing transformation errors as well as a measurement displacement. In [5] a technology based on the acquisition of process-generated electromagnetic emissions is presented for the monitoring of the selective laser melting process. A similar approach can also be applied to laser structuring systems. This technique is, however, not able to deliver any direct depth information; it can only monitor the amount of energy absorbed in the machining procedure and estimate the removed depth from it.

3. Solution Concept
The solution concept for the machine-integrated process monitoring system was designed on the basis of the rastering-system machine layout (Figure 1). As measurement system, an optical distance measurement technique based on frequency-domain optical coherence tomography (FD-OCT) was used. The system integration is accomplished through an optical element acting as beam splitter.

3.1. Frequency-domain optical coherence tomography (FD-OCT)
FD-OCT is a technique based on low-coherence interferometry.
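Before the formal treatment, the core idea can be sketched numerically: a reflector at optical path difference z modulates the detected spectrum with a frequency proportional to z, so a Fourier transform of the spectrum over the wavenumber recovers the depth. The following minimal simulation is illustrative only; the reflector depth and detector pixel count are assumed, while the source parameters are those quoted for the prototype in section 3.2.

```python
import numpy as np

z_true = 0.25e-3                 # reflector optical path difference: 0.25 mm (assumed)
lam0, dlam = 1017e-9, 101e-9     # SLD centre wavelength and bandwidth (section 3.2)
N = 1024                         # detector pixels (assumed)

# uniform wavenumber grid covering the source bandwidth
k = np.linspace(2 * np.pi / (lam0 + dlam / 2), 2 * np.pi / (lam0 - dlam / 2), N)
g = np.exp(-(((k - k.mean()) / (0.5 * (k.max() - k.min()))) ** 2))  # source G(k)
spectrum = g * (1.0 + np.cos(2 * k * z_true))                       # interferogram I(k)

# FFT of the spectrum: the modulation cos(2*k*z) peaks at depth z
mag = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
depth_axis = np.pi * np.fft.rfftfreq(N, d=k[1] - k[0])  # depth mapped to each FFT bin
skip = 5                                  # ignore residual DC / envelope bins
z_est = depth_axis[skip + np.argmax(mag[skip:])]
print(f"true depth {z_true * 1e3:.3f} mm, estimated {z_est * 1e3:.3f} mm")
```

The bin spacing of `depth_axis` (a few micrometres here) is the coarse axial resolution; section 3.2 describes how a Gauss fit around the peak pushes this to the sub-micrometre level.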
Unlike conventional low-coherence interferometers, which use a piezo element to find the point of maximum interference, in FD-OCT the depth information is gained by analysing the spectrum of the acquired interferogram. The Fourier transform of the acquired spectrum provides a back-reflection profile as a function of depth. To generate the interference pattern, a measurement and a reference path are used, and the optical path difference between these arms is detected. The higher the optical path difference between the reference and the measuring arm, the higher the frequency of the resulting interference modulation (Figure 2). The total interference signal $I(k)$ is given by the spectral intensity distribution of the light source $G(k)$ times the squared magnitude of the sum of the two back-reflected signals ($a_r$ being the reflection amplitude of the reference arm and $a(z)$ the backscattering coefficient of the object with regard to the offset $z_0$), where $k$ is the optical wavenumber [6]:

$I(k) = G(k)\left| a_r \exp(i 2 k r) + \int_{z_0}^{\infty} a(z) \exp\bigl(i 2 k n(z)(r+z)\bigr)\,\mathrm{d}z \right|^2$  (1)

where $n$ is the refractive index, $2r$ is the path length in the reference arm, $2(r+z)$ is the path length in the object arm and $2z$ is the difference in path length between the two arms. The absolute optical path difference is detected by finding the maximum amplitude in the Fourier transform of the spectrum. The maximum measuring depth $z_{max}$ is described by [7]

$z_{max} = \dfrac{N \lambda_0^2}{4 n \Delta\lambda}$  (2)

where $\lambda_0$ is the central wavelength, $\Delta\lambda$ is the bandwidth, $n$ is the sample's refractive index and $N$ is the number of detector units covered by the light source's spectrum. The axial resolution of an FD-OCT is described by [7]

$l_{c,\mathrm{FD\text{-}OCT}} = \dfrac{2 \ln 2}{\pi} \cdot \dfrac{\lambda_0^2}{\Delta\lambda} \approx 0.44\, \dfrac{\lambda_0^2}{\Delta\lambda}$.  (3)

For the measurement of a single distance (a single back reflection), the axial resolution can be increased to the sub-micrometre level by the use of signal processing techniques such as a Gauss fit. Figure 1.
Concept of the inline process monitoring system: (a) dispersion compensator / glass rod.
Figure 2. Frequency-domain optical coherence tomography (FD-OCT) set-up.

3.2. Measurement system prototype
The spectrometer for the acquisition of the interference signal was developed with a wavelength measuring range of 107 nm, which can be adjusted within the absolute range from 900 nm to 1100 nm depending on the light source to be used (Figure 3). As detector, an indium gallium arsenide (InGaAs) line camera was used; standard silicon-based detectors offer quantum efficiencies of less than 20 % in the applied wavelength range, against values between 60 % and 80 % for InGaAs (1.7 µm) detectors [8]. The light source used in the system is a superluminescent diode with a central wavelength of 1017 nm and a wavelength range of 101 nm. The theoretical measuring range (maximum depth scan) of the developed FD-OCT with this SLD light source was calculated from equation (2) to be 1.31 mm. The available measurement range was evaluated using a precision linear translation table. The results showed a maximum distance measurement of 1.25 mm on an aluminium specimen with a technical surface, which simulates the workpieces used for laser structuring. The theoretical axial resolution of the system, calculated from equation (3), is 4.51 µm. The use of a Gauss fitting algorithm to find the modulation frequency after the Fourier transform of the acquired light spectrum increases the axial resolution by calculating a sub-pixel accurate curve maximum. Based on this technique, an increased axial resolution could be achieved: the standard deviation of the distance measurement values acquired in the centre of the scanner field was 218 nm. A detailed analysis of the measurement system is presented in [9].

3.3.
Beam coupling
The concept chosen for the integration of the developed FD-OCT into a laser structuring machine is based on an optical filter or a dichroic beam splitter used as optical coupler. The coupling is accomplished by operating this optical element in its reflective region for the measurement wavelengths and in its transmissive region for the structuring laser wavelength. A very important requirement on this system is the transmission efficiency for the laser beam: the coupling performance is directly connected to the energy efficiency of the machining process as well as to the overall heat development in the coupling system. In order to guarantee a robust system with small long-term deviations (e.g. due to component wear) and small energy losses, an optical element with a transmission close to 100 % needs to be chosen. Another important system requirement is a highly accurate beam alignment: a misalignment between laser and measurement beam leads to a displacement between the laser spot and the measurement spot, causing a mismatch and thus an uncertainty in the measurement results. The developed coupling system meets the described demands and enables system integration with a single machine hardware change: the insertion of a coupling optical element into the laser beam path at a defined angle (45° for a dichroic beam splitter or a wavelength-dependent angle for an optical filter) completes the integration. The component arrangement for the beam coupling can be seen in the prototype set-up presented in Figure 4. The coupling efficiency of the concept was evaluated using an optical edge filter for the wavelength of 1064 nm. By changing the angle of the edge filter, the edge frequency between reflection and transmission is shifted; at an angle of 23° the edge frequency is adjusted in such a way that the filter reflects the wavelength band of the measuring system and transmits the wavelength of the laser beam (Figure 4).
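Returning briefly to the signal processing of section 3.2: the Gauss fit used there for sub-pixel peak localisation can be sketched as a three-point interpolation on the log-magnitude around the FFT peak. This is an illustrative variant on synthetic data; the authors' exact fitting procedure is not specified in the paper.

```python
import numpy as np

def gaussian_peak_interp(mag: np.ndarray) -> float:
    """Three-point Gaussian interpolation of a sampled peak.
    For a Gaussian-shaped peak, the log-magnitude is a parabola, so the
    vertex of the parabola through the three samples around the maximum
    gives the peak position in fractional bin units."""
    p = int(np.argmax(mag))
    if p == 0 or p == len(mag) - 1:
        return float(p)  # peak on the boundary: no interpolation possible
    lm, l0, lp = np.log(mag[p - 1]), np.log(mag[p]), np.log(mag[p + 1])
    delta = 0.5 * (lm - lp) / (lm - 2 * l0 + lp)  # sub-bin offset in (-0.5, 0.5)
    return p + delta

# synthetic FFT-magnitude peak whose true maximum lies between bins
bins = np.arange(64)
true_pos = 20.37  # assumed sub-bin peak position
mag = np.exp(-(((bins - true_pos) / 2.5) ** 2))
print(f"{gaussian_peak_interp(mag):.2f}")  # recovers 20.37
```

For an exactly Gaussian peak this interpolation is exact; on real, noisy interferogram spectra it still refines the integer-bin estimate well below one depth bin, which is how the quoted 218 nm repeatability can be reached with a ~4.5 µm bin spacing.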
An overall coupling efficiency of over 95 % for the laser beam and over 93 % for the measurement beam was determined in laboratory tests. These results validate the concept for the machine integration.

3.4. Telecentric f-theta scanning lens
As presented in Figure 1, a typical scanning system used in laser structuring machines consists mostly of a scanning unit based on galvanometer-driven mirrors and an f-theta scanning lens. The scanning objective used in the presented system prototype is a telecentric f-theta scanning lens. This lens type is wavelength-optimised through the addition of a targeted optical distortion in the lens system. The aim of this optimisation is to create a flat focal plane for the laser beam and a directly proportional relation between scanning angle and laser spot position [4]. The designed optical system of a telecentric f-theta lens causes optical aberrations at wavelengths other than the machining laser wavelength. These aberrations introduce systematic errors into the measurement beam, such as a distortion of the focal plane, changes in the optical path and a form deformation of the measurement spot.

Figure 4. System prototype with detailed view of the beam coupling unit (laser and measurement beam coupling).
Figure 3. Measurement system based on Fourier-domain low-coherence tomography.

The evaluated dispersion could be compensated in the measurement system by a specially designed glass rod inserted into the system's reference path (Figure 1). This glass rod reproduces the optical dispersion generated by the f-theta lens in the main optical path. In this way the interference conditions are fulfilled for all field angles of the scanner.
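The purpose of the glass rod can be illustrated with a small simulation (all parameters assumed): an uncompensated quadratic spectral phase, like the dispersion the f-theta lens adds to the measurement arm, smears the depth peak in the Fourier domain, while matched dispersion in the reference arm restores the balanced case.

```python
import numpy as np

N = 2048
k = np.linspace(-1.0, 1.0, N)          # normalised spectral detuning (assumed units)
envelope = np.exp(-((k / 0.5) ** 2))   # source spectral envelope
beta = 40.0                            # dispersion mismatch coefficient (assumed)

# balanced arms: clean spectral fringe -> sharp depth peak after the FFT
fringe_balanced = envelope * np.cos(2 * np.pi * 30 * k)
# uncompensated quadratic spectral phase: the fringe is chirped and the
# depth peak spreads over many bins, losing amplitude
fringe_dispersed = envelope * np.cos(2 * np.pi * 30 * k + beta * k ** 2)

peak_balanced = np.abs(np.fft.rfft(fringe_balanced)).max()
peak_dispersed = np.abs(np.fft.rfft(fringe_dispersed)).max()
print(f"depth-peak amplitude ratio (balanced / dispersed): "
      f"{peak_balanced / peak_dispersed:.2f}")
```

Inserting the glass rod makes the reference arm carry the same quadratic phase as the measurement arm, which cancels the chirp and corresponds to the `fringe_balanced` case for every field angle.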
The remaining effects of the f-theta lens in the optical path at other beam entrance angles appear in the measurement results as slight deviations from the real object form or as a small decrease of the lateral accuracy at the border of the scanning field. The specific optical path deformation over the scanning field was evaluated through a system simulation. The results show a form error in the shape of a saddle (Figure 5).

3.5. Laser structuring system prototype
In order to evaluate the proposed inline measurement technique prior to a machine integration, a laser structuring system prototype was constructed. For the deflection of the laser and measurement beams a galvanometer-based scanning unit was used. A telecentric f-theta lens with a focal length of 80 mm was applied for focusing both beams on the structuring plane (Figure 6). The combination of both optical elements yields a working field of 30 mm × 30 mm. As machining laser, a nanosecond pulsed fibre laser with a central wavelength of 1064 nm was used. The scanner and laser units are controlled by specially developed software. A laser-safety-compliant housing completes the prototype.

4. Results
The evaluation of the system prototype was carried out through a series of test measurements on flatness and step standards as well as on laser-structured workpieces. For these tests the processing laser was turned off. By measuring a flatness and a step standard, the remaining amount of optical aberration introduced by the telecentric f-theta lens could be investigated: alterations in the optical path length of the measuring beam affect the surface form inspection and need to be characterised. The measurement of a flatness standard shows a slightly distorted plane (Figure 7), as expected from the system simulation results (Figure 5). For a measurement area of 20 mm × 20 mm, a parabolic distortion could be detected in the x and y directions.
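A distortion of this kind can be removed by calibration: fit a low-order polynomial surface to the scan of a flatness standard and subtract it from subsequent measurements. The following is a minimal sketch with synthetic data; all magnitudes and the polynomial order are assumed for illustration, not taken from the paper.

```python
import numpy as np

nx = ny = 41
x, y = np.meshgrid(np.linspace(-10, 10, nx), np.linspace(-10, 10, ny))  # field in mm

# synthetic flatness-standard scan: saddle-like parabolic form error
# (cf. the simulated form error in figure 5) plus measurement noise, in mm
rng = np.random.default_rng(0)
distortion = 0.48e-3 * x**2 - 0.65e-3 * y**2 + 0.5e-3 * x * y
scan = distortion + rng.normal(0.0, 0.2e-3, x.shape)

# least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y
A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                     (x**2).ravel(), (y**2).ravel(), (x * y).ravel()])
coef, *_ = np.linalg.lstsq(A, scan.ravel(), rcond=None)
model = (A @ coef).reshape(scan.shape)   # stored as the calibration map

residual = scan - model                  # corrected measurement of the flat
print(f"peak-to-valley before: {np.ptp(scan) * 1e3:.1f} um, "
      f"after: {np.ptp(residual) * 1e3:.1f} um")
```

The fitted `model` plays the role of the calibration map: stored once, it is subtracted from every later scan, leaving only noise and genuine surface features.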
After subtraction of the plane inclination, a maximum deformation of 48 µm in the x-axis and of 65 µm in the y-axis was measured. As already shown in the system simulation, this measurement distortion is caused by an optical dispersion that depends on the beam entrance angle. A correction of this effect can be achieved by a system calibration based on the measurement of a reference surface (e.g. a flatness standard). The evaluated form error is modelled by a 3D polynomial and used in the system software to compensate the measurement results. To evaluate the measurement range as well as a possible non-linearity after the calibration process, a step standard with steps of about 100 µm was measured (Figure 8). The measured area of the workpiece was 6.5 mm × 2 mm. The overall height variation measured by the prototype was 925 µm over the 10 steps, while a reference measurement with a chromatic sensor gave a height variation of 917 µm over the same 10 steps. An overall non-linearity of about 8 µm over a measurement range of 1 mm could thus be evaluated. This represents a non-linearity of less than 1 % for the presented prototype in the used scanning area.

Figure 5. Simulation of the optical path length of the measurement beam through the machine optical system over the complete working field.
Figure 6. Prototype of a laser structuring system with the developed inline measurement system.
Figure 7. 3D measurement of a reference surface (unit in mm).

Regarding the application of the presented measurement system to laser structuring processes, e.g. in the manufacturing of tools and moulds [1], a laser-machined structure (20 mm × 20 mm) was measured and analysed (Figure 9). The resulting measured surface demonstrates the robustness of the inline system for the inspection of complex surfaces.

5.
Conclusions
Within this paper an optical measurement system for the surface inspection of micro and macro structures with sub-micron accuracy within a laser structuring machine was described and evaluated. A standard deviation of the distance measurements in the z-axis of less than 218 nm and a non-linearity of less than 1 % over a measurement range of 1 mm were determined. Measurements on standards and on laser-structured surfaces were presented, which validate the potential of this technique for telecentric surface inspection within laser structuring machines. Future investigations, especially on the use of this technique for the feedback control of adaptive laser micro-processing machines, will be carried out. The effect of process emissions on the measurement results is also an important research task.

Acknowledgement
We gratefully acknowledge the financial support of the German Ministry of Education and Research for the project 'Scan4Surf' (02PO2861), which is the basis for the proposed achievements.

References
[1] S. Schreck et al., "Laser-assisted structuring of ceramic and steel surfaces for improving tribological properties", Proc. of the European Materials Research Society, Applied Surface Science, 2005, vol. 247, pp. 616-622.
[2] F. Klocke et al., "Reproduzierbare Designoberflächen im Werkzeugbau: Laserstrahlstrukturieren als alternatives Fertigungsverfahren zur Oberflächenstrukturierung" [Reproducible design surfaces in tool making: laser beam structuring as an alternative manufacturing process for surface structuring], Werkstatttechnik, 2009, no. 11/12, pp. 844-850.
[3] J. C. Ion, Laser Processing of Engineering Materials: Principles, Procedure and Industrial Application, Elsevier, 2005, ISBN 978-0-7506-6079-2, p. 389.
[4] B. Furlong, S. Motakef, "Scanning lenses and systems", Photonik International, no. 2, 2008, pp. 20-23.
[5] T. Craeghs et al., "Online quality control of selective laser melting", Proceedings of the 20th Solid Freeform Fabrication (SFF) Symposium, 2011.
[6] M. Brezinski, Optical Coherence Tomography: Principles and Applications, Elsevier, 2006, ISBN 978-0121335700, pp.
130-134.
[7] P. Tomlins and R. Wang, "Theory, developments and applications of optical coherence tomography", Journal of Physics D: Applied Physics, vol. 38, 2005, pp. 2519-2535.
[8] A. Rogalski, Infrared Detectors, CRC Press, 2010, ISBN 978-1420076714, pp. 315-317.
[9] R. Schmitt, G. Mallmann and P. Peterka, "Development of a FD-OCT for the inline process metrology in laser structuring systems", Proc. SPIE 8082, 2011, 808228.

Figure 8. 3D measurement of a step standard (unit in mm).
Figure 9. 3D measurement of a laser structured surface (unit in mm).

Introductory Notes for the Acta IMEKO Second Issue 2022

Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-3

Francesco Lamonaca1
1 Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Ponte P. Bucci, 87036, Arcavacata di Rende, Italy

Section: Editorial
Citation: Francesco Lamonaca, Introductory notes for the Acta IMEKO second issue 2022, Acta IMEKO, vol. 11, no. 2, article 2, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-02
Received June 30, 2022; in final form June 30, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Francesco Lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

Dear Readers,

The second issue 2022 of Acta IMEKO collects contributions that do not relate to a specific event. As Editor-in-Chief, it is my pleasure to give readers an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through Acta IMEKO. Modern applications in virtual reality require the environment to be experienced as if it were real.
In applications that have to deal with real scenarios, it is important to acquire both the three-dimensional (3D) structure and the details of the environment to enable users to achieve good immersive experiences. In the paper entitled "Omnidirectional camera pose estimation and projective texture mapping for photorealistic 3D virtual reality experiences", by A. Luchetti et al., the authors propose a method to obtain a mesh with high-quality texture by combining a raw 3D mesh model of the environment and 360° images. The main outcome is a mesh with a high level of photorealistic detail. The paper entitled "The importance of physiological data variability in wearable devices for digital health applications", by G. Cosoli et al., aims at characterizing the variability of physiological data collected through a wearable device (Empatica E4), given that both intra- and inter-subject variability play a pivotal role in digital health applications, where artificial intelligence (AI) techniques have become popular. Inter-beat intervals (IBIs), electrodermal activity (EDA), and skin temperature (SKT) signals have been considered, and variability has been evaluated in terms of general statistics (mean and standard deviation) and the coefficient of variation. Results show that both intra- and inter-subject variability values are significant, especially when considering those parameters describing how the signals vary over time. Moreover, EDA seems to be the signal characterized by the highest variability, followed by IBIs, contrary to SKT, which is more stable. S. Avdiaj et al., in the paper "Measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method", study the influence of virial coefficients on the realization of primary standards in vacuum metrology, especially in the realization of the static expansion method.
In the paper they present the measured data for the virial coefficients of three gases, namely helium, argon, and nitrogen, measured at room temperature and in a pressure range from 3 kPa to 130 kPa. In the new optical pressure standard, ultra-low expansion (ULE) glass cavities were proposed to measure helium refractivity for a new realisation of the unit of pressure, the pascal. However, it was noticed that the use of this type of material causes some difficulties. One of the main problems of ULE glass is the pumping effect for helium. Therefore, instead of ULE, Zerodur glass was proposed as a material for the cavity. This proposal was made by the vacuum metrology team of the Physikalisch-Technische Bundesanstalt (PTB) in the QuantumPascal project. In order to calculate the flow of helium gas through Zerodur glass, one has to know the permeation constant K. In the paper "Measurements of helium permeation in Zerodur glass used for the realisation of quantum pascal", A. Kurtishaj et al. measured the permeation of helium gas in Zerodur in the temperature range from 80 °C to 120 °C. Experimental results show that Zerodur has the potential to be used as a cavity material for the new quantum standard of pressure. S. Ondera et al., in the paper entitled "Dose reduction potential in dual-energy subtraction chest radiography based on the relationship between spatial-resolution property and segmentation accuracy of the tumor area", investigated the relationship between the spatial-resolution property of soft tissue images and the lesion detection ability using U-Net. The aim of the paper is to explore the possibility of dose reduction during energy subtraction chest radiography. An informed Type A evaluation of standard uncertainty is derived in the paper entitled "An informed Type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1", authored by C. Carobbi, based on Bayesian analysis.
The result is mathematically simple, easily interpretable, and applicable both in the theoretical framework of the Guide to the Expression of Uncertainty in Measurement (propagation of standard uncertainties) and in that of Supplement 1 of the Guide (propagation of distributions), valid for any size greater than or equal to 1 of the sample of present observations. G. Campobello et al., in the paper "On the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel EEG signals based on singular value decomposition", investigate the trade-off between the compression ratio and distortion of a recently published compression technique specifically devised for multichannel electroencephalograph (EEG) signals. This paper extends a previous one in which the authors proved that, when singular value decomposition (SVD) is already performed for denoising or for removing unwanted artifacts, it is possible to exploit the same SVD for compression purposes, achieving a compression ratio in the order of 10 and a percentage root mean square distortion in the order of 0.01 %. In this article, the authors successfully demonstrate how, with a negligible increase in the computational cost of the algorithm, it is possible to further improve the compression ratio by about 10 % while maintaining the same distortion level or, alternatively, to improve the compression ratio by about 50 % while still keeping the distortion level below 0.1 %. In the paper entitled "A strategy to control industrial plants in the spirit of Industry 4.0 tested on a fluidic system", L. Fabiano et al. propose a strategy for automating the control of a wide spectrum of industrial process plants in the spirit of Industry 4.0.
The strategy is based on the creation of a virtual simulator of the operation of the plants involved in the process. Through the digitization of the operational data sheets of the various components, the simulator can provide the reference values of the process control parameters to be compared with their actual values, in order to decide on the direct inspection and/or the operational intervention on critical components before a possible failure. N. Covre et al., in "Monte Carlo-based 3D surface point cloud volume estimation by exploding local cubes faces", propose a state-of-the-art algorithm for estimating the 3D volume enclosed in a surface point cloud via a modified extension of the Monte Carlo integration approach. The algorithm consists of a pre-processing of the surface point cloud, a sequential generation of points managed by an affiliation criterion, and the final computation of the volume. The pre-processing phase allows a spatial re-orientation of the original point cloud, the evaluation of the homogeneity of its point distribution, and its enclosure inside a rectangular parallelepiped of known volume. The affiliation criterion using the explosion of cube faces is the core of the algorithm; it handles the sequential generation of points and provides an effective extension of the traditional Monte Carlo method by introducing its applicability to discrete domains. In the paper "3D shape measurement techniques for human body reconstruction", I. Xhimitiku et al. investigate and compare the performance of three different techniques for 3D scanning. In particular, two commercial tools (a smartphone camera and the iPad Pro LiDAR) and a structured-light scanner (Go!SCAN 50) have been used for the analysis. First, two different subjects were scanned with the three techniques and the obtained 3D models were analysed in order to evaluate the respective reconstruction accuracy.
A case study involving a child was then considered, with the main aim of providing useful information on the performance of scanning techniques for clinical applications, where boundary conditions are often challenging (e.g. a non-collaborative patient). Finally, a full procedure for the 3D reconstruction of a human shape is proposed, in order to set up a helpful workflow for clinical applications. High-resolution X-ray computed micro-tomography (CT) is a powerful technique for studying the processes of crack propagation in non-homogeneous quasi-brittle materials such as rocks. To obtain all the significant information about the deformation behaviour and fracture characteristics of the studied rocks, the use of a highly specialised loading device suitable for integration into existing tomographic setups is crucial. Since no adequate commercial solution is currently available, a completely newly designed loading device with a four-point bending setup and vertically oriented scanned samples is proposed and used in the paper "Study of fracture processes in sandstone subjected to four-point bending by means of 4D X-ray computed micro-tomography", authored by L. Vavro et al. This design of the loading procedure, coupled with the high stiffness of the loading frame, allows the loading process to be interrupted at any time and CT scanning to be performed without the risk of the sudden destruction of the scanned sample. M. S. Latha Gade et al., in the paper "A cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata", describe experimental and analytic approaches for measuring design metrics of reversible logic gates using quantum-dot cellular automata (QCA), such as ancilla inputs, garbage outputs, quantum cost, cell count, and area, while accounting for the effects of energy dissipation and circuit complexity. The parameters of reversible gates with modified structures are measured and then compared with existing designs.
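The defining property behind the reversible-gate metrics mentioned above is that a reversible gate maps its inputs to outputs bijectively, so no information is lost. As a minimal illustration (not related to the QCA implementation in the paper), the sketch below checks this property for the standard Fredkin (controlled-swap) gate by enumerating its truth table:

```python
from itertools import product

def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swaps a and b when the control c is 1."""
    return (c, b, a) if c else (c, a, b)

# A gate is reversible iff its truth table is a bijection:
# every distinct input triple must map to a distinct output triple.
table = {inp: fredkin(*inp) for inp in product((0, 1), repeat=3)}
is_bijective = len(set(table.values())) == len(table)
print("reversible:", is_bijective)
```

Metrics such as garbage outputs and ancilla inputs then count, respectively, the outputs that exist only to preserve this bijectivity and the constant inputs added to realise a given Boolean function.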
Human facial expressions are thought to be important in interpreting one's emotions. Emotion recognition plays a very important part in the more exact inspection of human feelings and interior thoughts. Over the last several years, emotion identification utilizing pictures, videos, or voice as input has been a popular research topic. Recently, most emotion recognition research has focused on the extraction of representative modality characteristics and the definition of dynamic interactions between multiple modalities. Deep learning methods have opened the way for the development of artificial intelligence products, and the suggested system employs a convolutional neural network (CNN) for identifying real-time human feelings. The aim of the research study proposed by K. Pranathi et al. in the paper "Video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization" is to create a real-time emotion detection application by utilizing an improved CNN. This research offers information on identifying emotions in videos using deep learning techniques. Kinetic gas molecule optimization is used to optimize the fine-tuning and weights of the CNN. In "Development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study", M. Sato et al. propose a noncontact operation system for radiographic consoles that uses a common eye tracker, facilitating noncontact operation of radiographic consoles for patients with COVID-19 to reduce the need for frequent disinfection. Experimental tests show that the proposal can be applied even if the operator uses a face shield. Thus, its application could be important in preventing the transmission of infections. J. Y. Blaise et al.
in "Acquisition and integration of spatial and acoustic features: a workflow tailored to small-scale heritage architecture" report on an interdisciplinary data acquisition and processing chain, the novelty of which primarily lies in a close integration of acoustic and spatial data. The paper provides a detailed description of the technological and methodological choices that were made in order to adapt to the particularities of the corpus studied (interiors of small-scale rural architectural artefacts). The research outputs pave the way for proportions-as-ratios analyses, as well as for the study of perceptual aspects from an acoustic point of view. Ultimately, "perceptual" acoustic data characterised by acoustic descriptors will be related to "objective" spatial data such as architectural metrics. Multiplication has a substantial impact on metrics like power dissipation, speed, size and power consumption. A modified approximate absolute unit is proposed by Y. Nagaratnam et al. in the paper "A modified truncation and rounding-based scalable approximate multiplier with minimum error measurement" to enhance the performance of an existing approximate multiplier. The proposed multiplier can be applied in image processing and shows an error of 0.01 %, while current solutions show a typical error of 0.40 %. Also in the field of approximate multipliers, in "Low-power and high-speed approximate multiplier using higher order compressors for measurement systems", M. V. S. Ram Prasad et al. propose an innovative architecture that, in the implementation of an FIR filter, achieves a delay of 27 ns versus the 119 ns of the exact multiplier taken as reference. Finally, the technical note authored by Franco Pavese is a comment on the paper published in this journal "Is our understanding of measurement evolving?", authored by Luca Mari.
This technical note concerns specific parts of that paper, namely the statements: "Doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge", "Measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component" and "What sufficient conditions characterise measurement as a specific kind of property evaluation?", and discusses alternatives. Also in this issue, high-quality and heterogeneous papers are presented, confirming Acta IMEKO as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. In particular, the technical note shows how Acta IMEKO is the right place where different opinions and points of view can meet and compare, stimulating a fruitful and constructive debate in the scientific community of measurement science. I hope you will enjoy your reading.

Francesco Lamonaca
Editor-in-Chief

Challenges and Perspectives of Regional Cooperation within COOMET – the Euro-Asian Regional Metrology Organization

Acta IMEKO, December 2013, Volume 2, Number 2, 91 – 95, www.imeko.org

Pavel Neyezhmakov1, Klaus-Dieter Sommer2
1 National Scientific Centre "Institute of Metrology", Mironositskaya 42, 61002 Kharkov, Ukraine
2 Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig, Germany

Section: Technical Note
Keywords: Metre Convention; RMO; JCRB; CIPM MRA; quality management system
Citation: Pavel Neyezhmakov, Klaus-Dieter Sommer, Challenges and perspectives of regional cooperation within COOMET – the Euro-Asian regional metrology organization, Acta IMEKO, vol. 2, no.
2, article 16, December 2013, identifier: IMEKO-ACTA-02 (2013)-02-16
Editor: Paolo Carbone, University of Perugia
Received April 17th, 2013; in final form December 14th, 2013; published December 2013
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Pavel Neyezhmakov, e-mail: pavel.neyezhmakov@rambler.ru

1. Introduction

COOMET (abridged from "Cooperation in Metrology") is a regional metrology organization (RMO) [1] that establishes cooperation between the national metrology institutions of the countries of Central and Eastern Europe. It was founded in June 1991 and renamed the Euro-Asian Cooperation of National Metrology Institutions in 2000. COOMET is open to any metrology institution from other regions; such institutions can join as associate members. The basic activity of COOMET is cooperation in the following areas: measurement standards of physical quantities, legal metrology, quality management systems (QMS), and information and training. Participation in COOMET activities gives the member countries an opportunity to solve metrological tasks more efficiently on the basis of approved rules and procedures. The current 15 members of COOMET are the metrology institutions of Armenia, Azerbaijan, Belarus, Bulgaria, Georgia, Kazakhstan, Kyrgyzstan, Lithuania, Moldova, Russia, Romania, Slovakia, Tajikistan, Ukraine and Uzbekistan. In addition, there are 3 associate members: Germany, the DPR of Korea, and Cuba.
Abstract: The creation of a metrological infrastructure providing traceable measurement results is one of the major tasks of the COOMET member countries from Central Asia and the Caucasus region. COOMET successfully cooperates in developing the basic metrology infrastructures of these countries. In accordance with its strategic aims, COOMET also supports metrological knowledge transfer and the development of technical competence for innovation and scientific research. For the purpose of implementing joint research projects, TC 5 was established. Cooperation within TC 5 aims at activating the NMIs of COOMET member countries in the global integration process in science, technology and high-end manufacturing.

The objectives of COOMET are the following:
- provide assistance in effectively addressing any problems relating to the uniformity of measures, the uniformity of measurements and the required accuracy of their results;
- provide assistance in promoting cooperation between national economies and eliminating technical barriers to international trade;
- harmonize the activities of the metrology services of the Euro-Asian countries with similar activities in other regions.

These objectives are accomplished by cooperation between interested COOMET member countries with regard to supporting activities related to the accreditation of national metrology institutes (NMIs), as well as calibration and measurement laboratories. At today's stage of progress, the tasks of COOMET are aimed at strengthening the links between the NMIs in order to solve common problems and to create effective mechanisms that will meet the following objectives:
- achieve compatibility of measurement standards and harmonize the requirements imposed on measuring instruments and the methods for their metrological control;
- recognize the equivalence of national certificates authenticating the results of metrology activities;
- exchange information on the current status of national metrology services and their development;
- collaborate in developing metrology projects; and
- promote the exchange of metrology services.

A considerable advance in the formation of the current aims and tasks of the organization occurred as a result of several COOMET Committee meetings. At these meetings, decisions were made to increase the effectiveness of COOMET activities. This meant that the planning of cooperation, the reporting of activities, the interaction with other international and regional metrology organizations, etc. should be improved. According to the first COOMET development program, for 2001-2002, the organizational structure of COOMET was developed and approved, according to which structural and working bodies in all fields of cooperation covered by the MoU were created (see Figure 1). This structure provided for the wide involvement of qualified specialists of the NMIs of member countries in COOMET activities. Transformations directed toward the improvement of the organization's activities were reflected in the MoU and the Rules of Procedure of COOMET, developed and fixed in the appropriate documents. In 2005, the MoU and Rules of Procedure were amended with respect to the election of the COOMET President, according to which, in the year before the end of the acting President's term, the future President is elected and authorized for the next 3 years. One more important branch of COOMET activity is the development and adoption of the Conception of COOMET in 2005.
The Conception of Cooperation and Activity of COOMET determines the strategic tasks from a medium-term and long-term perspective and provides for their implementation. Among these tasks are the following:
- strengthening the innovation and technology components of member countries in the global system of economic and societal development;
- competent and economically effective participation of COOMET member countries in global integration processes in the fields of science, technology, science-intensive production and the economy in general;
- increasing the level of competitiveness of member countries in the fields of science and technology through participation in the world market of intellectual products, science-intensive products and services.

2. Implementation of the CIPM MRA

The implementation of the CIPM MRA [2] is directed at the fulfilment of the following tasks:
- organization and holding of regional comparisons of measurement standards of COOMET member countries in order to assure the traceability of measurement standards to reference values of the SI units;
- determination of the degree of equivalence of national measurement standards;
- regional and interregional reviews of the calibration and measurement capabilities (CMCs) and their publication in the BIPM Key Comparison Database (KCDB); and
- evaluation of the quality management systems (QMS) of NMIs through external review of each QMS.

Figure 1. Current organizational structure of COOMET.

Today, NMIs from 12 COOMET member countries are participants in the CIPM MRA. Six countries among them are members of the International Bureau of Weights and Measures (BIPM) and six are Associates of the General Conference on Weights and Measures (CGPM). With the aim of providing support for the NMIs in COOMET for the CIPM MRA purposes, several recommendations have been successfully developed and approved.
According to CIPM MRA-D-04, "Calibration and measurement capabilities in the context of the CIPM MRA" [3], the Joint Committee of the Regional Metrology Organizations and the BIPM (JCRB) requires that CMC data presented for publication in Annex C be completely supported by an implemented quality system, controlled and approved by the local RMO, and that the range and uncertainty of the CMCs not contradict the information obtained from the results of key and supplementary comparisons. For the implementation of this JCRB requirement, COOMET has instituted a programme of comparisons (document COOMET D9). The Joint Committee for Measurement Standards (JCMS) annually updates the programme of comparisons and presents the results at the President's Council meeting for approval by the COOMET President. According to CIPM MRA-D-04, prior to being submitted for the inter-regional review, the CMCs should be reviewed and approved by the RMO. COOMET has established a process for the intra-RMO review. This process follows CIPM MRA-D-04 and assures that the CMCs submitted for the inter-regional review have sufficient technical support. The technical committees form groups of technical experts for the review of the comparison results and CMC data. Starting in 2002, the COOMET Quality Forum has directed the realization of the CIPM MRA. The evaluation of the quality management systems of COOMET NMIs is carried out by means of peer reviews conducted by COOMET auditors and technical experts once every 5 years, coordinated by the Technical Committee of the COOMET Quality Forum (TC QF). The peer reviews are carried out in accordance with the RMO COOMET recommendations. The TC QF monitors the quality management systems of the NMIs of member countries on the basis of a cross analysis of annual reports sent to the Secretariat of the TC QF. Currently, 22 NMIs from 18 COOMET member countries are cooperating within the Quality Forum.
3. Involving New Parties in the CIPM MRA

Providing for traceability in the COOMET region has required great attention and support for the countries of Central Asia and the Caucasus region in developing their national metrology infrastructures, harmonizing with international requirements, improving their national standards, and preparing their NMIs for signing the CIPM MRA. The countries on the way to signing are Azerbaijan, Armenia, Kyrgyzstan, Tajikistan and Uzbekistan. Aiming for stable development and cooperation, in the near future COOMET plans to provide for the preparation of these countries to sign and implement the CIPM MRA, for the active participation of their national measurement standards in international comparisons, for the creation of quality systems for their NMIs, and for implementing the requirements of ISO/IEC 17025. In order to support these countries, the subcommittee "Support in developing basic metrological infrastructures of COOMET member countries" was established in 2008 within COOMET TC 4 "Information and Training". This SC has to solve the following tasks in the countries of the region: assistance in the preparation for signing the CIPM MRA; preparation of staff for QMS application according to ISO/IEC 17025; assistance in conducting comparisons and preparing CMCs; training for national metrology staff; and the organization of training workshops. With the financial support provided within the PTB-COOMET technical cooperation project titled "Support of cooperation between member countries of the regional metrology organization COOMET", several workshops were organized in 2008 – 2012 by COOMET TC 4 for directors and experts of the NMIs. For example, a workshop for COOMET NMIs' internal auditors (see Figure 2) according to ISO/IEC 17025 was held on 7–8 November 2012 at NISM (Chisinau, Republic of Moldova).
Workshop participants: 36 representatives from 12 countries (Azerbaijan, Armenia, Belarus, Germany, Georgia, Kazakhstan, Lithuania, Moldova, Russia, Slovakia, Tajikistan, Ukraine). The workshop consisted of theoretical and practical parts. The following items were presented and discussed:
- the basics of QMS in an NMI;
- COOMET regulations and procedures on the assessment of the QMS of an NMI;
- the requirements according to ISO/IEC 17025.

A "model" internal audit of the NISM laboratories was carried out. After the "model" internal audit, the participants were tested and received certificates. Considering the great importance of the CIPM MRA in broadening economic, scientific, technical, and international cooperation, as well as in eliminating technical barriers to trade, the activity performed in this field will surely result in the signing of the Metre Convention by the governments of the above-mentioned countries, or in their acquiring the status of an Associate of the General Conference on Weights and Measures (CGPM), in order to participate in the implementation of the CIPM MRA in the near future.

Figure 2. Participants of the workshop in Moldova.

4. Cooperating in the Field of Legal Metrology

The COOMET MoU contains a number of regulations which are absent in the by-law documents of other regional organizations. For example, COOMET activities concern not only scientific but also legal metrology. Legal metrology is one of the well-developed areas of cooperation in COOMET member countries. At the 20th COOMET Committee meeting in 2010 [4], a new structure of TC 2 for legal metrology was approved.
The new structure consists of the following subcommittees:
- SC 2.1 Harmonization of regulations and norms
- SC 2.2 Technologies of measuring devices and systems in legal metrology
- SC 2.3 Competence assessment of bodies in legal metrology
- SC 2.4 Legal metrological control (LMC)

The tasks of these SCs are the following:
- SC 2.1: acceptance or adaptation of approved international or regional documents (e.g. the VIML, OIML documents, WELMEC guides, EC recommendations);
- SC 2.2: development of test procedures for measuring instruments (MI), including software and measuring systems, but also data transfer and other future technologies;
- SC 2.3: development of criteria for the assessment of verification laboratories and other parties;
- SC 2.4: establishment of projects for the elements of LMC (surveillance QM, market surveillance, field surveillance).

The main purpose of the changes in the structure of TC 2 was to improve the efficiency and optimization of the legal metrology activities based on the experience of the International Organization of Legal Metrology (OIML) and other regional legal metrology organizations. The new structure of TC 2 allows for further expansion of cooperation in the field of legal metrology. The discussion on the new content of work within TC 2 shows that country-specific interests should be considered. All TC 2 member states have the same aim, which is to build and enforce an operational, effective system for legal metrology. The new subcommittee structure is open for different realizations and for future changes in the methods of legal metrology. One example is the current low interest of several states in market surveillance, while others use this system more than the preventive verification system. So the question for all is how mutual acceptance can be created in the case of free trade of products, i.e. prepackaged goods or measuring instruments. In this context, the growing importance of conformity assessment was noted.
5. Joint Research in Metrology

As a result of many years of discussion on the possibility of realizing joint research projects and their sources of funding, a technical committee for joint research in metrology, TC 5, was established in 2009. The tasks of TC 5 for the near future are:
- to identify common research areas;
- to determine priority fields in research and development;
- to determine the efficiency of projects for the economies of COOMET member countries; and
- to identify those interested groups that benefit the most from the implementation of the projects.

It should be noted that there are a number of research projects currently implemented within COOMET: Earth rotation period (ERP) determination on the basis of data from observatories of COOMET countries; metrology of nanotechnology; standardization of Eu-152 radionuclide solution; etc. For example, within the ERP COOMET project, in 2010 the observatories in Russia, Ukraine, Uzbekistan, Bulgaria, Poland, and the Czech Republic made routine star and satellite observations and then transmitted the observation data to the ERP processing and calculating centre at VNIIFTRI. An exchange of ERP observation data and calculation results was made between the participating countries and the international and national centres for ERP determination. The calculations of the pole coordinates and the duration of the day from the results of GPS observations at the stations on the territory of Russia were made on a regular basis. The accuracy of ERP determination by means of all the techniques of the participating countries was about 0.0002″ and 0.02 ms with regard to the pole coordinates and to universal time, respectively. These values closely approach the accuracy of the products of the International Earth Rotation Service (IERS). However, the number of these projects is rather small.
therefore, for the realization of significant tc 5 tasks, the following project was initiated: coomet 492/de/10 "development of a concept for joint metrology research in coomet member countries". within this project the working group (wg) prepared questionnaires for conducting the following:

- a state-of-the-art analysis;
- an assessment of the research needs of participating countries;
- the structuring and prioritizing of potential research projects;
- the identification of research subjects of common interest (e.g., metrology for energy, for the environment, for health, security and safety, etc. – grand challenges in the field of metrology);
- an estimation of the expected social and economic impact of the research and development outcome.

the meeting of the wg was held in july 2012 at nsc "institute of metrology", ukraine. at this meeting, questions connected with the need for joint research in coomet were considered, and the common tasks and scope of the european-asian metrology research venture (eamrv) were discussed, as well as the aspired social and economic impact, possible styles of cooperation, and procedures for determining and applying the eamrv. further, three questionnaires were sent to the representatives of coomet member countries and their results were evaluated by the wg:

- questionnaire no. 1, "nmi's current research-and-development resources, capabilities, capacities and international cooperation";
- questionnaire no. 2, "aspired capacity, capability and quality of the national calibration, measurement and testing infrastructure";
- questionnaire no. 3, "metrology for the future and demands on joint metrological research".
in order to realize this project, coomet proceeds from world achievements in the field of metrology related to science, industry and the economies of all cooperating countries and, at the same time, establishes a self-contained metrology research strategy for the european-asian transition area, including the central asian and caucasian regions, tailored to the particular economic needs of its member countries. the development and implementation of joint projects in coomet will contribute to basic science and technology, as well as stimulate innovations to solve metrology problems in coomet member countries.

editorial to selected papers from the tc17 events "international symposium on measurements and control in robotics" (ismcr2021) and vrise2021 topical event on robotics for risky interventions and environmental surveillance

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-2

zafar taqvi1
1 research fellow, university of houston clear lake, houston, texas, usa

section: editorial

citation: zafar taqvi, editorial to selected papers from the tc17 events "international symposium on measurements and control in robotics" (ismcr2021) and vrise2021 topical event on robotics for risky interventions and environmental surveillance, acta imeko, vol. 11, no.
3, article 2, september 2022, identifier: imeko-acta-11 (2022)-03-02

received september 13, 2022; in final form september 13, 2022; published september 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: zafar taqvi, e-mail: ztaqvi@gmail.com

dear readers, this special issue includes selected papers from two events organized by tc-17, the imeko technical committee on robotic measurement. annually, tc17 organizes the "international symposium on measurements and control in robotics" (ismcr), a full-fledged event focusing on various aspects of international research, applications, and trends of robotic innovations for the benefit of humanity, advanced human-robot systems, and applied technologies, e.g. in the allied fields of telerobotics, telexistence, simulation platforms and environments, and mobile work machines, as well as virtual reality (vr), augmented reality (ar) and 3d modeling and simulation. during imeko congress years, tc17 organizes only "topical events". in 2021, tc17 organized two virtual topical events, both held under covid-19 restrictions. ismcr2021 had the theme "virtual media technologies for the post-covid-19 era", and the other, tc17-vrise, was a jointly organized event with the theme "robotics for risky interventions and environmental surveillance". vrise stands for virtual robotics for risky interventions and environmental surveillance, the same as the theme. the papers in this special issue segment were selected from the above two events. this special issue covers a variety of topics relating to augmented reality/virtual reality (ar/vr), tools impacted by covid-19, and 3d printing as they relate to robotics, including key applications of robotics technology.
one ar/vr paper, by karen alexander and jennifer rogers, entitled "standards and affordances of 21st-century digital learning: using arlem and xapi to track bodily engagement and learning in xr (vr, ar, mr)", describes digital learning using new tools. the other paper, entitled "arel - augmented reality-based enriched learning experience", by a. v. geetha and t. mala, shows the usage of ar in the learning process. covid-19 has impacted not only individual researchers but also experiments and their underlying tools and methodologies. zuzana kovarikova, frantisek duchon, andrej babinec and dusan labat, in their paper entitled "digital tools in the post covid-19 age as a part of robotic system for adaptive joining of objects", describe the development of new tools, while ahmed alseraidi, yukiko iwasaki, joi oh, takumi handa, vitvasin vimolmongkolpom, fumihiro kato and hiroyasu iwata, in their paper "experiment assisting system with local augmented body (easy-lab) for the post-covid19 era", present other covid-related work. 3d printers have made equipment component inventories and procurement issues less problematic. two papers, one entitled "twisted and coiled polymer muscle actuated soft 3d printed robotic hand with peltier cooler for drug delivery in medical management" by pawandeep singh matharu et al., and another entitled "igrab duo: novel 3d printed soft orthotic hand triggered by emg signals" by irfan zobayed et al., discuss research work on 3d-printing activities that relate to robotic components and applications.
the paper "jelly-z: twisted and coiled polymer fishing line muscle actuated mini-jellyfish robot for environment surveillance and monitoring" by pawandeep singh matharu et al., the paper "disarmadillo: an open source remotely controlled platform for humanitarian demining" by emanuela cepolina, alberto parmiggiani, carlo canali and ferdinando cannella, and a third paper, "path planning for data collection robots" by sara olasz-szabo and istvan hermati, present further robotics application research. these symposia are forums for the exchange of recent research results and provide futuristic ideas in robotics technologies and applications. they interest a wide range of participants from government agencies, relevant international institutions, universities and research organizations working with futuristic applications of automated vehicles. the presentations are also of interest to the media as well as the general public. we are sure readers will find these papers useful in their professional applications.

dr. zafar taqvi
special issue editor

introductory notes for the acta imeko second issue 2021 general track

acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, 4-5

francesco lamonaca1
1 università della calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial

citation: francesco lamonaca, introductory notes for the acta imeko second issue 2021 general track, acta imeko, vol. 10, no.
2, article 2, june 2021, identifier: imeko-acta-10 (2021)-02-02

received may 25, 2021; in final form may 25, 2021; published june 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: francesco lamonaca, e-mail: f.lamonaca@dimes.unical.it

1. introductory notes for the acta imeko general track

this issue includes a general track aimed at collecting contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give readers an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko. elena fitkov-norris et al., in 'are learning preferences really a myth? exploring the mapping between study approaches and mode of learning preferences', present an interesting study on the presence of the conversion effect in the mapping between the strength of students' preferences for receiving information in a visual, auditory, reading/writing, or kinaesthetic modality and the study approaches they adopt when taking notes in class, learning new concepts, and revising for exams. this paper opens up new measurement frontiers, as it stimulates research on the definition of new measurement methods and instruments for assessing and describing the approach taken by students to their studies. in 'a colour-based image segmentation method for the measurement of masticatory performance in older adults', lorenzo scalise et al. present a measurement method based on the automatic segmentation of two-coloured chewing gum and colour features using the k-means clustering algorithm.
the proposed solution aims to quantify the mixed and unmixed areas of colour, separated from any background colour, in order to evaluate masticatory performance among older people with different dental conditions. this innovative measurement method can improve quality of life, especially for the elderly. using the example of measurements of ion activity, oleksandr vasilevskyi, in 'assessing the level of confidence for expressing extended uncertainty: a model based on control errors in the measurement of ion activity', proposes a method for estimating the level of confidence when determining the coverage factor based on control errors. based on information on tolerances and uncertainty, it is possible to establish a reasonable interval around the measurement result within which lie most of the values that can reasonably be attributed to the measured quantity. a novel design that changes the accelerometer mounting support of a commercial pneumatic shock exciter is described in 'investigating the transverse motion of a pneumatic shock exciter using two different anvil mounting configurations' by christiaan s. veldman. the aim is to reduce the transverse motion to which the accelerometer is subjected during shock excitation. the author describes the mounting support supplied by the manufacturer, the design changes made, and the measurement data used to compare the transverse motions recorded with the two different mounting designs. roberto de fazio et al., in 'sensor-based mobile robot for harsh environments: functionalities, energy consumption analysis and characterisation', illustrate the design of a semi-custom wheeled mobile robot with an integrated high-efficiency mono- or polycrystalline photovoltaic panel on the roof that supports the lithium-ion batteries during specific tasks (e.g. navigating rough terrain, obstacles, or steep paths) in order to extend the robot's autonomy.
a new e-textile-based system for the remote monitoring of biomedical signals, named sweet shirt, is presented by armando coccia et al. in their paper 'design and validation of an e-textile-based wearable system for remote health monitoring'. the system includes a textile sensing shirt, an electronic unit for data transmission, a custom-made android application for real-time signal visualisation, and desktop software for advanced digital signal processing. the device allows the acquisition of electrocardiographic, biceps electromyographic, and trunk acceleration signals. the study's results show that the information contained in the signals recorded by the novel system is comparable with that obtained by a standard medical device used in a clinical environment. valery mazin, in 'measurements and geometry', demonstrates the points of contact between measurements and geometry by modelling the main elements of the measurement process with the elements of geometry. it is shown that the basic equation of measurements can be established based on the expression of a projective metric, of which it represents a particular case. commonly occurring groups of functional transformations of the measured value are listed. in 'towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications', stanislao grazioso et al. report a preliminary case study of a cpms, namely an innovative bioinspired robotic platform that can be used for measurement and monitoring applications in confined and constrained environments. the innovative system is a 'soft growing' robot that can access a remote site through controlled lengthening and steering of its body via a pneumatic actuation mechanism.
the system can be endowed with different sensors at the tip or along its body to enable remote measurement and monitoring tasks; as a result, the robot can be employed to effectively deploy sensors in remote locations. the heterogeneous topics of the papers submitted to the general track confirm that acta imeko is the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields, united by their common interest in measurement science and technology.

francesco lamonaca
editor-in-chief

introductory notes for the acta imeko special issue on the 17th imeko technical committee 10 conference "global trends in testing, diagnostics & inspection for 2030" (2nd conference jointly organised by imeko and eurolab aisbl)

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 3-4

piotr bilski1, lorenzo ciani2
1 warsaw university of technology, ul. nowowiejska 15/19, 00-665 warsaw, poland
2 università degli studi di firenze, p.zza s. marco, 4, 50121 firenze, italy

section: editorial

citation: piotr bilski, lorenzo ciani, introductory notes for the acta imeko special issue on the 17th imeko technical committee 10 conference "global trends in testing, diagnostics & inspection for 2030" (2nd conference jointly organized by imeko and eurolab aisbl), acta imeko, vol. 10, no.
3, article 2, september 2021, identifier: imeko-acta-10 (2021)-03-02

section editor: francesco lamonaca, university of calabria, italy

received september 1, 2021; in final form september 27, 2021; published september 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: piotr bilski, e-mail: pbilski@ire.pw.edu.pl

dear readers, the area of technical diagnostics is one of the most significant research fields, one that involves a broad range of measurements, with various sensors, actuators and advanced computing techniques exploited. as we become increasingly surrounded by a growing amount of autonomous machinery, it is crucial to ensure that regular and accurate monitoring is carried out. technical committee 10 (measurement for diagnostics, optimization & control) is responsible for fostering research on such topics, which is expressed in a wide range of activities, including the organisation of annual conferences and workshops aimed at the problems pertaining to fault detection, identification and location. the solutions for these problems include an array of algorithms, measurement tools and procedures applicable to individual devices and industrial processes. the 17th imeko tc10 conference, held in 2020, is a perfect example of this approach. the conference was special for two reasons, the first of which relates to the covid-19 pandemic, which forced us to switch the location, originally planned for dubrovnik, croatia, to the purely virtual world. the event was then held online, using various internet technologies.
this was entirely new to all involved and forced us to use new channels of communication and information-sharing techniques (e.g., cloud services and teleconferences) on an unprecedented scale. despite these challenges, the online event was met with great enthusiasm by both the participants and the invited guests. the second unique feature of the 2020 conference was that it was the second event co-organised by imeko and eurolab (croatian branch). this broadened the scope of the event to include new specific topics largely related to eurolab's interests. the main drawback of the conference was the lost opportunity to visit the beautiful city of dubrovnik; however, we hope to be able to organise the conference there in the more traditional way in the future. the theme of the conference, 'global trends in testing, diagnostics & inspection for 2030', was supported by a wide range of topics covered both by the submitted papers and by the invited speakers. the problems covered by the speakers included industrial standards (e.g., iso 3452 or iso/iec 17025), computational methods (reinforcement learning, artificial neural networks, etc.), applications (civil engineering and the food industry), advanced measurement equipment (optical sensors and mems solutions), and significant measurement challenges (e.g., uncertainty evaluation). it would appear that the constant advancement in electronics and computer technologies has enabled an increasing number of advanced concepts to be realised. the corresponding special issue covers ten papers, which can be divided into three sections, each devoted to a different topic. the selection allows for assessing the advancements in these research and engineering fields. the section aimed at presenting and solving the general problems and challenges pertaining to technical diagnostics includes three papers.
the first, 'fault compensation effect in fault detection and isolation', written by michał bartyś, considers the fault compensation effect in fault detection and isolation. this is an important issue in the area of model-based process diagnostics. here, the author discusses the application of the process graph to accurately represent the monitored phenomenon, allowing for fault detection based on the residuals and diagnostic matrix analysis. the concept is illustrated with examples of a liquid tank and a closed-loop liquid level control system. meanwhile, in their paper 'estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models', marcantonio catelani et al. cover the problem of the design and application of failure models. here, the methodology, exploiting the model-based diagnostic approach, is demonstrated on an hvac system (using both simulations and actual data provided by the manufacturer), with the results demonstrating the capacity of the approach to correctly evaluate the useful life of the monitored object. finally, in their paper 'integrating maintenance strategies in autonomous production control using a cost-based model', robert glawar et al. present a novel cost-based model approach for manufacturing-process monitoring. here, various maintenance strategies for autonomous production control are presented, preceded by the definition of a cost function for comparing their efficiency, while possible application strategies are also discussed. the section related to novel measurement and sensing methods for diagnostics includes five papers. the first, 'overview of the modified magnetoelastic method applicability', by tomáš klier et al., is devoted to a specific type of sensor exploiting the magnetoelastic method.
this method is used in the field of civil engineering to evaluate the state of buildings or construction operations. an important part of this approach is the coil, whereby the strength of the magnetic field can be evaluated. both the laboratory and field tests (on a bridge structure) demonstrated the applicability of the method. the paper entitled 'bringing optical metrology to testing and inspection activities in civil engineering', by luís martins et al., is focused on optical metrology in the civil engineering field. here, dimensional measurements of concrete structures are supported by specific digital imaging techniques (e.g., cctv cameras or laser interferometry). various applications (e.g., bridge monitoring, sewer inspection, earthquake risk analysis) indicate the importance of this type of sensing technology. elsewhere, in their paper entitled 'vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies', zsolt j. viharos et al. discuss a novel method of monitoring the state of ceramics-cutting machinery using vibration analysis. here, both time- and frequency-domain features are exploited in order to evaluate and predict the micro-cutting tool wear-out phenomenon, with the analysis of the selected cnc machine allowing for the determination of three stages of device degradation. in their paper 'magnetic circuit optimization of linear dynamic actuators', laszlo kazup and angela varadine szarka present the linear braking method used in actuators. here, the method incorporates the design of the magnetic circuit, with the authors presenting the detailed design and the optimisation procedure for the circuit parameters. the attendant simulations demonstrated that it is possible to optimise the flux leakage and the dimensions of the circuit. finally, in their paper entitled 'analysing and simulating electronic devices as antennas', dániel erdősy et al.
discuss the electromagnetic compatibility (emc) properties relating to the operation of a complex antenna system. here, the equivalent circuit and the antenna directivity are calculated using simulation tools, while antenna arrays and the problems pertaining to emc emission are also considered. meanwhile, the section focused on computational methods and algorithms in the field of diagnostics includes two papers. the first, 'the improved automatic control points computation for the acoustic noise level audits', by tomáš drábek and jan holub, presents a post-processing method for acoustic noise evaluation to estimate the comfort level of specific human living conditions. here, the control-point localisation method is used to optimise indoor noise measurement, with the attendant algorithm used to identify both long-term stationary and short-term recurring noise. overall, the authors demonstrate how to select the control points, outline various spatial conditions, and compare different layouts in terms of level-evaluation accuracy and computation time. the second paper, 'artificial neural network-based detection of gas hydrate formation', written by ildikó bölkény, discusses the application of an artificial neural network to detect and prevent gas hydrate formation in the area of industrial-process diagnostics. here, two network architectures (nnarx and nnoe) are used as regression machines, with the approach tested on actual test equipment for the hydrate-forming process. the results allowed for comparing the implemented network architectures in terms of prediction accuracy. first of all, we wish to thank all the esteemed authors for delivering interesting, top-quality papers and all the reviewers who devoted a great deal of time and effort to reading and evaluating the manuscripts. this undoubtedly allowed for preparing the high-quality content of this acta imeko issue.
secondly, our gratitude goes to prof. francesco lamonaca, the current editor-in-chief, for his devotion and support during the handling of the papers. we are extremely grateful for the chance to play the role of guest editors and hope that the papers will prove to be interesting and useful for both research activities and practical applications.

piotr bilski and lorenzo ciani
guest editors

introductory notes for the acta imeko special issue on the 40th measurement day jointly organised by the italian associations gmee and gmmt

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 8-9

carlo carobbi1, nicola giaquinto2, gian marco revel3
1 department of information engineering, università degli studi di firenze, via s. marta 3, 50139 firenze, italy
2 department of electrical and information engineering, politecnico di bari, via e. orabona 4, 70125 bari, italy
3 department of industrial engineering and mathematical sciences, università politecnica delle marche, via brecce bianche 12, 60131 ancona, italy

section: editorial

citation: carlo carobbi, nicola giaquinto, gian marco revel, introductory notes for the acta imeko special issue on the 40th measurement day jointly organised by the italian associations gmee and gmmt, acta imeko, vol. 10, no. 4, article 5, december 2021, identifier: imeko-acta-10 (2021)-04-05

received december 13, 2021; in final form december 13, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: carlo carobbi, e-mail: carlo.carobbi@unifi.it

dear readers, the measurement day is an annual italian event, founded in 1982 by the late prof. mariano cunietti. with inspired intuition, cunietti conceived the event as a place where engineers, scientists, logicians and epistemologists could talk to each other, investigating the very foundations of the act of measuring. at that time the event lasted two days and was a beloved appointment for many italians passionate about measurements, including scholars, professionals and amateurs of any cultural background. in its long history, the format of the measurement day has evolved, adapting to the times, but it has always remained a much-appreciated annual appointment for people interested in measurement. nowadays, it involves the public discussion of invited presentations for half a day, and it is mainly centred on advances and updates in measurement technology and standards, including the activities of international metrological bodies, contributions from calibration laboratories, discussions on historical perspectives and modern trends in the measurement world, etc. we can affirm that the measurement day has kept intact its vocation of bringing together people from different backgrounds, i.e. academia, industry, laboratories, accreditation bodies, etc., and is still an event much appreciated by 'measurers'. even though it is a national event, the measurement day involves high-level presentations by internationally recognized experts. therefore, on its fortieth anniversary, we welcomed with pleasure the invitation of prof. lamonaca to devote a special section of acta imeko to the event. the 40th edition of the measurement day, organized in 2021 by the italian associations of electrical and electronic measurements (gmee) and of mechanical and thermal measurements (gmmt), was titled 'comfort measurements between research, foundations, and industrial applications'.
invited contributions came from both young and experienced researchers, in fields ranging from standardized measurements to psychology. below, we explain the rationale behind the chosen theme for the 40th measurement day and give a brief account of the papers included in the special section. we chose the theme of the event starting from the consideration that the environment affects the well-being, health and emotional state of people, and it is therefore of great importance – even more so in times of pandemic – to guarantee the quality of the indoor life experience. for this purpose, it is essential to measure comfort accurately. but what exactly does it mean to measure comfort accurately? the question has no univocal answer, because we are not dealing with a physical quantity that can be defined regardless of the subject. a key recent trend is to integrate the measurement of actual physiological data, for example with wearable sensory devices, with environmental measurements. the standard predictive models need to be updated, as do the data-analysis methodologies. artificial intelligence has been successfully used in this area. comfort measurements do not only involve an update of hardware and software technologies: they require attention to psychological aspects. how is the problem of measuring comfort (and, in general, the problem of measurement) viewed in psychology and the humanities? how is the problem framed from a psychological perspective? the bipm has very recently circulated the 'committee draft' of the new international vocabulary of metrology, the vim4.
one of the purposes of the vim4 is precisely to bring order to the measurement of 'things' fundamentally different from the familiar physical quantities encoded in the international system, for example by introducing the definitions of measurement scale, ordinal scale and nominal scale. on what scale do comfort measurements sit? how do they differ from measurements of physical quantities, and what do these differences entail? the study of comfort measurement thus offers the opportunity to present some of the main innovations of the new vocabulary. as regards the papers of this acta imeko special section, there are five manuscripts that expand on the scientific concepts and applications presented during the event, held online on 30 march 2021. in the manuscript 'is our understanding of measurement evolving?', authored by mari, an analysis is proposed from an evolutionary perspective, trying to answer some fundamental questions, such as: what kind of knowledge do we obtain from a measurement? what is the source of the acknowledged special efficacy of measurement? the measurement process is traditionally understood as a quantitative empirical process, but in recent decades measurement has been reconsidered in its aims, scope, and structure. comfort measurements are an application showing the importance and relevance of a re-examination of measurement fundamentals, as suggested in this paper. the paper 'application of wearable eeg sensors for indoor thermal comfort measurements', by mansi et al., presents a measurement protocol and signal-processing approach for using wearable eeg (electroencephalography) sensors for human thermal comfort assessment.
Results reported from the experimental campaign confirm that thermal sensation can be detected by measuring brain activity with low-cost, wearable EEG devices. The paper entitled ‘Impact of the measurement uncertainty on the monitoring of thermal comfort through AI predictive algorithms’, authored by Morresi et al., proposes an approach to assess uncertainty in the measurement of human thermal comfort by using an innovative method that exploits a heterogeneous set of data, made up of physiological and environmental quantities, and artificial intelligence (AI) algorithms. Uncertainty estimation has been carried out by applying the Monte Carlo method (MCM), given the complexity of the measurement method. The paper entitled ‘An IoT measurement solution for continuous indoor environmental quality monitoring for buildings renovation’, by Serroni et al., proposes an innovative IoT sensing solution, the “Comfort Eye”, specifically applied to continuous and real-time indoor environmental quality (IEQ) measurement in occupied buildings during the renovation process. IEQ monitoring makes it possible to investigate the building’s performance in order to improve energy efficiency and occupants’ well-being at the same time. Finally, considering the correlation between stress and discomfort, in the article ‘Continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach’ Cipresso and colleagues repeatedly measured physiological signals in 15 subjects in order to derive a model of stress level in daily scenarios. Using the experience sampling method, they were able to collect the stressful moments reported by the 15 subjects through questionnaires.
Using a machine learning approach, the authors built a model to predict stressful situations based on physiological indexes that rely on cardiovascular measurements, i.e. based only on the electrocardiogram or on similar measures such as the blood volume pulse extracted with a photoplethysmograph.

Carlo Carobbi, Nicola Giaquinto and Gian Marco Revel, Guest Editors

Laser-Doppler-vibrometer calibration by laser stimulation, ACTA IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, 357-360

Laser-Doppler-vibrometer calibration by laser stimulation
H. Volkers 1 and Th. Bruns 2
1 Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany, henrik.volkers@ptb.de
2 Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany, thomas.bruns@ptb.de

Abstract – A new set-up for primary laser vibrometer calibration was developed and tested at the acceleration laboratory of PTB. Contrary to existing set-ups, this configuration makes use of electro-optical excitation. While avoiding the limitations imposed by mechanical motion generators in classic set-ups, the new method still encompasses all components of commercial laser vibrometers in the calibration and thus goes beyond the current capabilities of purely electrical excitation schemes.

Keywords: primary calibration, laser Doppler vibrometer, LDV calibration

1 Introduction
Laser Doppler vibrometers (LDV) are great tools for all kinds of vibration measurements, especially for use as a primary reference for the calibration of accelerometers as described in the standards [1, 2].

Figure 1: Schematic signal chain of a laser Doppler vibrometer (interferometer IF with acousto-optical modulator at frequency f_AOM, photodetector PD with raw output u_pd_raw(t), and demodulator/controller with output u_x,v,a(t))

Figure 1 shows a schematic diagram of the signal flow in an LDV.
At the measurement point, the motion quantity x(t) is measured via a laser beam that passes through an interferometer (IF) including an acousto-optical modulator (AOM) and finally illuminates a photodetector (PD) with an intensity modulated by interference according to

i(t) = i_0 · sin(2π f_AOM · t + 4π x(t)/λ + τ_0) + B + e_noise   (1)

with f_AOM the frequency of the AOM, λ the wavelength of the laser, τ_0 a constant phase offset due to the time of flight of the laser light, B a constant bias intensity typical for interference, and e_noise a noise component, e.g. from stray ambient light. The voltage output u_pd_raw of the photodetector follows the intensity with a certain additional delay and additional noise from embedded amplifiers, and feeds into the demodulation stage of the LDV controller. The internal processing of a commercial LDV is generally unknown to the user but probably follows an arctangent demodulation scheme as described in [3]. The demodulated analogue (voltage) output of the instrument is then characterized by its complex transfer function S in the frequency domain:

S_ux(f) = U_x(f) / X(f)   (2)

where the signal delay is characterized by the phase of this complex quantity.

2 Existing methods
Existing calibration methods and set-ups either follow the classical approach of measuring a mechanical vibration [4, 5] or use an electrical excitation of the LDV controller and thus simulate the optical laser head. For the former, a typical primary accelerometer calibration system is utilized, where a reference LDV measures the motion of a shaker’s armature to provide a reference acceleration signal. The measurement beam of the device under test (DUT) is co-aligned to the reference beam using a beam splitter and points at the same spot on the armature surface. In the optimized set-up [6] only a single laser beam needs to be adjusted; hence, inaccuracies of the co-alignment are avoided.
This method provides a significantly smaller uncertainty for LDV calibration than for classical accelerometer calibration. However, it still suffers from the mechanical limitations of the utilized shaker system in terms of a limited frequency and amplitude range and non-ideal motion. The second method substitutes the laser head of the DUT with an electrical signal generator. The signal generator provides the frequency-modulated input to the LDV controller of the DUT, simulating the photodiode output of the laser head.

Figure 2: Schematic of an LDV calibration set-up with amplitude-modulated laser diode stimulation (laser diode LD driven by a signal generator, beam splitter BS, polarizer POL, λ/4 plate, reference photodiode PD with output u_pd_ref(t), two ADC channels, and the LDV under test)

By providing a corresponding FM signal, a very wide variety of virtual motion patterns can be simulated under almost ideal conditions. By simultaneously sampling the generator signal and the DUT output, a precise determination of the complex transfer function of the DUT is possible. This method, however, requires good knowledge of the working principle of the hardware in order to provide adequate signal levels and carrier frequencies to the controller. In addition, it is not able to account for any signal preconditioning performed within the original laser head. On the other hand, it suffers from far fewer limitations by avoiding any mechanical components [7].

3 Stimulation by amplitude-modulated laser source
The basic idea of the new set-up evolves from Eq. (1) and Figure 1 and is shown in Figure 2. The photodiode, being the central sensing part of the LDV, cannot distinguish the cause of an intensity variation. Whether it is the result of optical interference or simply an intensity variation caused by a modulated external light source makes no difference.
If an appropriate light source with a suitable amplitude modulation following Eq. (1) is targeted at its sensing element, the LDV response will be identical to that for the respective interference caused by real motion. This approach combines the benefits of the existing set-ups described above while avoiding their disadvantages. While omitting any mechanically moving components that may limit the scope or accuracy, it still includes the potential preprocessing of the laser head in the calibration. All excitations requested for the calibration of LDVs can be provided by electro-optical means. In the new calibration set-up at PTB (cf. Figure 2), the light source is a common 10 mW laser diode (LD) with a wavelength of 635 nm, well matching the LDV’s He-Ne wavelength. The bias current is adjusted such that the mean beam power entering the LDV is less than 1 mW, approximately matching the typical output power of the LDV’s laser. A non-polarising beam splitter (BS) separates about 50 % of the light, which is directed to a reference photodiode (FEMTO HCA-S-400M) with a bandwidth of 400 MHz and a known time delay [8]. An RF generator with phase modulation capability provides the modulation current for the laser diode. The modulation depth is adjusted in a range of 30 % to 50 % and monitored by the reference photodiode. A polarization filter (POL) and a λ/4 plate between LD and LDV ensure that circularly polarized laser light enters the interferometer. The majority of the light emitted from the LD is already linearly polarized; however, for the first set-up, the orientation of the polarity was unknown, hence a polarization filter eases the initial orientation of the λ/4 plate. Not shown in Figure 2 are the collimator lens of the laser diode and two aperture plates used to ease the alignment of the stimulating laser diode unit with the LDV’s beam line.
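The stimulation principle lends itself to a small numerical sketch: an intensity of the form of Eq. (1) is generated for a virtual sinusoidal motion, and the resulting drive range and peak frequency deviation are checked. All numeric values below (vibration frequency and amplitude, modulation depth, mean intensity) are illustrative assumptions, not the parameters of the actual PTB set-up:

```python
import math

# Sketch of an amplitude-modulated stimulus following Eq. (1) for a
# virtual sinusoidal motion; every numeric value here is an assumed
# example, not a parameter of the actual set-up.
f_aom = 40e6      # carrier frequency in Hz
lam   = 633e-9    # He-Ne wavelength in m
f_vib = 100e3     # virtual vibration frequency in Hz
x_hat = 100e-9    # virtual displacement amplitude in m
m     = 0.4       # modulation depth (inside the 30 % to 50 % range)
i0    = 1.0       # mean intensity entering the LDV (arbitrary units)
fs    = 200e6     # sample rate in Hz
n     = 2000

def intensity(k):
    t = k/fs
    x = x_hat*math.sin(2*math.pi*f_vib*t)       # virtual motion x(t)
    return i0*(1.0 + m*math.sin(2*math.pi*f_aom*t + 4*math.pi*x/lam))

i_t = [intensity(k) for k in range(n)]

# Peak FM deviation of the simulated interference phase: a velocity
# amplitude v_hat shifts the instantaneous frequency by up to 2*v_hat/lam
v_hat = 2*math.pi*f_vib*x_hat
delta_f = 2*v_hat/lam
```

The drive stays within i0 · (1 ± m), so the bias point can be chosen to keep the mean power entering the LDV below 1 mW; for these example numbers the peak frequency deviation is about 199 kHz.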
4 Signal generation and processing
The data acquisition system is based on a PXI system controlled by a LabVIEW program and provides two synchronized ADC channels for the acquisition of the reference signal and the DUT output. While the final stage of the set-up is supposed to include an arbitrary waveform generator and the ability to phase-lock the generator or the whole PXI system to the LDV carrier frequency, the preliminary results given here were acquired by utilizing an RF generator (type Agilent E4400B). The system is similar to the acquisition systems already used and validated for the primary acceleration calibration facilities at PTB. The synchronous data acquisition was performed at 200 MS/s to account for the high carrier frequency. The demodulation of the reference followed the validated arctangent demodulation of PTB’s national primary calibration standards. Finally, the evaluation of the simulated motion used the usual three-parameter sine approximation.

4.1 Determination of time delays
Figure 3 shows the time delays involved in the calibration set-up depicted in Figure 2. All components except H_LDV are assumed to have a true time delay, i.e. the time delay is constant within the frequency ranges observed.

Figure 3: Time delays of the signal chains (LD, PD, LDV head and controller, ADC1 and ADC2, with the delays τ_r1, τ_m1, τ_m2.1, τ_m2.2, τ_m2.3, τ_m3, τ_r2, τ_r3, Δτ_ADC and the transfer functions H_contr and H_LDV)

The time delay

τ_cor = φ_cor/ω , with ω = 2π f ,   (3)

used to correct the measured transfer function H_meas to get H_LDV,

H_LDV(ω) = |H_meas(ω)| · e^(j(φ_meas − φ_cor)) ,   (4)

is calculated as

τ_cor = τ_r1 − τ_m1 + τ_r2 + τ_r3 − τ_m3 + Δτ_ADC   (5)

with the values

τ_r1 − τ_m1 = −545(5) mm / c_air = −1.82(3) ns ,   (6)

where c_air = 2.9971 × 10^8 m/s and −545(5) mm is the difference of the laser beam lengths, measured from the reference surface of the LDV head as stated in the manual. Its uncertainty covers the unknown delays of the λ/4 plate and the polarization filter.
The photodiode delay τ_r2 was measured in [8] as

τ_r2 = 3.10(8) ns .   (7)

The time delay difference between the two simultaneously sampled channels, each with an RG58 cable of 2 m length, representing the term τ_r3 − τ_m3 + Δτ_ADC, was measured by feeding both cable ends with a common signal from the generator via a power splitter (Mini-Circuits ZFSC-2-4). By doing multiple measurements with swapped cables at the power splitter and ADC inputs we found

τ_r3 − τ_m3 + Δτ_ADC = 0.15(5) ns ,   (8)

including variances due to remounting. Putting the results into (5) gives

τ_cor = 1.43(10) ns .   (9)

The measurement of a time delay was performed in a two-step process. In the first step, a signal with a carrier frequency of 40 MHz and an FM sine modulation of 500 Hz with a modulation depth of 3.15 MHz was applied, and a coarse time delay was determined from the maxima of a computed cross-correlation, see Figure 4. In the final step the modulation was turned off and the phase difference of the 40 MHz carrier signal was measured by applying a three-parameter sine approximation. This process was chosen in order to be able to measure time delays of heterodyne signal outputs of LDV controllers that are greater than one period of the carrier signal (25 ns in this case) due to internal signal processing such as amplitude stabilisation or filtering.

Figure 4: Cross-correlation of the two ADC channels at a sample rate of 200 MHz, both fed with a 40 MHz FM signal modulated with a 500 Hz sine of 3.15 MHz modulation depth, shown on different time scales, with one of the two connection cables being about 8 m longer, resulting in a delay of 41.90(5) ns.
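The fine step of this two-step process can be sketched as follows: a three-parameter sine approximation (known frequency, linear least squares via the 3 × 3 normal equations) is fitted to two simulated 40 MHz channels sampled at 200 MS/s, and the difference of the fitted phases yields the inter-channel delay. The signal model and the 1.43 ns delay are assumed example values, not measured data:

```python
import math

F0, FS, N = 40e6, 200e6, 2000     # carrier frequency, sample rate, samples
DELAY = 1.43e-9                   # simulated inter-channel delay in s

ch1 = [math.sin(2*math.pi*F0*(k/FS)) for k in range(N)]
ch2 = [math.sin(2*math.pi*F0*(k/FS - DELAY)) for k in range(N)]

def sine_phase(y):
    # Three-parameter sine approximation: least-squares fit of
    # y[k] = a*cos(w k) + b*sin(w k) + c with known w, solving the
    # 3x3 normal equations by Gauss-Jordan elimination.
    w = 2*math.pi*F0/FS
    m = [[0.0]*3 for _ in range(3)]
    v = [0.0]*3
    for k, yk in enumerate(y):
        row = (math.cos(w*k), math.sin(w*k), 1.0)
        for i in range(3):
            v[i] += row[i]*yk
            for j in range(3):
                m[i][j] += row[i]*row[j]
    for i in range(3):              # Gauss-Jordan (well-conditioned here)
        piv = m[i][i]
        m[i] = [e/piv for e in m[i]]
        v[i] /= piv
        for r in range(3):
            if r != i:
                f = m[r][i]
                m[r] = [m[r][j] - f*m[i][j] for j in range(3)]
                v[r] -= f*v[i]
    a, b, _ = v
    return math.atan2(a, b)         # phase of sin(w k + phi)

tau = (sine_phase(ch1) - sine_phase(ch2))/(2*math.pi*F0)
```

Here `tau` recovers the simulated 1.43 ns delay. A delay larger than one carrier period (25 ns) would wrap the fitted phase difference, which is exactly why the coarse cross-correlation step is needed first.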
5 Results
First tests with an LDV controller Polytec OFV5000-KU and a laser head OFV 353, connected via a 5 m cable, were performed, and Figure 5 shows a measured frequency response obtained with the new set-up. The LDV is based on a carrier frequency f_AOM of 40 MHz. The LDV controller settings were:
• velocity decoder: VD-01
• range: 1 m/s/V
• max. frequency: 50 kHz
• tracking filter: off
• low pass filter: 100 kHz
• high pass filter: off
The frequency range in terms of the simulated vibration was 100 Hz to 30 kHz with an amplitude of 1 m/s, leading to a modulation depth of 3.159 MHz. The plots show the relative deviation in magnitude and the absolute phase of the analogue velocity output in relation to the demodulated intensity stimulus on a linear frequency scale. The nearly linear phase corresponds to a delay of about 7.52 µs.

Figure 5: Transfer function of an LDV analogue velocity output at 1 V/(m/s) (relative deviation in magnitude and phase in degrees versus frequency, 0 Hz to 30 kHz)

6 Outlook
Instead of a photodiode with known delay, a known laser source would obviate the beam splitter and photodiode, simplifying the set-up. It is a classical chicken-or-egg dilemma; in our case we had the known PD first, to relate an optical signal to an electrical signal. The measurement uncertainty budget is still under investigation. The absence of mechanical excitation is expected to significantly reduce the uncertainties, leaving the absolute AC voltage measurement as the main contributor to the magnitude uncertainty, while the phase uncertainties are expected to be better than 0.01° for frequencies up to 100 kHz.

Acknowledgment
The authors would like to thank Dr. Siegmund of Polytec for some inspiring walks and talks.
7 References
[1] ISO 16063-11:1999, Methods for the calibration of vibration and shock transducers — Part 11: Primary vibration calibration by laser interferometry, ISO, Geneva, Switzerland, 1999.
[2] ISO 16063-13:2001, Methods for the calibration of vibration and shock transducers — Part 13: Primary shock calibration using laser interferometry.
[3] ISO 16063-41:2011, Methods for the calibration of vibration and shock transducers — Part 41: Calibration of laser vibrometers.
[4] U. Buehn et al., “Calibration of laser vibrometer standards according to ISO 16063-41”, XVIII IMEKO World Congress 2006, Rio de Janeiro, Brazil, September 2006, https://www.imeko.org/publications/wc-2006/pwc-2006-tc22-007u.pdf
[5] Th. Bruns, F. Blume, A. Täubner, “Laser vibrometer calibration at high frequencies using conventional calibration equipment”, XIX IMEKO World Congress, September 6-11, 2009, Lisbon, Portugal, http://www.imeko2009.it.pt/papers/fp_495.pdf
[6] F. Blume, A. Täubner, U. Göbel, Th. Bruns, “Primary phase calibration of laser-vibrometers with a single laser source”, Metrologia, 2009, vol. 46, n. 5, https://dx.doi.org/10.1088/0026-1394/46/5/013
[7] M. Winter, H. Füser, M. Bieler, G. Siegmund, C. Rembe, “The problem of calibrating laser-Doppler vibrometers at high frequencies”, AIP Conference Proceedings 1457, 165 (2012), https://doi.org/10.1063/1.4730555
[8] Th. Bruns, F. Blume, K. Baaske, M. Bieler, H. Volkers, “Optoelectronic phase delay measurement for a modified Michelson interferometer”, Measurement, 2013, vol. 46, n.
5, https://doi.org/10.1016/j.measurement.2012.11.044

An informed Type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1, ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-5

An informed Type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1
Carlo Carobbi 1
1 Department of Information Engineering, Università degli Studi di Firenze, Via Santa Marta 3, 50139 Firenze, Italy

Section: Research paper
Keywords: measurement uncertainty; Type A evaluation; pooled variance; Bayesian inference; informative prior
Citation: Carlo Carobbi, An informed Type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1, ACTA IMEKO, vol. 11, no. 2, article 29, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-29
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received October 1, 2021; in final form February 23, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Carlo Carobbi, e-mail: carlo.carobbi@unifi.it
1. Introduction
The quantification of the Type A uncertainty contribution in the case of a small sample (n = 1, 2, 3) is a subject of research and passionate debate in Working Group 1 of the Joint Committee for Guides in Metrology (JCGM WG1), the standards working group involved in the maintenance and development of the Guide to the Expression of Uncertainty in Measurement (GUM, [1]) and its supplements. The topic is so keenly felt that, at the end of 2019, the “JCGM WG1 Workshop on Type A evaluation of measurement uncertainty for a small set of observations” was held at the Bureau International des Poids et Mesures (BIPM, Sèvres, Paris). The problem arose following the negative reaction to the Committee Draft (CD) of the revision of the GUM, circulated at the end of 2014 [2]. One of the most criticized issues of the draft of the “new GUM” is the Type A evaluation of uncertainty based on the use of a Student’s t probability density function having n − 1 degrees of freedom, shifted by the mean ȳ of the n observations y_i, i = 1, 2, ..., n, and scaled by the standard deviation of the mean s/√n, where

ȳ = (1/n) ∑_{i=1..n} y_i   (1)

and

s² = (1/(n − 1)) ∑_{i=1..n} (y_i − ȳ)² .   (2)

By following this approach, the Type A evaluation of standard uncertainty is

u(ȳ) = √((n − 1)/(n − 3)) · s/√n ,   (3)

which is not valid for a sample having a size of less than n = 4. This solution originates from a Bayesian approach to inference, where improper priors (Jeffreys priors) are adopted for the mean μ and variance σ² parameters of the parent normal probability density function (PDF), i.e.

μ ~ const.   (4)

and

Abstract – An informed Type A evaluation of standard uncertainty is here derived based on Bayesian analysis.
The result is mathematically simple, easily interpretable, applicable both in the theoretical framework of the Guide to the Expression of Uncertainty in Measurement (propagation of standard uncertainties) and in that of Supplement 1 of the Guide (propagation of distributions), and valid for any size, greater than or equal to 1, of the sample of present observations. The evaluation consistently addresses prior information in the form of the sample variance of a series of recorded experimental observations and in the form of an educated guess based on the expert’s experience. It turns out that the distinction between Type A and Type B evaluation is, in this context, contrived.

σ² ~ p₀(σ²) ,   (5)

where p₀(σ²) represents the improper prior adopted for σ², namely

p₀(σ²) ∝ 1/σ² .   (6)

Note that the information conveyed by these priors is the one strictly relevant to the character of the two parameters: μ is the location parameter and σ² is the scale parameter. In contrast, practitioners in testing and calibration have much richer information about the variability of the measurement process than is represented by (6). The Bayesian approach is the one followed by Supplement 1 of the GUM (GUMS1, [3]), and the intent of the JCGM WG1 was precisely to align the GUM with the GUMS1 by attributing the same Student’s t probability density to a sample of repeated observations. The problem is that, by doing so, it is possible to propagate the distributions (as foreseen by the GUMS1) but it is not possible to propagate the standard uncertainties (as foreseen by the GUM) if the sample size is less than n = 4. This is generally not acceptable (e.g., in destructive testing), particularly if implemented as a standard (mandatory) method. The GUM and the GUMS1 approaches are therefore inconsistent.
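The criticized Type A evaluation of Eqs. (1)–(3) translates directly into code. A minimal sketch (the function name and the sample values below are arbitrary), which also makes the breakdown for n < 4 explicit:

```python
import math

def type_a(y):
    # Type A standard uncertainty per Eqs. (1)-(3): Student-t posterior
    # standard deviation sqrt((n-1)/(n-3)) * s / sqrt(n); needs n >= 4
    n = len(y)
    if n < 4:
        raise ValueError("Eq. (3) is undefined for n < 4")
    ybar = sum(y)/n                                        # Eq. (1)
    s = math.sqrt(sum((v - ybar)**2 for v in y)/(n - 1))   # Eq. (2)
    return math.sqrt((n - 1)/(n - 3))*s/math.sqrt(n)       # Eq. (3)
```

For example, type_a([10.0, 10.2, 9.8, 10.1, 9.9]) evaluates to 0.1, while a sample of three observations raises an error — the limitation that motivates this paper.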
They produce substantially different results when random variability is a significant contribution to measurement uncertainty and the number of measurements used for its estimate is low [4]. The JCGM WG1 has seemingly not yet identified a way out of the inconsistency between the GUM and the GUMS1. Both frequentists and Bayesians can agree on the fact that the estimate of the average value obtainable from such a small sample is not very reliable. In favour of the Bayesian approach to inference, one can observe that there is no other way to enrich the estimate than the use of prior information on the variability of the measurement process, which complements the meagre experimental observation. In this sense, a Bayesian approach is useful because, differently from the frequentist approach, it provides us with a method for combining prior information with experimental observation. From the applicative point of view these concepts are relevant to the evaluation of measurement repeatability. Measurement repeatability quantifies the variability of measurement results obtained under specified repeatability conditions. It is an essential contribution to measurement uncertainty in every field of experimental activity. In the context of testing and calibration, if a stable item is re-tested or re-calibrated, the new measurement results are expected to be compatible with the old ones. Two distinct operators should provide compatible measurement results when testing or calibrating the same item. Measurement repeatability is then a reference for the qualification of personnel. Monitoring measurement repeatability contributes to assuring the validity of test and calibration results. In an accreditation regime [5], measurement repeatability must be kept under statistical control.
Periodic assessments are carried out by the accreditation body, aimed at verifying, through an appropriate experimental check, the robustness of the estimate of measurement repeatability; see [6], equation (6), p. 5 (in Italian), and [7], clause 6.6.3. The GUM provides the Type A evaluation of standard uncertainty as the tool to quantify measurement repeatability. Type A evaluation is based on a frequentist approach, thus implying that information on the quality of the estimate of measurement uncertainty must be conveyed to the user. This is done in terms of effective degrees of freedom. The GUMS1 adopts a knowledge-based (in contrast to frequentist) approach to model measurement repeatability. The quality of the estimate of measurement uncertainty is accounted for by the available prior knowledge, which eventually determines the width of the coverage interval. The use of numerical methods for professional (accredited) evaluation of measurement uncertainty is expected to increase in the future. Indeed, the GUMS1 numerical method, which is based on the propagation of probability distributions, accounts for possible non-linearity of the measurement model, is simple and less prone to mistakes (partial derivatives are not required), and provides all the available information about the measurand in terms of its probability distribution. Further, the use of numerical methods is practically unavoidable when the measurement model is complex and/or the measurand is an ensemble of scalar quantities (a vector). At the other extreme, the analytical method (based on the law of propagation of uncertainty) is consolidated and the one predominantly adopted nowadays. A further point of strength of the analytical method is its great pedagogical value. Achieving consistency between the analytical and numerical approaches to measurement uncertainty quantification is therefore desirable, since both have arguments of strength and are expected to coexist in the future.
What is proposed here is a knowledge-based approach to the Type A evaluation of measurement uncertainty and, specifically, of measurement repeatability. An estimate of the repeatability of a measurement system may be available that is representative of its performance in testing. This knowledge may be derived from:
• systematic recording of periodic verifications of the measurement system
• analysis and quantification of the individual sources of variability in the measurement chain
• normative references (for standard measurement systems used in testing)
• information from manufacturers of measuring instruments
• experience with the specific measurement chain or similar ones.
As in the GUMS1, use is made here of Bayesian inference, since it provides a straightforward method to incorporate prior knowledge. Differently from the GUMS1 Bayesian approach, here an informative prior PDF is assigned to σ². To obtain analytical results, useful in the framework of the law of propagation of uncertainty, a normal probability model is assumed with a non-informative prior PDF for the mean and a conjugate prior PDF for the variance. In Section 2 the theoretical approach is described, and in Subsection 2.1 it is compared with another one [8], previously presented in the scientific literature and proposed by a member of the JCGM WG1. In Section 3 the theoretical results are applied to a practical case, based on the experience of the author as an assessor of accredited testing laboratories. Conclusions follow in Section 4. Finally, an appendix is devoted to the mathematical derivations supporting the results presented in Section 2.

2. Type A evaluation when prior information is available
By prior information we mean here information on the variability of the measurement process obtained before a certain test (or calibration) is carried out.
Let us consider the case in which the a priori information consists of a relatively long series of experimental observations. The important hypothesis that must be verified is that the previous experimental observations were obtained under repeatability conditions that are representative of those that occur during the test, both as regards the measurement system and the measurand. If this is not verified, the a priori information is not valid to represent the variability observed during the test. This hypothesis is necessarily realized by following an experimental procedure based on physical modelling aimed at identifying the causes of the variability and at limiting its effects. It is the experimenter’s task to ensure that the hypothesis is verified in practice. In mathematical terms, the Bayesian inference is made on the mean value μ and the variance σ² of a Gaussian PDF, assuming an improper uniform PDF for μ and a scaled inverse χ² PDF ([9], Table A.1, p. 576) for σ². The choice of the improper uniform PDF for μ is justified by the desire to avoid introducing an a priori bias on the best estimate of the measurand value, which in this way depends solely on the experimental observation obtained during the test. The choice of the scaled inverse χ² PDF for σ² is justified by the desire to incorporate prior information while retaining the well-known Student’s t as the posterior PDF of μ ([9], Section 3.3, p. 67). The parameters of the scaled inverse χ² PDF are the prior variance σ₀² and the associated degrees of freedom ν₀. Another advantage stemming from the use of the scaled inverse χ² PDF is the immediate physical interpretation of the degrees of freedom ν₀ as the number of measurements that were necessary to derive the prior estimate σ₀², minus 1. At the same time, ν₀ can be linked to the degree of credibility attributed to σ₀² as an estimate of σ², as is demonstrated here through the use of (11).
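The scaled inverse χ² prior just described can be written down explicitly. The sketch below evaluates its density for assumed example parameters and checks, by crude numerical integration, that it normalizes to 1 and has the textbook mean ν₀σ₀²/(ν₀ − 2); the parameter values are illustrative assumptions:

```python
import math

def scaled_inv_chi2_pdf(x, nu0, s0sq):
    # Density of the scaled inverse chi-squared distribution with
    # nu0 degrees of freedom and scale s0sq (the prior variance):
    # p(x) = (a*s0sq)^a / Gamma(a) * x^-(a+1) * exp(-a*s0sq/x), a = nu0/2
    a = nu0/2.0
    return (a*s0sq)**a/math.gamma(a)*x**(-(a + 1.0))*math.exp(-a*s0sq/x)

nu0, s0sq = 9, 0.64        # assumed example: nu0 = 9, sigma0 = 0.8 dB
dx = 0.001
xs = [dx*k for k in range(1, 40000)]   # crude grid over (0, 40)
total = sum(scaled_inv_chi2_pdf(x, nu0, s0sq) for x in xs)*dx
mean  = sum(x*scaled_inv_chi2_pdf(x, nu0, s0sq) for x in xs)*dx
```

Here `total` comes out close to 1 and `mean` close to 9 · 0.64/7 ≈ 0.823, confirming the normalization and the mean of the assumed prior.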
With this choice of the prior PDFs (see the appendix for the derivation) we obtain, for the posterior marginal PDF of μ, a Student’s t PDF with degrees of freedom

ν_n = ν₀ + (n − 1) ,   (7)

shifted in

μ_n = ȳ   (8)

and with scaling factor σ_n²/n, where

σ_n² = (ν₀ σ₀² + (n − 1) s²) / (ν₀ + n + 1) .   (9)

According to this approach, the Type A evaluation of standard uncertainty will be

u(ȳ) = √(ν_n/(ν_n − 2)) · σ_n/√n .   (10)

We observe from (7) that the number of degrees of freedom ν₀ of the prior evaluation of variability, σ₀, adds up to the number of degrees of freedom n − 1 with which the variability s is evaluated during testing. The result is valid if the assumption that repeatability conditions are kept the same in both the prior investigation and the testing is verified. The estimate (8) is determined by the repeated observations obtained during the testing phase, because a constant and improper prior PDF for μ has been chosen. The result (9) is particularly simple and convincing: the variance σ_n², which quantifies the variability of the measurement process, is the result of the pooling of the prior variance σ₀² and the sample variance s² observed in testing, through a weighted average, the weights being the corresponding degrees of freedom. The Type A evaluation of standard uncertainty passes from (3), in the absence of prior information, to (10), which is valid also for n = 1 provided that ν₀ ≥ 3. The following consideration is also of interest. The prior information about the variability of the measurement process may be derived, for example, from the assessment of an expert. A simple form of this prior information is a best estimate σ₀ and a quantile σ_α that the expert judges to be exceeded with a small probability α. A link can be established among σ_α, α and ν₀ for a given σ₀. This can be done through the cumulative distribution function of the scaled inverse χ² prior of σ², evaluated at σ_α², namely

Γ(ν₀/2, ν₀ σ₀²/(2 σ_α²)) / Γ(ν₀/2) = 1 − α ,   (11)

where Γ(a, z) = ∫_z^∞ t^(a−1) exp(−t) dt is the upper incomplete gamma function with parameters a and z, and Γ(a) is the gamma function.
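Eqs. (7)–(11) can be sketched numerically. The first helper pools prior and sample variability per Eqs. (7), (9) and (10) (the pooling denominator ν₀ + n + 1 is the one consistent with the worked examples in Section 3); the second solves Eq. (11) for ν₀ by bisection, using a power-series evaluation of the regularized upper incomplete gamma function, adequate here because its argument satisfies z < a + 1 in every evaluated case. Function names and the bisection bracket are assumptions of this sketch:

```python
import math

def informed_type_a(s0, nu0, s, n):
    # Informed Type A evaluation per Eqs. (7), (9), (10)
    nu_n = nu0 + (n - 1)                                  # Eq. (7)
    var_n = (nu0*s0**2 + (n - 1)*s**2)/(nu0 + n + 1)      # Eq. (9)
    return math.sqrt(nu_n/(nu_n - 2))*math.sqrt(var_n/n)  # Eq. (10)

def reg_upper_gamma(a, z):
    # Regularized upper incomplete gamma Q(a, z) = Gamma(a, z)/Gamma(a),
    # via 1 - P(a, z) with the standard power series for P (valid z < a+1)
    if z <= 0.0:
        return 1.0
    ap, term, total = a, 1.0/a, 1.0/a
    for _ in range(500):
        ap += 1.0
        term *= z/ap
        total += term
        if abs(term) < abs(total)*1e-14:
            break
    return 1.0 - total*math.exp(-z + a*math.log(z) - math.lgamma(a))

def nu0_from_quantile(ratio, alpha, lo=0.5, hi=60.0):
    # Solve Eq. (11), Q(nu0/2, nu0/(2*ratio**2)) = 1 - alpha, for nu0
    # by bisection; ratio = sigma_alpha/sigma0 (> 1), Q increases with nu0
    f = lambda nu: reg_upper_gamma(nu/2.0, nu/(2.0*ratio**2)) - (1.0 - alpha)
    for _ in range(80):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5*(lo + hi)
```

With the numbers of the worked examples in Section 3, informed_type_a(0.8, 9, 1.5/math.sqrt(2), 2) reproduces the 0.60 dB result, and nu0_from_quantile(2.5, 0.05) lands near the ν₀ ≈ 4 read off Figure 1.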
If σ₀ is known, then (11), for any given α, implicitly provides a value for ν₀. This relationship can be represented through a plot such as the one in Figure 1. Note from Figure 1 that the larger σ_α/σ₀ is, the smaller ν₀ is for a given α; the smaller α is for a given σ_α/σ₀, the larger ν₀ is. The idea of pooling prior variability is not new in the context of the GUM. It is indeed briefly mentioned in clause 6.4.9.6 of the GUMS1 and in 9.2.6 of the CD of the GUM revision [2].

2.1. Comparison with the Type A evaluation obtained truncating the improper prior for σ²
In a recent paper [8] Cox and Shirono propose a solution to the problem of the Type A evaluation in the case of a small sample, where σ_t² is an upper bound (truncation) value for the improper prior of σ², i.e.

p₀(σ²) ∝ 1/σ² for 0 < σ² ≤ σ_t² , and p₀(σ²) = 0 for σ² > σ_t² .   (12)
Another limitation of the approach in [8] is that necessarily n ≥ 2 (see (13): κ = 0 if n = 1), while, according to the solution proposed here, the case n = 1 is also tractable.

3. Application in the context of accreditation to ISO/IEC 17025

National accreditation bodies require evaluation of the measurement repeatability of the test methods in the scope of accreditation. Such evaluation is carried out by testing laboratories through periodic recording of measurement results obtained under conditions representative of actual testing. An estimate σ0 with corresponding degrees of freedom ν0 is thus obtained. How can this prior knowledge be incorporated into the test outcome? We here provide a numerical example in the context of electromagnetic compatibility (EMC) testing. Suppose that the estimate of the non-repeatability of the radiated-emission measurement chain is σ0 = 0.8 dB with ν0 = 9. Testing two times (n = 2), an absolute deviation between the measured values of 1.5 dB is obtained; then s = 1.5/√2 dB = 1.06 dB. By pooling the standard deviations σ0 and s we have ν_n = ν0 + (n − 1) = 9 + 1 = 10,

$\sigma_n = \sqrt{\dfrac{\nu_0 \sigma_0^2 + (n-1) s^2}{\nu_0 + n + 1}} = \sqrt{\dfrac{9 \cdot 0.8^2 + 1 \cdot 1.06^2}{12}}\ \text{dB} = 0.76\ \text{dB}$ ,

and

$u = \sqrt{\dfrac{\nu_n}{\nu_n - 2}}\, \dfrac{\sigma_n}{\sqrt{n}} = \sqrt{\dfrac{10}{8}} \cdot \dfrac{0.76}{\sqrt{2}}\ \text{dB} = 0.60\ \text{dB}$ .

As a second example, consider the case where an expert of the specific test method provides a guess σ0 = 1 dB, based on experience with similar test systems. The expert is also confident that, with a low probability α = 5 %, σ exceeds σα = 2.5 dB. This state of knowledge corresponds approximately (see Figure 1) to ν0 = 4, from which ν_n = 5 (instead of 10, as in the previous example), σn = 0.86 dB (instead of 0.76 dB) and u = 0.78 dB (instead of 0.60 dB).

4. Conclusions

Reliable statistical techniques to incorporate prior knowledge into the so-called "Type A" evaluation of standard uncertainty should be identified to make the evaluation more robust in the case of small samples.
The use of these statistical techniques should be promoted and confidently accepted in accredited testing, provided that competence requirements are fulfilled. GUMS1 already provides such a tool by pooling the prior variance and the sample variance. A Bayesian derivation of the GUMS1 pooled variance is illustrated here, along with a more flexible interpretation aimed at exploiting expert knowledge as a useful source of reliable information.

Figure 1: Plots of the degrees of freedom ν0 as a function of the ratio σα/σ0, obtained by solving the implicit equation (11) for three values of the probability α (see the legend).

Figure 2: Plots of κ as a function of σt/s for selected values of n (see the legend). Note that κs ≤ σt for any value of σt/s and for any value of n.

According to the results described in this work, there is no need to distinguish between Type A and Type B evaluations, since a homogeneous mathematical treatment is used to address the prior information about variability (whether it originates from experimental evidence or from an expert's experience) and its pooling with the present observations. The main ideas and results of this work were presented by the author during the 2019 JCGM-WG1 workshop mentioned in the Introduction. I would like to acknowledge that, during the same workshop, Anthony O'Hagan (Emeritus Professor, University of Sheffield) also proposed the use of the scaled inverse χ² PDF to solve the problem of the Type A evaluation in the case of a small sample size. His formulation of the solution (still unpublished) was different from mine, but it is remarkable that two researchers with completely different backgrounds arrived at similar proposals.
References

[1] GUM: BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, 2008, Guide to the Expression of Uncertainty in Measurement, JCGM 100:2008, GUM 1995 with minor corrections.
[2] JCGM 100 201X CD (Committee Draft), Evaluation of Measurement Data – Guide to Uncertainty in Measurement, circulated in December 2014.
[3] GUMS1: BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, 2008, Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement' – Propagation of Distributions Using a Monte Carlo Method, JCGM 101:2008.
[4] W. Bich, M. Cox, R. Dybkaer, C. Elster, Revision of the 'Guide to the Expression of Uncertainty in Measurement', Metrologia 49 (2012), pp. 702-705. DOI: 10.1088/0026-1394/49/6/702
[5] ISO/IEC 17025, Conformity Assessment – General Requirements for the Competence of Testing and Calibration Laboratories, Int. Org. Standardization, Geneva, Switzerland (2017).
[6] SINAL DT-0002/6, Guida al calcolo della ripetibilità di un metodo di prova ed alla sua verifica nel tempo, rev. 0, December 2007. [In Italian]
[7] B. Magnusson, U. Örnemark (eds.), Eurachem Guide: The Fitness for Purpose of Analytical Methods – A Laboratory Guide to Method Validation and Related Topics, 2nd ed., 2014. ISBN 978-91-87461-59-0. Online [accessed 22 April 2022] https://www.eurachem.org/index.php/publications/guides/mv
[8] M. Cox, T. Shirono, Informative Bayesian Type A uncertainty evaluation, especially applicable to a small number of observations, Metrologia 54 (2017), pp. 642-652. DOI: 10.1088/1681-7575/aa787f
[9] A. Gelman, A. Vehtari, J. B. Carlin, H. Stern, D. B. Dunson, D. B.
Rubin, Bayesian Data Analysis, Third Edition, CRC Press, 2014. ISBN 9781439840955.

Appendix

We here derive the marginal posterior PDF of μ given the prior information, in terms of the prior PDFs of μ and σ² and the set of observations y_i, i = 1, 2, ..., n. A uniform prior PDF is assigned to μ,

$p(\mu) \propto \text{const}$ , (14)

while the prior of σ² is a scaled inverse χ² PDF with prior variance σ0² and associated degrees of freedom ν0,

$\sigma^2 \sim \text{Inv-}\chi^2(\nu_0, \sigma_0^2)$ . (15)

μ and σ² are a priori independent; the joint prior PDF of μ and σ² is therefore, from (14) and (15),

$p(\mu, \sigma^2) \propto (\sigma^2)^{-(\nu_0/2 + 1)} \exp\!\left(-\dfrac{\nu_0 \sigma_0^2}{2 \sigma^2}\right)$ . (16)

The likelihood of the observations is easily obtained as [9]

$L(\boldsymbol{y}; \mu, \sigma^2) = \prod_{i=1}^{n} (2\pi\sigma^2)^{-1/2} \exp\!\left(-\dfrac{(y_i - \mu)^2}{2\sigma^2}\right) = (2\pi\sigma^2)^{-n/2} \exp\!\left(-\dfrac{(n-1) s^2 + n(\bar{y} - \mu)^2}{2\sigma^2}\right)$ , (17)

where y is a vector representing the set of observations y_i, i = 1, 2, ..., n. By Bayes' theorem, the joint posterior PDF of μ and σ² is given by

$p(\mu, \sigma^2 \,|\, \boldsymbol{y}) \propto L(\boldsymbol{y}; \mu, \sigma^2)\, p(\mu, \sigma^2)$ . (18)

Substituting (16) and (17) into (18) and marginalising with respect to σ², it is readily obtained that

$p(\mu \,|\, \boldsymbol{y}) \propto \left[1 + \dfrac{n(\mu - \bar{y})^2}{\nu_0 \sigma_0^2 + (n-1) s^2}\right]^{-(\nu_0 + n)/2}$ , (19)

where p(μ|y) represents the marginal posterior PDF of μ. It is evident from (19) that p(μ|y) is a Student's t PDF shifted in ȳ and scaled by σn²/n, where σn² is given by (9).

Evaluation and correction of systematic effects in a simultaneous 3-axis vibration calibration system
ACTA IMEKO, ISSN 2221-870X, December 2020, Volume 9, Number 5, 388-393
A. Prato1, F. Mazzoleni1, A. Schiavi1
1 INRIM – National Institute of Metrological Research, Torino, Italy, a.schiavi@inrim.it

Abstract: This paper presents a calibration method, recently realised at INRIM, suitable for the calibration of 3-axis accelerometers in the frequency domain. The procedure allows the simultaneous evaluation of the main and transverse sensitivities on three axes by means of a single-axis vibration excitation of inclined planes. Nevertheless, the excitation system is subject to spurious motions, mainly due to the vibrational modes of the inclined planes and to the horizontal motions of the shaker. In order to provide the proper sensitivities of the 3-axis sensors, the evaluation of the systematic effects is carried out experimentally and the related correction is proposed.

Keywords: calibration, 3-axis accelerometer, systematic effects

1. Introduction

3-axis accelerometers, especially low-cost unconventionally shaped transducers such as MEMS sensors, are widely used in a broad range of advanced industrial, environmental, energy and medical applications, and in particular within extensive sensor and multi-sensor networks [e.g., 1-10]. For example, in the context of Industry 4.0, a huge number of sensors is needed for an effective implementation of smart factories, machine learning and intelligent manufacturing systems, as well as for traditional applications such as early failure detection and predictive maintenance. Low-power devices and battery-operated systems are practical and useful in IoT applications, such as smart cities, accurate navigation/positioning systems and environmental monitoring and surveying; moreover, accurate measurements are of paramount importance in medical applications, in remote surgery and in remote diagnosis.
The availability of many accurate, low-power and low-cost sensors presents undoubted advantages, in terms of cost reduction and energy saving, in control processes, monitoring and measurement, while providing flexible, enhanced data collection, automation and operation. By way of example, the calibration of digital MEMS sensors, with the associated uncertainty budget, makes it possible to ensure the traceability and measurement accuracy of nodes in sensor networks, as well as in other innovative implementations. Moreover, the continuing improvement of the technical performance and reliability of MEMS sensors are emerging quality attributes of interest for manufacturers, customers and end-users. However, in the particular case of digital MEMS accelerometers, the sensitivity is generally provided by the manufacturer without traceable calibration methods and is obtained in static conditions, whereas the dynamic response, as a function of frequency, is often barely known or completely disregarded. Given this situation, a simultaneous 3-axis vibration calibration system with single-axis excitation, exploitable by manufacturers, is proposed. This paper deals with the evaluation of the systematic effects of this system.

2. Description of the work

Traceable calibration methods for digital sensors and smart sensors, in metrological terms [11-13], including the sensitivity parameter and an appropriate uncertainty evaluation, are necessary in order to consider low-cost and low-power accelerometers as actual measurement devices in the frequency domain [14]. The calibration system, developed at INRIM, allows the simultaneous evaluation of the main and transverse sensitivities, in the frequency domain, of 3-axis accelerometers by means of a single-axis vibration excitation, using inclined planes rigidly fixed to the vibrating table [14, 15]. The calibration procedure is based on comparison with a reference transducer (in analogy to ISO 16063-21 [16]).
A preliminary version of the system was previously investigated for the characterisation of analog MEMS accelerometer performance in operative conditions [17-23]. Measurements are performed at a nearly constant amplitude of 10 m·s⁻², from 5 Hz up to 3 kHz. The mechanical calibration system, composed of the shaker and the inclined planes (with tilt angles of 15°, 35°, 55° and 75°), is characterised in order to take systematic effects into account. A similar procedure is also applied to the shaker at 0° and 90°. The measurement of the systematic effects is carried out by means of a laser-Doppler vibrometer. The detailed uncertainty budget is evaluated according to the GUM [24].

3. Calibration set-up

The calibration set-up proposed here, consisting of a single-axis vibrating table on which aluminium inclined planes are screwed, allows the generation of a projection of the reference acceleration along three axes simultaneously. A single vertical sinusoidal acceleration at nearly constant amplitude acts as the reference acceleration a_ref along the vertical z′-axis of the system. In this way, accelerations of proportional amplitudes are simultaneously generated on the inclined surface plane along the three axes. In Figure 1, the geometrical principle of the proposed method is schematically depicted, with the inclined plane on which the 3-axis accelerometer is fixed during calibration.
Figure 1: Scheme of the inclined plane.

From simple trigonometric laws, the reference accelerations detected by the sensor under calibration, along its three sensitive axes, are expected to be:

a_x,theor = |a_ref sin(α) cos(ω)| (1)
a_y,theor = |a_ref sin(α) sin(ω)| (2)
a_z,theor = |a_ref cos(α)| (3)

where α is the tilt angle, ω is the angle of rotation, a_ref is the root-mean-square (RMS) reference acceleration along the vertical z′-axis of the system, and a_x,theor, a_y,theor, a_z,theor are the RMS reference accelerations spread along the x-, y- and z-axes of the MEMS accelerometer under calibration. In the experimental set-up, the inclined plane is screwed onto the vertical vibrating table (PCB precision air-bearing calibration shaker), and the 3-axis accelerometer is fixed to the inclined plane and located along the vertical axis of excitation. The experimental configuration is shown in Figure 2. The acceleration along the vertical z′-axis, a_ref, is measured by a single-axis reference transducer (PCB model 080A199/482A23), calibrated according to ISO 16063-11:1999 [25] against the INRIM primary standard and located within the stroke of the shaker; the signal is acquired by an acquisition board NI 4431 (sampling rate of 50 kHz) integrated in the PC and processed through LabVIEW software to provide the RMS reference value in m·s⁻². The digital MEMS output is acquired by an external microcontroller at a maximum sampling rate of 6.660 kHz and saved as binary files.

Figure 2: The calibration set-up: the MEMS fixed to the inclined plane on the vibrating table.

4. Evaluation of systematic effects and correction

As described in the previous section, the reference accelerations along the MEMS accelerometer sensitivity axes are given by equations (1)-(3), using trigonometric laws. However, in dynamic conditions, systematic effects caused by spurious oscillating components along the three axes of the reference system (x′, y′ and z′) at the MEMS position need to be taken into account.
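Equations (1)-(3) can be sketched directly in code. The function below is a minimal illustration (its name and signature are not from the paper); note that the three RMS projections always recombine to the reference amplitude, since sin²α(cos²ω + sin²ω) + cos²α = 1.

```python
import math

def theoretical_components(a_ref, alpha_deg, omega_deg):
    """Eqs. (1)-(3): RMS projections of the vertical reference acceleration a_ref
    onto the x-, y-, z-axes of a sensor on a plane tilted by alpha and rotated by omega."""
    alpha = math.radians(alpha_deg)
    omega = math.radians(omega_deg)
    ax = abs(a_ref * math.sin(alpha) * math.cos(omega))   # Eq. (1)
    ay = abs(a_ref * math.sin(alpha) * math.sin(omega))   # Eq. (2)
    az = abs(a_ref * math.cos(alpha))                     # Eq. (3)
    return ax, ay, az
```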
These spurious components affect the actual reference acceleration a_ref split along the three sensitivity axes of the MEMS. A schematic representation of the occurring phenomenon is shown in Figure 3.

Figure 3: Representation of the combination of the spurious oscillating components along the three axes of the MEMS during the calibration.

Such components are mainly due to the vibrational modes of the inclined aluminium planes and to the small but not negligible horizontal motions of the shaker. Each spurious component along the x′-, y′- and z′-axes of the reference system has to be decomposed along the x-, y- and z-axes of the 3-axis accelerometer and summed to the reference accelerations a_x,theor, a_y,theor, a_z,theor according to the laws of wave interference. The actual decomposition of the spurious components acting along the three axes of the reference system is schematically depicted in Figure 4.

Figure 4: Representation of the decomposition of a spurious oscillating component along the three axes of the reference system, at a given frequency.

By way of example, consider the general case of four overlapping waves, E₁ = E₁,₀ e^{i(2πft)}, E₂ = E₂,₀ e^{i(2πft+φ₂)}, E₃ = E₃,₀ e^{i(2πft+φ₃)} and E₄ = E₄,₀ e^{i(2πft+φ₄)}, oscillating at the same frequency f, with different amplitudes and phase differences with respect to the reference signal E₁, along a particular direction. Their interference can be expressed according to equation (4), where E₂, E₃ and E₄ are the amplitude- and phase-dependent spurious components along the x-, y- and z-axes of the MEMS.
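The interference of same-frequency waves described above is simply the modulus of a phasor sum, which can be sketched with complex numbers (the function name is illustrative):

```python
import cmath
import math

def interference_amplitude(e1_0, components):
    """Amplitude of the superposition of same-frequency waves, as in Eq. (4).
    e1_0       : amplitude of the reference wave E1 (zero phase by convention)
    components : list of (amplitude, phase_rad) pairs referred to E1
    """
    total = e1_0 + sum(a * cmath.exp(1j * phi) for a, phi in components)
    return abs(total)
```

In-phase components add arithmetically, opposite-phase components cancel, and quadrature components add in root-sum-of-squares fashion, which is exactly the behaviour encoded in equations (8)-(10) of the paper.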
$E_{tot} = \left| E_{1,0} + E_{2,0}\, \mathrm{e}^{i\varphi_2} + E_{3,0}\, \mathrm{e}^{i\varphi_3} + E_{4,0}\, \mathrm{e}^{i\varphi_4} \right|$ (4)

In this way, it is possible to correct the theoretical reference accelerations along the MEMS axes in equations (1)-(3) into a_x, a_y, a_z of equations (8)-(10), where a_x′,syst, a_y′,syst, a_z′,syst and φ_x′,syst, φ_y′,syst, φ_z′,syst are, respectively, the amplitudes and the phase differences, with respect to the reference signal a_ref, of the spurious components along the x′-, y′- and z′-axes, as they appear in equations (5)-(7). As will be shown in Section 5, the amplitudes of the spurious components vary as a function of frequency between 0.1 % and 10 % of the reference acceleration a_ref. The experimental evaluation of the systematic effects due to the spurious components is carried out by means of a laser-Doppler velocimeter (Polytec OFV 505). Amplitude and phase measurements along the x′-, y′- and z′-axes of the reference system are performed for each inclined plane and for all frequencies, at a reference vertical amplitude of 10 m·s⁻². The laser signal, during the measurement of the spurious component amplitudes, is acquired by an NI 4431 board (sampling rate of 50 kHz) integrated into the PC, while the phase differences between the reference acceleration and the spurious components are measured by means of a dynamic signal analyser (Keysight 35670A). Since the digital MEMS accelerometer is too small to be used as a reflective surface, the beam spot of the laser directly hits a small aluminium triangular-based parallelepiped located at the MEMS position and fixed to the different inclined planes, as shown in Figure 5. The volume of the triangular-based parallelepiped is around 0.5 cm³, i.e., 0.6 % of the total volume of the inclined plane, and hence negligible with respect to the total mass.

$a_x = \sqrt{ \left( a_{x,theor} + \left| a_{x',syst} \cos\alpha \cos\omega \right| \cos\varphi_{x',syst} + \left| a_{y',syst} \sin\omega \right| \cos\varphi_{y',syst} + \left| a_{z',syst} \sin\alpha \cos\omega \right| \cos\varphi_{z',syst} \right)^2 + \left( \left| a_{x',syst} \cos\alpha \cos\omega \right| \sin\varphi_{x',syst} + \left| a_{y',syst} \sin\omega \right| \sin\varphi_{y',syst} + \left| a_{z',syst} \sin\alpha \cos\omega \right| \sin\varphi_{z',syst} \right)^2 }$ (8)
The values of the measured acceleration amplitudes a_x′, a_y′ and a_z′ along the x′-, y′- and z′-axes are related to the actual systematic effects acting on the axes of the reference system, and are expressed, as a function of frequency and experimental phase shift, by the following equations:

$a_{x'} = \left| a_{x',theor} + a_{x',syst} \cdot \mathrm{e}^{i(2\pi f t + \varphi_{x',syst})} \right|$ (5)
$a_{y'} = \left| a_{y',theor} + a_{y',syst} \cdot \mathrm{e}^{i(2\pi f t + \varphi_{y',syst})} \right|$ (6)
$a_{z'} = \left| a_{z',theor} + a_{z',syst} \cdot \mathrm{e}^{i(2\pi f t + \varphi_{z',syst})} \right|$ (7)

where a_x′,theor and a_y′,theor are 0 (i.e., no acceleration is generated in the horizontal x′-y′ plane of the system) and the vertical component a_z′,theor ≈ a_ref. Figure 5 shows the experimental method used to quantify the spurious components from the accelerations a_x′, a_y′ and a_z′, and the related phase shifts φ_x′, φ_y′ and φ_z′, with respect to the reference acceleration a_ref acting along the vertical axis z′.

$a_y = \sqrt{ \left( a_{y,theor} + \left| a_{x',syst} \cos\alpha \sin\omega \right| \cos\varphi_{x',syst} + \left| a_{y',syst} \cos\omega \right| \cos\varphi_{y',syst} + \left| a_{z',syst} \sin\alpha \sin\omega \right| \cos\varphi_{z',syst} \right)^2 + \left( \left| a_{x',syst} \cos\alpha \sin\omega \right| \sin\varphi_{x',syst} + \left| a_{y',syst} \cos\omega \right| \sin\varphi_{y',syst} + \left| a_{z',syst} \sin\alpha \sin\omega \right| \sin\varphi_{z',syst} \right)^2 }$ (9)

$a_z = \sqrt{ \left( a_{z,theor} + \left| a_{x',syst} \sin\alpha \right| \cos\varphi_{x',syst} + \left| a_{z',syst} \cos\alpha \right| \cos\varphi_{z',syst} \right)^2 + \left( \left| a_{x',syst} \sin\alpha \right| \sin\varphi_{x',syst} + \left| a_{z',syst} \cos\alpha \right| \sin\varphi_{z',syst} \right)^2 }$ (10)

Figure 5: The laser beam hitting the aluminium triangular-based parallelepiped located at the MEMS position.

5. Experimental results

In the graphs of Figures 6-9, the amplitudes of a_x′, a_y′ and a_z′ are normalised with respect to the amplitude of the reference acceleration a_ref. Measurements are performed from 5 Hz to 3 kHz.
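Equation (8) is the modulus of a phasor sum: the real and imaginary parts expanded there are exactly the cosine and sine sums under the square root. A minimal sketch of the x-axis correction follows (function and variable names are illustrative, not from the paper; the projection factors come from the geometry in (8)):

```python
import cmath
import math

def corrected_ax(a_x_theor, alpha_deg, omega_deg, syst):
    """Eq. (8) written as the modulus of a phasor sum.
    syst maps the reference-system axis labels "x'", "y'", "z'" to
    (amplitude, phase_rad) of the spurious components; geometry factors
    project each of them onto the sensor x-axis."""
    alpha = math.radians(alpha_deg)
    omega = math.radians(omega_deg)
    proj = {"x'": abs(math.cos(alpha) * math.cos(omega)),
            "y'": abs(math.sin(omega)),
            "z'": abs(math.sin(alpha) * math.cos(omega))}
    total = a_x_theor + sum(a * proj[axis] * cmath.exp(1j * phi)
                            for axis, (a, phi) in syst.items())
    return abs(total)
```

The y- and z-axis corrections of equations (9) and (10) have the same structure with their own projection factors.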
Figure 6: Normalised accelerations along the x′-, y′- and z′-axes at a tilt angle of 15°.
Figure 7: Normalised accelerations along the x′-, y′- and z′-axes at a tilt angle of 35°.
Figure 8: Normalised accelerations along the x′-, y′- and z′-axes at a tilt angle of 55°.
Figure 9: Normalised accelerations along the x′-, y′- and z′-axes at a tilt angle of 75°.

The experimental values of the spurious component amplitudes of the accelerations a_x′, a_y′ and a_z′, along the axes of the system, are combined in order to calculate the systematic effects due to the acceleration amplitudes a_x′,syst, a_y′,syst and a_z′,syst along the MEMS sensitivity axes. The graphs of Figures 10-13 show the experimental values of the phase shifts φ_x′, φ_y′ and φ_z′ with respect to the reference acceleration a_ref acting along the vertical axis z′. In this case, the measured phase shifts allow the evaluation of the phase differences with respect to the reference signal a_ref, in terms of φ_x′,syst, φ_y′,syst and φ_z′,syst. Since φ_z′,syst is close to 0° in every configuration, it is not shown in the graphs.

Figure 10: Phase shifts in the x′-y′ horizontal plane, with respect to the vertical axis z′, at a tilt angle of 15°.
Figure 11: Phase shifts in the x′-y′ horizontal plane, with respect to the vertical axis z′, at a tilt angle of 35°.
Figure 12: Phase shifts in the x′-y′ horizontal plane, with respect to the vertical axis z′, at a tilt angle of 55°.
Figure 13: Phase shifts in the x′-y′ horizontal plane, with respect to the vertical axis z′, at a tilt angle of 75°.
Once the acceleration amplitudes a_x′, a_y′ and a_z′ and the phase shifts φ_x′, φ_y′ and φ_z′ have been quantified, the theoretical reference accelerations along the MEMS axes in equations (1)-(3) can be expressed as a_x, a_y, a_z of equations (8)-(10), taking into account the systematic effects due to the vibrational modes of the inclined planes and to the horizontal motions of the shaker. In particular, an increase of the acceleration amplitudes as a function of frequency can be observed along the horizontal axes x′ and y′ and also along the vertical axis z′, the latter mainly due to resonant modes. Moreover, the lateral motions of the shaker, occurring around 80 Hz, are presumably the cause of the large phase shifts observed at low frequencies, as depicted in Figures 10-13; these motions are independent of the resonant modes of the inclined planes, but they nevertheless affect the whole behaviour of the inclined plane, in terms of amplitude and phase differences. The analysis of the systematic effects is therefore an aggregation of both the spurious motions of the shaker and the resonant modes. The standard uncertainties associated with the amplitudes of the spurious components, u(a_x′,syst), u(a_y′,syst), u(a_z′,syst), are considered as Type B uncertainty contributions, with an average error of 0.0025 m·s⁻² from three repeated measurements and a uniform rectangular distribution. The standard uncertainties associated with the phase differences due to the spurious components, u(φ_x′,syst), u(φ_y′,syst), u(φ_z′,syst), are considered as Type A uncertainty contributions, with a maximum standard deviation of 2° from five repeated measurements, as shown in [14]. This correction makes it possible to univocally define the actual projection of the reference acceleration a_ref onto the three axes; thus the "standard" calibration can be achieved, by comparison with the reference transducer within the stroke of the shaker, and can finally be related to the primary standard, as declared.
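The two uncertainty contributions quoted above can be sketched numerically. This is a hedged reading of the statements (the paper does not spell out these formulas here): the rectangular distribution gives u = a/√3 for half-width a, and the repeated phase measurements give a standard deviation of the mean s/√m for m repetitions.

```python
import math

# Type B: rectangular distribution of half-width a (taken as the quoted 0.0025 m/s^2)
a = 0.0025                       # m/s^2, assumed half-width
u_amp = a / math.sqrt(3)         # standard uncertainty of the spurious amplitude

# Type A: standard deviation of the mean of m repeated phase measurements
s_phi = 2.0                      # degrees, quoted maximum standard deviation
m = 5                            # repetitions
u_phase = s_phi / math.sqrt(m)   # standard uncertainty of the phase difference
```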
6. Summary

This paper presents a technical insight into the calibration system, recently realised at INRIM, suitable for the calibration of 3-axis accelerometers in the frequency domain. The procedure allows the simultaneous evaluation of the main and transverse sensitivities on three axes by means of a single-axis vibration excitation of inclined planes. The mechanical calibration system, composed of the shaker and the inclined planes, is characterised in order to take into account the systematic effects occurring during dynamic excitation. The evaluation of the systematic effects, due to the vibrational modes of the inclined aluminium planes and to the small but not negligible horizontal motions of the shaker, is carried out from 5 Hz up to 3 kHz at an amplitude of 10 m·s⁻². The amplitudes of the spurious acceleration components a_x′, a_y′ and a_z′, and the phase shifts φ_x′, φ_y′ and φ_z′ with respect to the reference acceleration a_ref acting along the vertical axis z′, are accurately measured by means of a laser-Doppler vibrometer. This correction makes it possible to univocally define the actual projection of the reference acceleration a_ref onto the three axes; thus the "standard" calibration can be achieved and related to the primary standard.

7. Note

This work is to be considered as an addendum to the paper: "Prato, A., Mazzoleni, F., & Schiavi, A. (2020). Traceability of digital 3-axis MEMS accelerometer: simultaneous determination of main and transverse sensitivities in the frequency domain. Metrologia, 57(3), 035013" [14], which shows in detail the determination of the systematic effects of the calibration system.

8. References

[1] Noel, A. B., Abdaoui, A., Elfouly, T., Ahmed, M. H., Badawy, A., & Shehata, M. S. (2017). Structural health monitoring using wireless sensor networks: a comprehensive survey. IEEE Communications Surveys & Tutorials, 19(3), 1403-1423.
[2] Dehkordi, S. A., Farajzadeh, K., Rezazadeh, J., Farahbakhsh, R., Sandrasegaran, K., & Dehkordi, M. A. (2020). A survey on data aggregation techniques in IoT sensor networks. Wireless Networks, 26(2), 1243-1263.
[3] Deng, X., Jiang, Y., Yang, L. T., Lin, M., Yi, L., & Wang, M. (2019). Data fusion based coverage optimization in heterogeneous sensor networks: a survey. Information Fusion, 52, 90-105.
[4] Adeel, A., Gogate, M., Farooq, S., Ieracitano, C., Dashtipour, K., Larijani, H., & Hussain, A. (2019). A survey on the role of wireless sensor networks and IoT in disaster management. In Geological Disaster Monitoring Based on Sensor Networks (pp. 57-66). Springer, Singapore.
[5] Ge, X., Han, Q. L., Zhang, X. M., Ding, L., & Yang, F. (2019). Distributed event-triggered estimation over sensor networks: a survey. IEEE Transactions on Cybernetics.
[6] Kumar, D. P., Amgoth, T., & Annavarapu, C. S. R. (2019). Machine learning algorithms for wireless sensor networks: a survey. Information Fusion, 49, 1-25.
[7] Kandris, D., Nakas, C., Vomvas, D., & Koulouras, G. (2020). Applications of wireless sensor networks: an up-to-date survey. Applied System Innovation, 3(1), 14.
[8] Priyadarshi, R., Gupta, B., & Anurag, A. (2020). Deployment techniques in wireless sensor networks: a survey, classification, challenges, and future research issues. The Journal of Supercomputing, 1-41.
[9] Zareei, M., Vargas-Rosales, C., Anisi, M. H., Musavian, L., Villalpando-Hernandez, R., Goudarzi, S., & Mohamed, E. M. (2019). Enhancing the performance of energy harvesting sensor networks for environmental monitoring applications. Energies, 12(14), 2794.
[10] Carminati, M., Kanoun, O., Ullo, S. L., & Marcuccio, S. (2019). Prospects of distributed wireless sensor networks for urban environmental monitoring. IEEE Aerospace and Electronic Systems Magazine, 34(6), 44-52.
[11] Bruns, T., & Eichstädt, S. (2018, August). A smart sensor concept for traceable dynamic measurements.
In Journal of Physics: Conference Series (Vol. 1065, No. 21, p. 212011). IOP Publishing.
[12] Dorst, T., Ludwig, B., Eichstädt, S., Schneider, T., & Schütze, A. (2019, May). Metrology for the factory of the future: towards a case study in condition monitoring. In 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) (pp. 1-5). IEEE.
[13] Seeger, B., Bruns, T., & Eichstädt, S. (2019). Methods for dynamic calibration and augmentation of digital acceleration MEMS sensors. In 19th International Congress of Metrology (CIM2019) (p. 22003). EDP Sciences.
[14] Prato, A., Mazzoleni, F., & Schiavi, A. (2020). Traceability of digital 3-axis MEMS accelerometer: simultaneous determination of main and transverse sensitivities in the frequency domain. Metrologia, 57(3), 035013.
[15] Schiavi, A., Mazzoleni, F., & Germak, A. (2015). Simultaneous 3-axis MEMS accelerometer primary calibration: description of the test-rig and measurements. XXI IMEKO World Congress on Measurement in Research and Industry, 30, 2161-2164.
[16] ISO 16063-21 (2003). Methods for the calibration of vibration and shock transducers – Part 21: Vibration calibration by comparison to a reference transducer (Geneva: International Organization for Standardization).
[17] D'Emilia, G., Gaspari, A., Natale, E., Mazzoleni, F., & Schiavi, A. (2018). Calibration of tri-axial MEMS accelerometers in the low-frequency range – Part 1: comparison among methods. Journal of Sensors and Sensor Systems, 7(1), 245-257.
[18] D'Emilia, G., Gaspari, A., Natale, E., Mazzoleni, F., & Schiavi, A. (2018). Calibration of tri-axial MEMS accelerometers in the low-frequency range – Part 2: uncertainty assessment. Journal of Sensors and Sensor Systems, 7(1), 403-410.
[19] D'Emilia, G., Gaspari, A., Mazzoleni, F., Natale, E., Prato, A., & Schiavi, A. (2020). Metrological characterization of MEMS accelerometers by a LDV. AIVELA.
[20] A. Prato, F. Mazzoleni and A.
Schiavi, "Metrological traceability for digital sensors in smart manufacturing: calibration of MEMS accelerometers and microphones at INRIM," 2019 IEEE International Workshop on Metrology for Industry 4.0 and IoT, 371-375, 2019.
[21] M. Galetto, A. Schiavi, G. Genta, A. Prato and F. Mazzoleni, "Uncertainty evaluation in calibration of low-cost digital MEMS accelerometers for advanced manufacturing applications," CIRP Annals 68, 535-538, 2019.
[22] A. Schiavi, A. Prato, F. Mazzoleni, G. D'Emilia, A. Gaspari, E. Natale, "Calibration of digital 3-axis MEMS accelerometers: a double-blind «multi-bilateral» comparison", 2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT.
[23] A. Prato, A. Schiavi, F. Mazzoleni, A. Touré, G. Genta, M. Galetto, "A reliable sampling method to reduce large sets of measurements: a case study on the calibration of digital 3-axis MEMS accelerometers", 2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT.
[24] JCGM 100 (2008). Evaluation of measurement data – Guide to the expression of uncertainty in measurement (GUM). Joint Committee for Guides in Metrology, Sèvres, France.
[25] ISO 16063-11 (1999). Methods for the calibration of vibration and shock transducers – Part 11: Primary vibration calibration by laser interferometry (Geneva: International Organization for Standardization).

Multilayer feature fusion using covariance for remote sensing scene classification
ACTA IMEKO, ISSN 2221-870X, March 2022, Volume 11, Number 1, 1-8
S. Thirumaladevi1, K. Veera Swamy2, M.
Sailaja3
1 ECE Department, Jawaharlal Nehru Technological University, Kakinada 533003, Andhra Pradesh, India
2 ECE Department, Vasavi College of Engineering, Ibrahimbagh, Hyderabad 500 031, Telangana, India
3 ECE Department, Jawaharlal Nehru Technological University, Kakinada 533003, Andhra Pradesh, India

Section: Research Paper
Keywords: feature extraction; pre-trained convolutional neural networks; support vector machine; scene classification
Citation: S. Thirumaladevi, K. Veera Swamy, M. Sailaja, Multilayer feature fusion using covariance for remote sensing scene classification, Acta IMEKO, vol. 11, no. 1, article 33, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-33
Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received December 25, 2021; in final form February 20, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.
Corresponding author: S. Thirumaladevi, e-mail: thirumaladeviece1@gmail.com

1. Introduction

Remote sensing scene categorisation has recently received a great deal of attention and can be utilised in a variety of practical applications, such as urban planning, defence and space applications, in which measurement technology plays a key role [1]. On the other hand, it is a difficult challenge, since scene images often have complicated spatial structures, with great intra-class and slight inter-class variability. To solve this problem, numerous strategies for scene classification have been proposed in recent years [2]. Recently, inspired by the tremendous achievements of convolutional neural networks (CNNs) in the computer vision field [3],
deep neural networks have gained prominence in the remote sensing community due to their exceptional performance, particularly in scene classification and computer vision applications [4]. developing a deep cnn model from scratch, on the other hand, frequently necessitates a large amount of training data, whereas publicly available remote sensing scene image data sets are typically small. deep cnn models have a high degree of generalization on a wide range of tasks (e.g., scene classification and object detection [6]) because they are commonly trained on imagenet [5], which contains millions of images. in this context, the idea of using off-the-shelf pretrained cnn models, for example alexnet [7], visual geometry group (vgg) 16 [8], and vgg19, as feature extractors for remote sensing scene categorization has gained traction. their success is due to the fact that these models represent images using a hierarchical architecture and can extract more representative features, achieving excellent categorization performance. hu et al. [9] looked into two scenarios for using a pretrained cnn model (vgg16).

abstract

remote sensing images are obtained by electromagnetic measurement from the terrain of interest. in high-resolution remote sensing (hrrs) image feature extraction, measurement technology plays a vital role. scene classification is an interesting and challenging problem due to the similarity of image structures, and the available hrrs image data sets are all small. training new convolutional neural networks (cnns) on small data sets is prone to overfitting and poor generalization. to overcome this, we use the features produced by pre-trained convolutional networks to train an image classifier. to retrieve informative features from these images we use the existing alexnet, vgg16, and vgg19 frameworks as feature extractors.
to increase classification performance further, we make an innovative contribution: the fusion of multilayer features by using covariance. first, a pre-trained cnn model is used to extract multilayer features. the features are then stacked, with downsampling used to stack features of different spatial dimensions together, and the covariance of the stacked features is calculated. finally, the resulting covariance matrices are employed as features in a support vector machine classification. experiments were conducted on two challenging data sets, uc merced and siri-whu. the proposed stacked covariance method consistently achieves better classification performance, improving accuracy by an average of 6 % and 4 %, respectively, compared to the corresponding pre-trained cnn scene classification methods.

in the first scenario, the final few fully connected layers are portrayed as the final image attributes for scene classification. in the second case, the final convolutional layer's feature maps are encoded to represent the input image using a standard feature encoding method, such as the improved fisher kernel [10]. the support vector machine (svm) is used as the final classifier in both cases. to improve efficacy, the features extracted from multiple cnns for the same image were combined by xue et al. [11] for classification. for feature fusion, sun et al. [12] used the gated bidirectional connection method. in [13], the image is represented by combining the last two fully connected (fc) layers of a cnn model. here we propose an innovative method, called the stacked covariance (sc) strategy, to fuse features from different layers of a pre-trained cnn to classify remote sensing scenes. in the first phase, a pre-trained cnn model is used to extract multilayer features and concatenate them.
the covariance approach is used to aggregate the concatenated feature vectors extracted from different layers. in contrast to traditional strategies, which only use first-order statistics to integrate feature vectors, the proposed strategy makes use of second-order statistical information; more representative features can thus be learned. then, the features are stacked and the covariance is calculated. finally, an svm classifier is used for classification, improving the classification performance. the rest of the paper is organized as follows. section 2 explains the intended scene classification framework and the novel aspects of our proposed technique. section 3 contains the full experimental results for two data sets, and section 4 concludes with some observations.

2. proposed technique description

feature extraction is the process of transforming the raw image into numerical features that can be processed while retaining the original information. with the upsurge of deep learning, the first layers of deep networks have largely replaced hand-crafted feature extraction, particularly for image data. pretrained networks with a hierarchical architecture can extract a large number of features from an image, which are thought to convey additional information that can be put to much better use to increase categorization accuracy. the learned image features are first retrieved from a pre-trained convolutional neural network and then used to train an image classifier. all pretrained cnns require fixed-size input images; therefore, augmented image data stores are created and passed as input arguments to the activations so that the training and test images are automatically resized before they are submitted to the network. we remove the pre-trained cnn's last fc layer (fc8) and consider the rest as a fixed feature extractor.
we feed an input image into the cnn and extract a d-dimensional activation vector from the first or second fc layer, using this vector as a global feature representation of the input image. finally, these features are used to train a linear svm classifier for scene classification. figure 1 shows an illustration of this. to improve the classification accuracy further, we propose a modified pre-trained network design that combines information from several convolutional layers. the shallower levels of a cnn model are more likely to represent low-level visual components (such as edges), whereas the deeper layers capture more abstract information in the images. furthermore, in certain computer vision applications, combining different levels, from shallow to deep, can provide state-of-the-art performance, meaning that merging different layers of a cnn can be very helpful. our proposed approach uses a similar strategy to take advantage of the information held by multiple layers. this is represented in figure 2. here, the convolutional layers of the last three blocks of the pretrained networks are adopted and the features extracted from these layers are concatenated, namely "conv3", "conv4", "conv5" in the case of alexnet and "conv3-3", "conv4-3", "conv5-3" in the case of vgg16 and vgg19. since different convolutional layers predominantly have distinctive spatial dimensions, they cannot be directly concatenated. to address this issue, downsampling with bilinear interpolation is used in conjunction with channel-wise average fusion. the obtained features are then reformed into a matrix along the channel dimension and aggregated using covariance. the proposed technique is described below. a cnn model is a collection of functions in which each function f_n takes data samples X_n and a filter bank b_n as inputs and outputs X_(n+1), where n = 1, 2, ..., N and N is the number of layers, represented as

F(X) = f_N(... f_2(f_1(X; b_1); b_2) ...; b_N). (1)
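as a sketch of this nested composition of eq. (1), the network can be modelled as a chain of functions applied to the input; the toy filter banks below are random stand-ins for illustration only, not real cnn weights:

```python
import numpy as np
from functools import reduce

def make_layer(b):
    """One stage f_n(X; b_n): a toy linear map plus relu nonlinearity,
    standing in for a real convolutional layer with filter bank b_n."""
    return lambda X: np.maximum(b @ X, 0.0)

def F(X, banks):
    """Eq. (1): F(X) = f_N(... f_2(f_1(X; b_1); b_2) ...; b_N)."""
    return reduce(lambda Xn, b: make_layer(b)(Xn), banks, X)

# hypothetical filter banks b_1 ... b_3 (random weights, illustration only)
rng = np.random.default_rng(0)
banks = [rng.standard_normal((8, 16)),
         rng.standard_normal((4, 8)),
         rng.standard_normal((2, 4))]

X = rng.standard_normal(16)   # a flattened toy "image"
print(F(X, banks).shape)      # (2,)
```

each intermediate output X_(n+1) corresponds to one layer's feature map; in the proposed method, three of these intermediates (not only the final output) are retained for fusion.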
the filter bank b_n of a pretrained cnn model was learned from a large data collection. the multilayer features are retrieved from an input image X as follows: L_1 = f_1(X; b_1), L_2 = f_2(X; b_2), and so on. alexnet, vgg16, and vgg19 are employed as pretrained models in this paper, and the features produced from the convolutional layers of the last three blocks of the pretrained networks are adopted. different convolutional layers typically have different spatial dimensions; therefore, they cannot be concatenated directly: conv3 may have L_1 ∈ R^(h1×w1×d1), conv4 L_2 ∈ R^(h2×w2×d2), and conv5 L_3 ∈ R^(h3×w3×d3). downsampling with bilinear interpolation, together with channel-wise average fusion, is used to solve this problem. the three convolutional layers are downsampled to a common predefined spatial dimension and channel-wise average fusion is performed; the stacked feature set is then acquired as L = [L_1, L_2, L_3] ∈ R^(S×S×D), where D is the total number of stacked channels and S is the predefined down-sampled spatial dimension. the covariance-based pooling can be written as [14]

P = 1/(N−1) Σ_(i=1)^N (Y_i − μ)(Y_i − μ)^T ∈ R^(D×D), (2)

where [Y_1, Y_2, ..., Y_N] ∈ R^(D×N) is the vectorization of L, N = S^2 and μ = (1/N) Σ_(i=1)^N Y_i ∈ R^D. the covariance between two separate feature maps is represented by the off-diagonal entries of P, while the variance of each map is represented by the diagonal entries. this method incorporates covariance (i.e., second-order statistics) to produce a more compact and discriminative representation; each entry of the covariance matrix represents the correlation between two distinct feature maps.

figure 1. classification using a single-layered pre-trained cnn as a feature extractor.
figure 2. classification using a stacked multilayer pre-trained cnn as a feature extractor.
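the stacking, downsampling and covariance pooling of eq. (2), together with the log-euclidean mapping of eq. (3) applied before the svm, can be sketched in numpy as follows; nearest-neighbour resampling and random arrays stand in for the paper's bilinear interpolation and real cnn activations:

```python
import numpy as np

def downsample(L, S):
    """Resample an h x w x d feature map to S x S x d (nearest-neighbour
    stand-in for the bilinear interpolation used in the paper)."""
    h, w, _ = L.shape
    rows = np.arange(S) * h // S
    cols = np.arange(S) * w // S
    return L[np.ix_(rows, cols)]

def covariance_pool(layers, S):
    """Eq. (2): stack downsampled multilayer features and compute the
    D x D covariance descriptor over the N = S^2 spatial positions."""
    L = np.concatenate([downsample(Li, S) for Li in layers], axis=2)  # S x S x D
    D = L.shape[2]
    Y = L.reshape(-1, D).T                  # D x N
    N = Y.shape[1]
    mu = Y.mean(axis=1, keepdims=True)
    return (Y - mu) @ (Y - mu).T / (N - 1)

def logm_spd(P, eps=1e-10):
    """Eq. (3): log-euclidean mapping P_hat = U log(Sigma) U^T
    of a symmetric positive-definite matrix."""
    w, U = np.linalg.eigh(P)
    return U @ np.diag(np.log(np.maximum(w, eps))) @ U.T

# hypothetical feature maps standing in for conv3/conv4/conv5 activations
rng = np.random.default_rng(1)
conv3 = rng.standard_normal((28, 28, 4))
conv4 = rng.standard_normal((14, 14, 4))
conv5 = rng.standard_normal((7, 7, 4))
P = covariance_pool([conv3, conv4, conv5], S=7)   # D = 3 * 4 = 12 channels
P_hat = logm_spd(P)
print(P.shape, np.allclose(P, P.T))  # (12, 12) True
```

the flattened upper triangle of P_hat (it is symmetric) is what would then be fed to the linear svm.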
this is an easy way to merge complementary information from different feature maps. the suggested method differs from existing pre-trained cnn-based algorithms: by concatenating the cnn's individual convolutional features (from shallow to deep layers), feature maps from several layers are merged, and as a result the suggested technique performs much better in terms of categorization. furthermore, because covariance matrices do not lie in euclidean space, they cannot be processed directly by the svm. a covariance matrix can, however, be mapped into euclidean space using the matrix logarithm operation [15]:

P̂ = logm(P) = U log(Σ) U^T ∈ R^(D×D), (3)

where P = U Σ U^T is the eigendecomposition of P. the preceding operations are carried out on both the training and testing samples. the training set {V_i, S_i}, i = 1, 2, ..., n, is now taken into account, where V_i are the mapped covariance features, S_i the corresponding labels and n the number of training samples. {V_i, S_i}, i = 1, 2, ..., n is exploited to train an svm model as

min_(a,b,ζ) { (1/2) ||a||^2 + C Σ_i ζ_i }
subject to S_i (⟨φ(V_i), a⟩ + b) ≥ 1 − ζ_i, ζ_i ≥ 0, i = 1, 2, ..., n, (4)

where a and b are the parameters of the linear classifier, φ(.) is the mapping function and ζ_i are non-negative slack variables that cope with outliers in the training set. with the linear kernel k(V_i, V_j) = V_i^T V_j, the decision function is

f(v) = sgn( Σ_(i=1)^n S_i λ_i k(v_i, v) + b ). (5)

3. experimental results, analysis and discussion

3.1 experimental data sets

we run tests on two challenging remote sensing scene image data sets to see how well the suggested approach performs. 1) uc merced land use data set [16]: 2100 images are classified into 21 scene classes in the uc merced land use (uc) [17] data set. each class consists of 100 rgb images with a size of 256 × 256 pixels and a one-foot pixel resolution. figure 3 depicts sample images from each class.

figure 3.
land-use categories of 21 example classes of the uc merced data set: a) agricultural, b) airplane, c) baseball diamond, d) beach, e) buildings, f) chaparral, g) dense residential, h) forest, i) freeway, j) golf course, k) harbour, l) intersection, m) medium residential, n) mobile home park, o) overpass, p) parking lot, q) river, r) runway, s) sparse residential, t) storage tanks, u) tennis court.

some categories (forest and sparse residential, for example) exhibit a significant level of inter-class similarity, making the uc data set a difficult one to work with. 2) siri-whu data set: siri-whu [18] was obtained from google earth (google inc.) and covers urban regions in china. the data set contains 12 classes; each class has 200 images, cropped to 200 × 200 pixels with a spatial resolution of 2 m. in this study, 80 % of the samples of the siri-whu [19] google data set were chosen for training, while the remaining samples were kept for testing. sample images of the siri-whu data set are shown in figure 4.

3.2 experimental setup

in our approach, multilayer features are extracted using three well-known pretrained cnn models: alexnet [7], vgg-19 [8], and vgg-16 [8]. for vgg-16 and vgg-19, the three convolutional layers "conv3-3", "conv4-3", and "conv5-3" are used; for alexnet, the three convolutional layers "conv3", "conv4", and "conv5" are used. for feature extraction, the scene images are resized to the size of the input layer, i.e., 227 × 227 × 3 in the case of alexnet and 224 × 224 × 3 for vgg-16 and vgg-19. all models are trained on imagenet. for both the uc data set and the siri-whu data set, 80 % training samples and 20 % testing samples are selected. the most frequently used image classification assessment criteria are the overall accuracy (oa), the confusion matrix and the f1-score.
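the 80 %/20 % per-class split used for both data sets can be sketched as follows; the class labels below are synthetic, while a real experiment would use the actual image lists:

```python
import random

def split_per_class(labels, train_frac=0.8, seed=0):
    """Per-class random split: train_frac of each class for training,
    the rest for testing, as in the uc merced / siri-whu experiments."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = int(round(train_frac * len(idxs)))
        train += idxs[:k]
        test += idxs[k:]
    return train, test

# uc merced: 21 classes x 100 images -> 1680 training / 420 testing samples
labels = [c for c in range(21) for _ in range(100)]
train, test = split_per_class(labels)
print(len(train), len(test))  # 1680 420
```

splitting per class (rather than over the whole pool) keeps the class proportions identical in the training and testing sets.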
confusion matrix: this special matrix is commonly used to visualize the output. each column of the matrix represents a predicted category, while each row signifies an actual category; as a result, evaluation is relatively simple. overall accuracy (oa): the number of correctly categorized images divided by the total number of images in the data set, regardless of which class they belong to. f1-score: the f1-score is a metric used to determine how accurate a test is; it is the harmonic mean of the test's precision and recall. based on the combination of real and predicted categories, for a classification problem with M samples, comprising P positive instances and N negative instances, there are four types of cases: true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). TP represents positive samples that are predicted to be positive, while FN represents positive samples that are predicted to be negative; thus, the number of positive samples is P = TP + FN. similarly, TN denotes the number of negative instances that are identified as negative, while FP denotes the number of negative instances that are predicted to be positive; thus, N = TN + FP denotes the total number of negative samples. the fraction of correct instances is the accuracy, calculated as

accuracy = (TP + TN) / (TP + FN + TN + FP). (6)

the fraction of actually positive instances among all cases predicted to be positive is the precision:

precision = TP / (TP + FP). (7)

the recall is the fraction of all positive samples that are predicted to be positive:

recall = TP / (TP + FN). (8)

the f1-score is a comprehensive evaluation indicator combining precision and recall:

f1 = 2 · precision · recall / (precision + recall). (9)

the confusion matrix, along with the accuracy, is shown in figure 5.
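the four metrics of eqs. (6)-(9) follow directly from the four counts; the counts below are hypothetical, for illustration only:

```python
def classification_metrics(tp, fp, tn, fn):
    """Eqs. (6)-(9): accuracy, precision, recall and F1 from the counts
    of true/false positives and true/false negatives."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# hypothetical counts for one class of a scene data set
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=880, fn=20)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.97 0.9 0.818 0.857
```

note that with many classes the accuracy is dominated by the large TN count, which is why the per-class f1-score is reported alongside the overall accuracy.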
the first column depicts the case in which a single fully connected layer is used as the final feature extractor for scene classification, whereas the second column depicts the proposed sc-based network classification. experiments on the uc data set revealed that the oa of the pre-trained alexnet is 79.76 %, of vgg-19 is 81.19 %, and of vgg-16 is 83.81 %, while using sc and combining the last three conv layers increases the accuracy to 85 %, 87.14 %, and 88.33 %, respectively. the proposed method accomplishes perfect classification performance on the majority of classes, such as agricultural, beach, chaparral, forest, harbour, parking lot and runway; improved classes are buildings, dense residential, baseball diamond and tennis court, and on average the accuracy is increased by 6 % in the case of ucm.

figure 4. example class representation of the siri-whu data set: a) agriculture, b) commercial, c) harbor, d) idle_land, e) industrial, f) meadow, g) overpass, h) park, i) pond, j) residential, k) river, l) water.

figure 5. confusion matrix of the uc merced data set using three pre-trained networks. first column corresponding to single-layered: a) alexnet, b) vgg19, c) vgg16; second column corresponding to multilayer fusion: d) sc-alexnet, e) sc-vgg19, f) sc-vgg16.

table 1. comparison results for the two data sets, ucm and siri-whu.

similar results are obtained in the experiments conducted on the siri-whu data set; the confusion matrices for the fully connected layer treated as the final feature extractor and for the proposed sc-based classification are shown in figure 6. the oa of the single-layer alexnet is 86.52 %, of vgg-19 is 87.6 %, and of vgg-16 is 88.04 %, while the proposed strategy increases the accuracy to 90 %,
91.08 %, and 92.60 %, respectively.

table 1. overall accuracy (%) with 80 % training:
network/method | ucm: pre-trained fc7 as feature extractor | ucm: proposed sc network | siri-whu: pre-trained fc7 as feature extractor | siri-whu: proposed sc network
alexnet | 79.76 | 85 | 86.52 | 90
vgg-vd19 | 81.19 | 87.14 | 87.60 | 91.08
vgg-vd16 | 83.81 | 88.33 | 88.04 | 92.60

in most classes, the suggested technique achieves optimal classification performance (e.g., agriculture, commercial, harbour and meadow); improved classes are industrial, overpass and pond; overall, the accuracy is increased by 4 %, and comparison graphs are shown in figure 7. the related comparison results for the two data sets are shown in table 1. the proposed scenario shows a clear improvement in oa when several conv layers are combined. figure 8 illustrates the f1 scores of the improved classes of the ucm data set. by using the proposed strategy, a considerable number of classes show noticeable improvement: agricultural, beach, harbour and runway reach 100 %, and the dense residential class improves by approximately 40 %. likewise, figure 9 shows the corresponding f1 scores of the pretrained single-layered and proposed networks on the siri-whu data set. as can be seen, the proposed strategy exhibits obvious improvements in most of the classes: on the siri-whu data set, water reaches 100 % and the harbour, idle_land, industrial, overpass and park classes reach above 90 %.

4. conclusion

in this research, we present stacked covariance, a new technique for fusing image features from multiple layers of a cnn for remote sensing scene categorization. feature extraction is performed initially with a pre-trained cnn model, followed by feature fusion with covariance in the presented sc-based classification framework.
more discriminative features are recovered for classification because the proposed scenario takes second-order statistics into account: each feature represents the covariance of two distinct feature maps, and these features are fed to an svm for classification. the effectiveness of the suggested sc method is validated by comparison with state-of-the-art methodologies on two publicly accessible remote sensing scene categorization data sets. using the proposed sc technique, the accuracy attained for most classes shows obvious enhancements, indicating that this is a viable improvement strategy.

figure 6. confusion matrix of the siri-whu data set using three pre-trained networks. first column corresponding to single-layered: a) alexnet, b) vgg19, c) vgg16; second column corresponding to multilayer fusion: d) sc-alexnet, e) sc-vgg19, f) sc-vgg16.

references

[1] l. fang, n. he, s. li, p. ghamisi, j. a. benediktsson, extinction profiles fusion for hyperspectral images classification, ieee trans. geosci. remote sens., vol. 56, no. 3, 2018, pp. 1803–1815. doi: 10.1109/tgrs.2017.2768479
[2] x. bian, c. chen, l. tian, q. du, fusing local and global features for high-resolution scene classification, ieee j. sel. topics appl. earth observ. remote sens., vol. 10, no. 6, jun. 2017, pp. 2889–2901. doi: 10.1109/jstars.2017.2683799
[3] x. lu, b. wang, x. zheng, x. li, exploring models and data for remote sensing image caption generation, ieee trans. geosci. remote sens., vol. 56, no. 4, apr. 2018, pp. 2183–2195. doi: 10.1109/tgrs.2017.2776321

figure 7. comparison of pre-trained networks and proposed sc framework-based networks: a) ucm data set, b) siri-whu data set.
figure 8. comparison between f1 scores of uc merced data set improved classes with pre-trained and proposed framework networks.
figure 9.
comparison between f1 scores of siri-whu data set improved classes with pre-trained and proposed framework networks.

[4] b. m. reddy, m. zia ur rahman, analysis of sar images using new image classification methods, international journal of innovative technology and exploring engineering, vol. 8, no. 8, 2019, pp. 760-764.
[5] j. deng, w. dong, r. socher, l.-j. li, k. li, l. fei-fei, imagenet: a large-scale hierarchical image database, proc. ieee conf. comput. vis. pattern recognit., miami, fl, usa, 20-25 june 2009, pp. 248–255. doi: 10.1109/cvpr.2009.5206848
[6] r. girshick, j. donahue, t. darrell, j. malik, rich feature hierarchies for accurate object detection and semantic segmentation, proc. ieee conf. comput. vis. pattern recognit., columbus, oh, usa, 23-28 june 2014, pp. 580–587. doi: 10.1109/cvpr.2014.81
[7] y. wang, c. wang, l. luo, z. zhou, image classification based on transfer learning of convolutional neural network, chinese control conference (ccc), guangzhou, china, 27-30 july 2019, pp. 7506-7510. doi: 10.23919/chicc.2019.8865179
[8] k. simonyan, a. zisserman, very deep convolutional networks for large-scale image recognition, 3rd iapr asian conference on pattern recognition (acpr), kuala lumpur, malaysia, 3-6 november 2015, pp. 1-13. doi: 10.1109/acpr.2015.7486599
[9] s. tammina, transfer learning using vgg-16 with deep convolutional neural network for classifying images, international journal of scientific and research publications (ijsrp), vol. 9, no. 10, 2019, pp. 143-150.
doi: 10.29322/ijsrp.9.10.2019.p9420
[10] s. putluri, m. z. ur rahman, s. y. fathima, cloud-based adaptive exon prediction for dna analysis, healthcare technology letters, vol. 5, no. 1, 2018, pp. 25-30. doi: 10.1049/htl.2017.0032
[11] y. bi, b. xue, m. zhang, genetic programming with image-related operators and a flexible program structure for feature learning in image classification, ieee transactions on evolutionary computation, vol. 25, no. 1, 2020, pp. 87–101. doi: 10.1109/tevc.2020.3002229
[12] h. sun, s. li, x. zheng, x. lu, remote sensing scene classification by gated bidirectional network, ieee trans. geosci. remote sens., vol. 58, no. 1, 2019, pp. 82–96. doi: 10.1109/tgrs.2019.2931801
[13] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko, vol. 10, no. 2, 2021, pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[14] l. fang, n. he, s. li, a. j. plaza, j. plaza, a new spatial–spectral feature extraction method for hyperspectral images using local covariance matrix representation, ieee trans. geosci. remote sens., vol. 56, no. 6, jun. 2018, pp. 3534–3546. doi: 10.1109/tgrs.2018.2801387
[15] v. arsigny, p. fillard, x. pennec, n. ayache, geometric means in a novel vector space structure on symmetric positive-definite matrices, siam j. matrix anal. appl., vol. 29, no. 1, 2007, pp. 328–347. doi: 10.1137/050637996
[16] uc merced data set. online [accessed december 2019] http://weegee.vision.ucmerced.edu/datasets/landuse.html
[17] i. m. e. zaragoza, g. caroti, a. piemonte, the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis, acta imeko, vol. 10, no. 1, 2021, pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[18] siri-whu data set. online [accessed august 2020] http://www.lmars.whu.edu.cn/prof_web/zhongyanfei/num/google.html
[19] y. liu, y. zhong, f. fei, q.
zhu, q. qin, scene classification based on a deep random-scale stretched convolutional neural network, remote sensing, vol. 10, no. 3, apr. 2018. doi: 10.3390/rs10030444

frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers

acta imeko
issn: 2221-870x
september 2022, volume 11, number 3, 1 - 6

lorenzo capponi1, tommaso tocci2, giulio tribbiani2, massimiliano palmieri2, gianluca rossi2
1 aerospace department, university of illinois at urbana-champaign, 61801 urbana, illinois
2 engineering department, university of perugia, 06122 perugia, italy

section: research paper
keywords: aruco; marker detection; non-contact measurement; structural dynamics
citation: lorenzo capponi, tommaso tocci, giulio tribbiani, massimiliano palmieri, gianluca rossi, frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers, acta imeko, vol. 11, no.
3, article 16, september 2022, identifier: imeko-acta-11 (2022)-03-16
section editor: francesco lamonaca, university of calabria, italy
received july 24, 2022; in final form september 15, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: lorenzo capponi, e-mail: lcapponi@illinois.edu

1. introduction

when a flexible structure is excited at or close to one of its natural frequencies, the resonance phenomenon occurs [1], [2]. in resonance operating conditions, most of the energy is released and the response vibration amplitudes significantly increase. this process generally leads to an increase in vibration fatigue damage [3]-[5]. for this reason, the determination of the modal components (i.e., natural frequencies, mode shapes and damping) is fundamental in any structural dynamics approach, either numerical or experimental, in order to avoid potentially critical conditions [2]. with this perspective, many experimental approaches that allow the determination of modal components have been developed over the years [6]-[8]. impact excitation using modal hammers and shaker excitation are the two most used full-contact experimental approaches [2], [9]. in the last years, many image-based analyses have been introduced for displacement measurement [10], [11], and consequently for structural dynamics, due to several operating advantages, e.g. high spatial density, full-field information and no sensors to be placed on the structure [12]. javh et al. [13] demonstrated hybrid modal-parameter identification of full-field mode shapes using a dslr camera for responses far above the camera's frame rate, employing the lucas-kanade optical flow algorithm [14]. gorjup et al.
[15] researched the full-field 3d operating-deflection-shape (ods) identification using frequency-domain triangulation in the visible spectrum. capponi et al. [16] proposed a methodology based on the thermoelastic principle for visual modal strain determination, which allowed fatigue modal damage identification. one of the most promising approaches for deformation, displacement and motion detection involves markers, either physical or virtual [17]-[19]. virtual markers are often employed as they allow tracking objects in subsequently acquired frames without introducing physical targets [20], [21]. when virtual markers are not available, physical markers are employed [22], [23]; among them, the aruco marker library (aruco: augmented reality university of cordoba) was found to be one of the most effective and robust to detection errors and occlusion [24]-[26]. elangovan et al. [27] used them for decoding contact forces exerted by adaptive hands, while sani and karamian [28] and lebedev et al. [29] employed them for drone quadrotor and uav autonomous navigation and landing, respectively. in relation to the use of fiducial markers for vibration measurement, abdelbarr et al. [30] researched

abstract

the assessment of modal components is a fundamental step in structural dynamics. while experimental investigations are generally performed through full-contact techniques, using accelerometers or modal hammers, this research proposes a non-contact frequency response function identification measurement technique based on aruco square fiducial marker displacement detection. a video of the phenomenon to be analyzed is acquired, and the displacement is measured through the markers, using a dedicated tracking algorithm. the proposed method is presented using a harmonically excited fused filament fabrication-3d-printed flexible structure, equipped with multiple embedded-printed markers, whose displacement is measured with an industrial camera.
comparison with a numerical simulation and an established experimental approach is finally provided for the validation of the results.

structural 3d displacement using aruco markers, while the study of kalybek et al. [31] provides one of the first pieces of evidence of the capability of optical vibration monitoring systems in modal identification. recently, tocci et al. [32] presented an aruco marker-based vibration displacement measurement technique, provided with an uncertainty analysis based on the investigation of the influence of the acquisition parameters. in this research, aruco markers are employed for the determination of the frequency response function (frf) of a flexible structure using image-based analysis. the growing potential of 3d printing is exploited: the tested structure is realised in polylactic acid using the fused filament fabrication 3d printing methodology. the markers are also realised by 3d printing, and they are generated as embedded in the structure during a single printing job. an established experimental frf assessment technique and a numerical model are also provided for the validation of the results. the manuscript is organized as follows. in sec. 2 the theoretical background of structural dynamics and marker detection is given. in sec. 3 the proposed approach is presented, and in sec. 4 the experimental campaign and the numerical model are described. sec. 5 gives the results, while sec. 6 draws the conclusions.

2. theoretical background

2.1. structural dynamics

flexible structures can be represented by n-degrees-of-freedom (dofs) systems using [8]:

M ẍ(t) + D ẋ(t) + K x(t) = D ẏ(t) + K y(t), (1)

where M, D and K are the mass, damping and stiffness matrices, while y(t) and x(t) are the excitation and the response displacements of the dofs, respectively. assuming a harmonic excitation y(t) = Y e^(iωt) and a response x(t) = X e^(iωt), eq.
(1) can be written as [8]:

$(-\omega^2 + i\omega D M^{-1} + K M^{-1})\, X(\omega) = (i\omega D M^{-1} + K M^{-1})\, Y(\omega)$.   (2)

From Eq. (2), the displacement-response amplitude is obtained as [8]:

$\alpha(\omega) = \dfrac{X(\omega)}{Y(\omega)} = \dfrac{i\omega D M^{-1} + K M^{-1}}{-\omega^2 + i\omega D M^{-1} + K M^{-1}}$,   (3)

where $\alpha(\omega)$ defines the receptance matrix, also known as the frequency response function (FRF) from displacement to displacement [2], [8]. Using the eigenvalue notation, $\alpha_{jk}(\omega)$, which relates the j-th response to the k-th excitation, can be written as [2], [8]:

$\alpha_{jk}(\omega) = \sum_{r=1}^{N} \left( \dfrac{R_{jkr}}{i\omega - \lambda_r} + \dfrac{R^*_{jkr}}{i\omega - \lambda^*_r} \right)$,   (4)

where r is the eigenvalue index (i.e., the mode index), * denotes the complex conjugate, $R_{jkr}$ is the modal constant and $\lambda_r$ is the r-th eigenvalue [2].

In experimental modal analysis, several approaches for the excitation and the measurement of the structural dynamics can be employed [2]. The frequency response function of a system can be experimentally determined using FRF estimators [8]. When there is no input noise and the output noise is uncorrelated, the estimator $\hat{\alpha}(\omega)$ is used:

$\hat{\alpha}(\omega) = \dfrac{\hat{S}_{yx}(\omega)}{\hat{S}_{yy}(\omega)}$,   (5)

where $\hat{S}_{yx}(\omega)$ is the cross-spectrum between the input excitation y and the output response x, and $\hat{S}_{yy}(\omega)$ is the auto-spectrum of the input excitation y (the ^ symbol denotes estimation from measurements), defined as:

$\hat{S}_{yx}(\omega) = \frac{1}{T}\, [\hat{Y}^*(\omega)\, \hat{X}(\omega)]$,   (6)

$\hat{S}_{yy}(\omega) = \frac{1}{T}\, [\hat{Y}^*(\omega)\, \hat{Y}(\omega)]$,   (7)

where T is the measurement time length, and $\hat{Y}(\omega)$ and $\hat{X}(\omega)$ are the spectra of the input excitation and of the response, respectively.

For the experimental FRF reconstruction, the identification of the modal parameters is required and, for this purpose, different methods can be used [33], [34]. The preferred procedure in experimental modal analysis consists of using the Least-Squares Frequency Domain (LSFD) approach for the identification of the modal constants from the eigenvalues $\hat{\lambda}_r$ obtained through the Least-Squares Complex Frequency (LSCF) method and the stabilisation chart [35].

2.2.
ArUco marker detection

An ArUco marker is a square marker composed of a wide black border, which facilitates its detection in the image, and an inner binary matrix, which determines its identification number [24], [25]. An example of an ArUco marker is presented in Figure 1. The identification of an ArUco marker in a captured frame requires several computational steps [32], which, as well as the generation of the marker itself, are provided by the dedicated OpenCV Python library. Properly designed image processing (e.g., filters and thresholds) can further facilitate the pattern recognition. The marker detection is based on the identification of its four corners in each captured frame (see Figure 1). From the corners, the spatial coordinates of the centre of the marker $(x_c, y_c)$ are evaluated frame-by-frame during the acquisition [32]:

$C = (x_c, y_c) = k \cdot \left( \frac{1}{4}\sum_{r=1}^{4} |x_r|,\ \frac{1}{4}\sum_{r=1}^{4} |y_r| \right)$,   (8)

where $(x_r, y_r)$ are the coordinates of the r-th vertex and $k$ is the calibration factor from pixel units to SI units, defined as the ratio between the side length of the physical marker in SI units, $d_{SI}$, and the average of the four side lengths (in pixels) of the captured marker in the FOV, $\bar{d}_{px}$:

$k = d_{SI} / \bar{d}_{px}\ \mathrm{[m/pixel]}$.   (9)

Figure 1. Example of an ArUco marker from the original dictionary: corners and reference system.

The calibration factor is evaluated at each newly acquired frame. In this way, if the marker is subjected to non-planar displacements or deformations during the measurement, the calibration factor is re-estimated. The time history of the marker centre, C(t), is obtained by tracking it during the acquisition.

3. ArUco marker-based frequency-response-function identification

With this study, a method for the experimental identification of the frequency response function is proposed. As discussed in Sec.
2.1, the receptance matrix $\hat{\alpha}(\omega)$, estimated from experiments, can be determined using Eq. (5). In Eq. (6), the spectra of the excitation input $\hat{Y}(\omega)$ and of the structure response $\hat{X}(\omega)$ are determined from the ArUco marker centre displacement time histories $C_y(t)$ and $C_x(t)$ (see Eq. (8)):

$\hat{Y}_C(\omega) = \int_{-\infty}^{\infty} C_y(t)\, e^{-i\omega t}\, dt$,   (10)

$\hat{X}_C(\omega) = \int_{-\infty}^{\infty} C_x(t)\, e^{-i\omega t}\, dt$.   (11)

Then, the receptance $\hat{\alpha}(\omega)$ is estimated using:

$\hat{\alpha}(\omega) = \dfrac{\hat{Y}_C^*(\omega)\, \hat{X}_C(\omega)}{\hat{Y}_C^*(\omega)\, \hat{Y}_C(\omega)}$.   (12)

Finally, the FRF reconstruction using the LSFD and LSCF approaches is performed.

4. Experimental research

4.1. Setup

In this research, a Y-shaped specimen, shown in Figure 2, was used [4], [12]. This geometry was chosen for its structural dynamic properties: by using two steel weights (each of 360 g), fixed to each of the arms, the structural dynamics were adjusted to the research needs. The Y-shaped sample was realised in white PLA using an Ultimaker 3 3D printer (100 % infill and 0.1 mm layer height); default values were used for the other printing parameters. In the printing process, three 8 × 8 mm² ArUco markers were embedded in the last four layers of the Y-sample geometry and printed using black PLA material in a single printing job (see Figure 2). The sample was mounted on an electro-dynamic shaker (Sentek L1024 with PA115 power amplifier), as shown in Figure 3. On the shaker fixation, a fourth ArUco marker (printed in black and white on standard 80 g/m² paper, 8 × 8 mm²) was rigidly glued for the input excitation measurement. The marker detection was performed using a FLIR Blackfly S 5 MP monochrome camera with a Sony IMX250 sensor and a Fujinon 12 mm lens. The resolution of the camera was set to 1000 × 850 pixels and the frame rate to 160 fps. The setup also comprises a PCB-352C34 accelerometer, bonded on the shaker fixation for controlling and measuring the input excitation, and a PCB-352C23/NC accelerometer, fixed on one arm of the Y-sample for the response measurement.
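The centre evaluation of Eqs. (8)-(9) and the receptance estimate of Eq. (12) can be sketched in a few lines. The following NumPy illustration assumes per-frame corner coordinates are already available (e.g. from an ArUco detector); all names and values are illustrative, not taken from the authors' implementation:

```python
import numpy as np

# Minimal sketch of Eqs. (8)-(9) and (12): marker centre in SI units from
# the four detected corners, then an H1-type receptance estimate from the
# excitation and response centre time histories.

def centre_si(corners_px, d_si):
    """Eqs. (8)-(9): centre (x_c, y_c) in metres from a (4, 2) pixel array.
    Pixel coordinates are non-negative, so the plain mean equals the mean
    of absolute values used in Eq. (8)."""
    c = np.asarray(corners_px, dtype=float)
    # average of the four side lengths (in pixels) of the captured marker
    sides = np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1)
    k = d_si / sides.mean()            # calibration factor, m/pixel, Eq. (9)
    return k * c.mean(axis=0)          # Eq. (8)

def receptance(c_y, c_x):
    """Eq. (12): receptance from input/output displacement histories."""
    Y = np.fft.rfft(c_y)               # discrete analogue of Eq. (10)
    X = np.fft.rfft(c_x)               # discrete analogue of Eq. (11)
    return np.conj(Y) * X / (np.conj(Y) * Y)

# An 8 mm marker imaged as an axis-aligned 40-pixel square:
# k = 0.008/40 = 0.0002 m/pixel, centre at (0.024, 0.024) m
print(centre_si([(100, 100), (140, 100), (140, 140), (100, 140)], 0.008))
```

Bins where the excitation spectrum vanishes leave the ratio in Eq. (12) undefined, so in practice the estimate is only evaluated over the excited band (here 5-80 Hz).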
The main specifications of the accelerometers used are given in Table 1. For the excitation, a sine sweep of 0.5 g constant amplitude, from 5 Hz to 80 Hz, was supplied to the shaker (closed-loop control), with a sweep rate of 16 oct/min (i.e., approximately 4 sweeps in 68 seconds). The sweep rate was carefully chosen in order to excite the natural frequencies of the sample. However, the measurement with the camera was limited to approximately 45 seconds due to hardware and memory limitations.

4.2. Data acquisition

The markers used in this research are from the ArUco original dictionary, identified as shown in Figure 4. In particular, the markers with ID1 and ID7 are considered as the input reference, while the markers on the two arms (i.e., ID2 and ID5) provide the output displacement. The displacement of the centre points of the four markers, captured during the experiment and evaluated using Eq. (8) and Eq. (9), is shown in Figure 5.

Figure 2. Y-shaped specimen with installed sensors.

Table 1. Technical specifications of the accelerometers used.

Specification            | PCB-352C34       | PCB-352C23/NC
Sensitivity              | 100 mV/g (±10 %) | 5 mV/g (±20 %)
Measurement range        | ±490 m/s² pk     | ±9810 m/s² pk
Frequency range (±5 %)   | 0.5 to 10000 Hz  | 2 to 10000 Hz
Resonant frequency       | ≥ 50 kHz         | ≥ 70 kHz
Broadband resolution     | 0.0015 m/s² rms  | 0.03 m/s² rms
Non-linearity            | ≤ 1 %            | ≤ 1 %
Transverse sensitivity   | ≤ 5 %            | ≤ 5 %

Figure 3. Experimental setup.

Once the displacement time histories are obtained, the FRFs can be evaluated through Eq. (12) and finally reconstructed using the LSFD and LSCF approaches. Similarly, the reference and response acceleration time histories are measured during the excitation (see Figure 6) and the accelerance FRF is evaluated [2].

4.3. Finite element model

A finite element model of the Y-shaped specimen was prepared using commercial software. Figure 7 shows the realised numerical model.
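The constant-amplitude logarithmic sine sweep described in Sec. 4.1 can be generated directly from its octave rate; a minimal sketch (parameter and function names are illustrative, not from the shaker control software):

```python
import numpy as np

# Sketch of a constant-amplitude logarithmic sine sweep, 5-80 Hz at
# 16 oct/min as in the excitation described in Sec. 4.1.
def log_sine_sweep(f0=5.0, f1=80.0, rate_oct_min=16.0, fs=1000.0, amp=1.0):
    rate = rate_oct_min / 60.0                 # octaves per second
    T = np.log2(f1 / f0) / rate                # duration of one sweep, s
    t = np.arange(0.0, T, 1.0 / fs)
    # instantaneous frequency f(t) = f0 * 2**(rate*t); phase is its integral
    phase = 2.0 * np.pi * f0 * (2.0 ** (rate * t) - 1.0) / (rate * np.log(2.0))
    return t, amp * np.sin(phase), T

t, s, T = log_sine_sweep()
print(round(T, 6))   # 15.0 -> one 5-80 Hz sweep (4 octaves) lasts 15 s
```

With 4 octaves from 5 Hz to 80 Hz at 16 oct/min, a single sweep lasts 15 s, consistent with the roughly 4 sweeps quoted above.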
The structure was meshed using solid elements with ten degrees of freedom per node, for a total of 89962 elements and 131580 nodes. To model the external masses attached to the free ends of the structure, two point masses of 0.36 kg each are rigidly connected to the holes on the arms of the structure. Moreover, to accurately replicate the experimental test, an additional mass of 1000 kg (the "large mass" in Figure 7) was connected to the constrained zone of the structure. The displacement along the y axis was not constrained, while all the other degrees of freedom were fixed. In this way it was possible to use the large-mass method to evaluate the frequency response both in terms of displacement and in terms of acceleration. To calculate the numerical frequency response functions (shown in Sec. 5), the modal approach was used. For this reason, a modal analysis was necessary to obtain the natural frequencies of the system and the mode shapes at the points where the responses are to be evaluated. All the frequency response functions shown in Sec. 5 were obtained considering a constant damping ratio of 1 % for each vibrating mode.

5. Results

Four different FRFs are obtained from the combinations of the two markers as input and the two as output. However, as expected, the displacement measured with the ID1 and ID7 markers is fully comparable, and the same holds for the ID2 and ID5 markers, due to geometrical considerations. For the sake of clarity, therefore, only the ID1-ID2 marker FRF will be shown and considered in the further discussion. From the results in Figure 8, the accuracy of the numerical model is verified by comparison with the FRF obtained from the accelerometer-based experiments.

Figure 4. ArUco markers employed.

Figure 5. Measured displacement of the detected markers.

Figure 6. Reference and response accelerations.

Figure 7. FE model of the Y-shaped specimen.

Figure 8.
Experimental and numerical acceleration frequency response functions comparison.

In the considered frequency range, three predominant natural frequencies are clearly identified at approximately 17 Hz, 48 Hz and 69 Hz from both the experiments and the numerical model. Finally, the comparison between the verified numerical model and the proposed approach is performed in terms of the displacement FRF (see Figure 9). In the same frequency range, the same natural frequencies are identified using the ArUco markers with high accuracy. The obtained natural frequencies are presented in Table 2. A slight decrease of the third natural frequency is detected with respect to the numerical model. However, the two experimental approaches give similar results, and this deviation can be attributed to the settings of the numerical model.

6. Conclusions

This study investigates modal component identification using a non-contact measurement approach based on ArUco marker displacement detection. Even though the established full-contact methods are widely used in several research applications, the required instrumentation is expensive and delicate, and the experimental procedures are time-consuming when full-field coverage of the dynamics of a structure is needed. The proposed method, on the other hand, proved highly accurate in the assessment of the natural frequencies of a structure, with a relatively low computational effort and far less expensive instrumentation: each ArUco marker can be considered a sensor, and if multiple markers are placed and detected in the field of view of the camera, more information on the dynamics of the structure can easily be provided. Moreover, using 3D printing technology, embedded sensors are demonstrated to be effective and reliable. Further employment of ArUco markers in structural dynamics will be investigated.

References

[1] D. Benasciutti, Fatigue analysis of random loadings.
A frequency-domain approach, PhD thesis, University of Ferrara, Department of Engineering, 2004.
[2] J. Slavič, M. Boltežar, M. Mršnik, M. Česnik, J. Javh, Vibration fatigue by spectral methods, Elsevier, 2021. DOI: 10.1016/c2019-0-04580-3
[3] D. Benasciutti, R. Tovo, Spectral methods for lifetime prediction under wide-band stationary random processes, Int. J. Fatigue, vol. 27, no. 8, Aug. 2005, pp. 867-877. DOI: 10.1016/j.ijfatigue.2004.10.007
[4] L. Capponi, M. Česnik, J. Slavič, F. Cianetti, M. Boltežar, Non-stationarity index in vibration fatigue: theoretical and experimental research, Int. J. Fatigue, vol. 104, Nov. 2017, pp. 221-230. DOI: 10.1016/j.ijfatigue.2017.07.020
[5] M. Mršnik, J. Slavič, M. Boltežar, Vibration fatigue using modal decomposition, Mech. Syst. Signal Process., vol. 98, Jan. 2018, pp. 548-556. DOI: 10.1016/j.ymssp.2017.03.052
[6] D. J. Ewins, Modal testing: theory and practice, vol. 15, Letchworth: Research Studies Press, 1984.
[7] Z.-F. Fu, J. He, Modal analysis, Elsevier, 2001.
[8] N. M. M. Maia, J. M. M. e Silva, Theoretical and experimental modal analysis, Research Studies Press, 1997.
[9] W. Heylen, S. Lammens, P. Sas, Modal analysis theory and testing, vol. 200, no. 7, Katholieke Universiteit Leuven, Leuven, Belgium, 1997.
[10] T. Tocci, L. Capponi, R. Marsili, G. Rossi, Optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video, Acta IMEKO, vol. 10, no. 4, Dec. 2021, p. 169. DOI: 10.21014/acta_imeko.v10i4.1147
[11] F. Vurchio, G. Fiori, A. Scorza, S. A. Sciuto, Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study, Acta IMEKO, vol. 10, no. 2, Jun. 2021, p. 119. DOI: 10.21014/acta_imeko.v10i2.1047
[12] L. Capponi, J. Slavič, G. Rossi, M. Boltežar, Thermoelasticity-based modal damage identification, Int. J. Fatigue, vol. 137, Aug. 2020, p. 105661. DOI: 10.1016/j.ijfatigue.2020.105661
[13] J. Javh, J. Slavič, M.
Boltežar, Experimental modal analysis on full-field DSLR camera footage using spectral optical flow imaging, J. Sound Vib., vol. 434, 2018, pp. 213-220.
[14] B. D. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. DARPA Image Understanding Workshop, 1981, pp. 121-130.
[15] D. Gorjup, J. Slavič, M. Boltežar, Frequency domain triangulation for full-field 3D operating-deflection-shape identification, Mech. Syst. Signal Process., vol. 133, Nov. 2019, p. 106287. DOI: 10.1016/j.ymssp.2019.106287
[16] L. Capponi, Thermoelasticity-based analysis: collection of Python packages, 2020. DOI: 10.5281/zenodo.4043102
[17] D. G. Lowe, Object recognition from local scale-invariant features, Proc. Seventh IEEE Int. Conf. on Computer Vision, 1999, vol. 2, pp. 1150-1157.
[18] G. Allevi, L. Casacanditella, L. Capponi, R. Marsili, G. Rossi, Census transform based optical flow for motion detection during different sinusoidal brightness variations, J. Phys. Conf. Ser., vol. 1149, no. 1, Dec. 2018, p. 012032. DOI: 10.1088/1742-6596/1149/1/012032
[19] T. Tocci, L. Capponi, R. Marsili, G. Rossi, J. Pirisinu, Suction system vapour velocity map estimation through SIFT-based algorithm, J. Phys. Conf. Ser., vol. 1589, no. 1, Jul. 2020, p. 012004. DOI: 10.1088/1742-6596/1589/1/012004
[20] T. Khuc, F. N. Catbas, Computer vision-based displacement and vibration monitoring without using physical target on structures, in Bridge Design, Assessment and Monitoring, Routledge, 2018, pp. 89-100. DOI: 10.1201/9781351208796-8

Figure 9. Experimental and numerical displacement frequency response functions comparison.

Table 2. Natural frequencies obtained for each technique used. The standard deviation on each value is ±1.28 Hz.
Technique        | 1st mode frequency / Hz | 2nd mode frequency / Hz | 3rd mode frequency / Hz
ArUco markers    | 17.2                    | 48.9                    | 64.0
Accelerometers   | 17.2                    | 49.0                    | 63.9
Numerical model  | 17.2                    | 48.9                    | 69.5

[21] C.-Z. Dong, O. Celik, F. N. Catbas, Marker-free monitoring of the grandstand structures and modal identification using computer vision methods, Struct. Health Monit., vol. 18, no. 5-6, Nov. 2019, pp. 1491-1509. DOI: 10.1177/1475921718806895
[22] F. Lunghi, A. Pavese, S. Peloso, I. Lanese, D. Silvestri, Computer vision system for monitoring in dynamic structural testing, in Role of Seismic Testing Facilities in Performance-Based Earthquake Engineering, vol. 22, M. N. Fardis and Z. T. Rakicevic, eds., Dordrecht: Springer Netherlands, 2012, pp. 159-176. DOI: 10.1007/978-94-007-1977-4
[23] S. W. Park, H. S. Park, J. H. Kim, H. Adeli, 3D displacement measurement model for health monitoring of structures using a motion capture system, Measurement, vol. 59, Jan. 2015, pp. 352-362. DOI: 10.1016/j.measurement.2014.09.063
[24] F. J. Romero-Ramirez, R. Muñoz-Salinas, R. Medina-Carnicer, Speeded up detection of squared fiducial markers, Image Vis. Comput., vol. 76, Aug. 2018, pp. 38-47. DOI: 10.1016/j.imavis.2018.05.004
[25] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, M. J.
Marín-Jiménez, Automatic generation and detection of highly reliable fiducial markers under occlusion, Pattern Recognit., vol. 47, no. 6, Jun. 2014, pp. 2280-2292. DOI: 10.1016/j.patcog.2014.01.005
[26] L. Capponi, T. Tocci, M. D'Imperio, S. H. Jawad Abidi, M. Scaccia, F. Cannella, R. Marsili, G. Rossi, Thermoelasticity and ArUco marker-based model validation of polymer structure: application to the San Giorgio's bridge inspection robot, Acta IMEKO, vol. 10, no. 4, Dec. 2021, p. 177. DOI: 10.21014/acta_imeko.v10i4.1148
[27] N. Elangovan, A. Dwivedi, L. Gerez, C.-M. Chang, M. Liarokapis, Employing IMU and ArUco marker based tracking to decode the contact forces exerted by adaptive hands, in 2019 IEEE-RAS 19th Int. Conf. on Humanoid Robots (Humanoids), Oct. 2019, pp. 525-530. DOI: 10.1109/humanoids43949.2019.9035051
[28] M. F. Sani, G. Karimian, Automatic navigation and landing of an indoor AR.Drone quadrotor using ArUco marker and inertial sensors, in 2017 Int. Conf. on Computer and Drone Applications (IConDA), Nov. 2017, pp. 102-107. DOI: 10.1109/iconda.2017.8270408
[29] I. Lebedev, A. Erashov, A. Shabanova, Accurate autonomous UAV landing using vision-based detection of ArUco marker, in Int. Conf. on Interactive Collaborative Robotics, Springer, 2020, pp. 179-188. DOI: 10.1007/978-3-030-60337-3_18
[30] M. Abdelbarr, Y. L. Chen, M. R. Jahanshahi, S. F. Masri, W.-M. Shen, U. A. Qidwai, 3D dynamic displacement-field measurement for structural health monitoring using inexpensive RGB-D based sensor, Smart Mater. Struct., vol. 26, no. 12, Dec. 2017, p. 125016. DOI: 10.1088/1361-665x/aa9450
[31] M. Kalybek, M. Bocian, N. Nikitas, Performance of optical structural vibration monitoring systems in experimental modal analysis, Sensors, vol. 21, no. 4, Feb. 2021, p. 1239. DOI: 10.3390/s21041239
[32] T. Tocci, L. Capponi, G.
Rossi, ArUco marker-based displacement measurement technique: uncertainty analysis, Engineering Research Express, vol. 3, no. 3, Sep. 2021, p. 035032. DOI: 10.1088/2631-8695/ac1fc7
[33] P. Guillaume, B. Peeters, B. Cauberghe, P. Verboven, Identification of highly damped systems and its application to vibro-acoustic modeling, 2004.
[34] P. Guillaume, P. Verboven, B. Cauberghe, S. Vanlanduit, E. Parloo, G. De Sitter, Frequency-domain system identification techniques for experimental and operational modal analysis, IFAC Proceedings Volumes, vol. 36, no. 16, Sep. 2003, pp. 1609-1614. DOI: 10.1016/s1474-6670(17)34990-x
[35] B. Peeters, H. Van der Auweraer, P. Guillaume, J. Leuridan, The PolyMAX frequency-domain method: a new standard for modal parameter estimation?, Shock and Vibration, vol. 11, no. 3-4, 2004, pp. 395-409. DOI: 10.1155/2004/523692

ZrO2-doped ZnO-PDMS nanocomposites as protective coatings for the stone materials

acta imeko | www.imeko.org | ISSN: 2221-870X | March 2022 | volume 11 | number 1 | pp. 1-6

Maduka L. Weththimuni 1, Marwa Ben Chobba 2, Ilenia Tredici 3, Maurizio Licchelli 1,3

1 Department of Chemistry, University of Pavia, Via T.
Taramelli 12, I-27100, Pavia, Italy
2 National School of Engineering, University of Sfax, Box 1173, 3038 Sfax, Tunisia
3 CISRiC, University of Pavia, Via A. Ferrata 3, I-27100, Pavia, Italy

Section: Research paper
Keywords: ZrO2-doped ZnO; nanocomposites; protective coating; self-cleaning effect
Citation: Maduka L. Weththimuni, Marwa Ben Chobba, Ilenia Tredici, Maurizio Licchelli, ZrO2-doped ZnO-PDMS nanocomposites as protective coatings for the stone materials, Acta IMEKO, vol. 11, no. 1, article 4, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-04
Section editor: Fabio Santaniello, University of Trento, Italy
Received March 3, 2021; in final form February 23, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Maduka L. Weththimuni, e-mail: madukalankani.weththimuni@unipv.it

1. Introduction

Artworks and historical artifacts are among the most important expressions of the culture of every country and represent a symbol of its history. Therefore, the protection and conservation of this patrimony are essential tasks for the scientific community. Cultural heritage buildings undergo various forms of damage, particularly because they are exposed outdoors and are subject to phenomena such as air pollution, water absorption, salt crystallisation, photodegradation, and microorganism colonisation, which cause transformation and deterioration of surfaces [1]-[3]. Moreover, the decay of stone materials is strongly related to their porosity, which implies that highly porous materials often undergo faster degradation than other stone substrates [4]-[6]. Several methodologies have been developed to clean preserved materials belonging to heritage sites as a first step of the conservation process.
Abstract: ZnO is a semiconductor that has found wide application in the optics and electronics areas. Moreover, it is widely used in different technological areas due to its beneficial qualities (high chemical stability, non-toxicity, high photo-reactivity, and cheapness). Based on its antibacterial activity, it has recently found application in preventing the bio-deterioration of cultural heritage buildings. As many authors have suggested, doped ZnO nano-structures exhibit better antibacterial properties than their undoped analogues. In the present work, ZnO nanoparticles doped with ZrO2 have been prepared by a sol-gel method in order to enhance the photocatalytic properties as well as the antibacterial activity of ZnO. Then, the ZrO2-ZnO-PDMS nanocomposite (PDMS: polydimethylsiloxane, used as the binder) was synthesised by in-situ reaction. The resulting nanocomposite has been investigated as a possible protective material for cultural heritage building substrates. The performances of the newly prepared coating were evaluated on three different stones (Lecce stone, Carrara marble and brick) and compared with plain PDMS as a reference coating.

Cleaning procedures may include the use of solvents, chelating agents, and even acidic or basic agents [7]. Nevertheless, the irreversibility of these methods, the risk of altering the artwork, as well as the toxicity of certain products make these cleaning procedures scarcely suitable for application to historic buildings [7]. The bio-cleaning method has been suggested and studied as an alternative technique, using selected microorganisms [8]. Despite the efficiency of this method under laboratory conditions, the need for specific conditions to ensure the viability of these microorganisms makes its practical application very difficult and subject to further studies [7]. The laser method has been considered a friendly technique, both for cleaning heritage structures and for the environment [9]. However, the high cost of this method is still a barrier against its widespread use. A variety of products (biopolymers, ionic liquids, gels, microemulsions, etc.) have also been proposed and used in this field, although their application still has some drawbacks, such as high maintenance costs and toxicological risks [10], [11].

The conservation of monumental cultural heritage with innovative nanocomposites is in the vanguard of conservation science, and a plethora of research activities are dedicated to the design and validation of compatible nanomaterials that may exhibit strengthening, hydrophobic and self-cleaning properties [12]-[14]. Moreover, scientists have focused their research on finding self-cleaning protective materials in order to reduce maintenance costs. In particular, titanium dioxide (TiO2) and zinc oxide (ZnO) nanoparticles (NPs) have been studied and tested for self-cleaning applications [15], [16]. The interest in developing and using these nanosized materials in combination with different binders has increased due to their excellent self-cleaning and antibacterial properties, in addition to their easy, non-toxic and inexpensive application procedures [17], [18]. However, their technological application has some important limitations, among which the easy recombination of charge carriers and the need for ultraviolet (UV) radiation as an excitation source, considered the most restrictive drawback, are due to the broad band gap of the two oxides (ZnO: 3.2 eV; TiO2: 3.3 eV for anatase) [19]. This limitation is particularly restrictive, as UV radiation corresponds to only 3 % of the solar irradiance at the surface of the Earth.
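The band-gap figures quoted above fix the photoactivation threshold via the photon-energy relation lambda_max = h c / E_g; a quick back-of-the-envelope check (constants rounded, function name illustrative):

```python
# Photoactivation requires a photon energy of at least the band gap E_g,
# i.e. wavelengths at or below h*c/E_g.
H_PLANCK = 6.626e-34   # Planck constant, J s
C_LIGHT = 2.998e8      # speed of light, m/s
J_PER_EV = 1.602e-19   # joules per electronvolt

def activation_wavelength_nm(band_gap_ev):
    return H_PLANCK * C_LIGHT / (band_gap_ev * J_PER_EV) * 1e9

print(round(activation_wavelength_nm(3.2)))   # ZnO: ~387-388 nm
print(round(activation_wavelength_nm(3.3)))   # anatase TiO2: ~376 nm
```

Both thresholds fall in the UV-A region, which is why only the small UV fraction of sunlight can drive the photocatalysis of the undoped oxides.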
Therefore, many research groups have focused their efforts on enhancing the photocatalytic as well as the antimicrobial activity of pure NPs by doping them with different ions (transition-metal, non-transition-metal, and non-metal ions) [1]-[20]. The doping process can alter the surface reactivity, functionality and charge of the NPs, with a possible improvement of properties such as the durability, stability, and dispersive ability of the core material [20]. Some authors have also reported that doped nanostructures exhibit better antibacterial properties than their undoped analogues [20]. Zirconium dioxide, or zirconia (ZrO2), is a wide-band-gap p-type semiconductor, used in many fields due to its excellent properties (e.g. good natural colour, high strength, high toughness, high chemical stability, and chemical and microbial resistance) [21].

The main objective of this work was the preparation of ZnO nanoparticles doped with ZrO2 by a sol-gel method, in order to enhance the photocatalytic properties as well as the antibacterial activity of plain ZnO. Moreover, the doped NPs were combined with a binder (polydimethylsiloxane, PDMS) to obtain a nanocomposite, which was tested as a protective material for cultural heritage. First, the synthesised core-shell (ZrO2-ZnO) NPs were characterised by SEM-EDS. After that, the performances of the nanocomposite coating (ZrO2-ZnO-PDMS) as a protective material for stone substrates were evaluated when applied on three different stone types: Lecce stone (LS), brick (B) and Carrara marble (M). PDMS-treated stone specimens were used as the reference for all analyses. The evaluation of the coatings was carried out using different techniques: contact angle and chromatic variation measurements, capillary absorption and water vapour permeability determinations, optical microscopy (visible and UV light), SEM-EDS, and self-cleaning and antibacterial tests.

2.
Materials and methods

2.1. Materials

Analytical-grade sodium hydroxide (NaOH), ethanol (absolute, 99.8 %, EtOH), zinc acetate dihydrate (ZnC4H6O4·2H2O), 2-propanol, zirconium oxychloride octahydrate, orthophosphoric acid, hexamethyldisiloxane, and octamethylcyclotetrasiloxane (D4, utilised as the PDMS precursor) were purchased from Sigma-Aldrich. Cesium hydroxide (CsOH·H2O) was purchased from Alfa Aesar. All the chemicals were used without further purification. Water was purified using a Millipore Organex system (R ≥ 18 MΩ cm). Lecce stone specimens (open porosity > 30 %) were provided by Tarantino and Lotriglia (Nardò, Lecce, Italy), while specimens of brick (open porosity ~24 %) and Carrara marble (open porosity ~0.5 %) were provided by Favret Mosaici s.a.s. (Pietrasanta, Lucca, Italy).

2.2. Preparation, application and testing methods of the nanocomposite

Before treatment, Lecce stone (LS), brick (B), and marble (M) specimens (square, 5 × 5 × 1 cm³ and 5 × 5 × 2 cm³) were smoothed with abrasive carbide paper (no. 180 mesh), washed with deionised water, dried in an oven at 60 °C and stored in a desiccator to reach room temperature; then their dry weight was measured [2], [4]. First, ZrO2-ZnO core-shell NPs (molar ratio about 0.01:1 ZrO2/ZnO) were synthesised by the sol-gel method, as reported in the literature [20], [21]; then, the ZrO2-ZnO-PDMS nanocomposite was synthesised by in-situ reaction. For this purpose, the doped NPs (0.5 % (w/w)) were introduced into the reaction mixture containing octamethylcyclotetrasiloxane (D4, 25 g) and CsOH (0.15 g), used as a catalyst for the ring-opening polymerisation of D4. After ultrasonication (20 minutes), the reaction was carried out at 120 ± 3 °C under vigorous stirring for 2.5 hours in an oil bath; then, hexamethyldisiloxane (0.03 g) was added and the reaction was continued at the same temperature for another 2.5 hours, as recommended in the literature [20].
All the samples (LS, B, and M) were saturated with ethanol by keeping the specimens for at least 6 hours in absolute EtOH, in order to prevent the penetration of the coating inside the pores and to ensure that it remains on the surface of the stone [1]. After saturation, all specimens were treated with the ZrO2-ZnO-PDMS nanocomposite, as well as with plain PDMS (as the reference), by brushing (applied amount: 1.0 ± 0.02 g for each specimen). Specimens treated with the nanocomposite were named Zn-Zr-PDMS_LS, Zn-Zr-PDMS_B, and Zn-Zr-PDMS_M, while reference specimens were labelled PDMS_LS, PDMS_B, and PDMS_M. Optical microscopy observations of the treated specimens were carried out using an Olympus BX51TF polarised light microscope, equipped with the Olympus TH4-200 lamp (visible light) and the Olympus U-RFL-T (UV light). Scanning electron microscopy (SEM) images (backscattered electrons) and energy-dispersive X-ray spectra (EDS) were collected using a Tescan FE-SEM, Mira XMU series, equipped with a Schottky field emission source, operating in both low and high vacuum, located at the Arvedi Laboratory, CISRiC, University of Pavia. The amount of absorbed water as a function of time was determined in accordance with the UNI EN 15801 protocol [22]. Water vapour permeability was determined according to the UNI EN 15803:2010 protocol [23]. Colour changes were measured by a Konica Minolta CM-2600d spectrophotometer, determining the L*, a*, and b* coordinates of the CIELAB space and the global chromatic variations, expressed as ΔE*, according to the UNI EN 15886 protocol [24]. The self-cleaning efficiency of the prepared coatings was assessed using a Multirays photochemical reactor, composed of a UV chamber equipped with 8 UV lamps. The power of each lamp is 15 W, for a total power of 120 W. The reactor is equipped with a rotating disc in order to ensure homogeneous irradiation of all stained samples.
the discoloration of methylene blue (mb) dye (0.1 % wt in ethanol solution), applied on the surface of treated stone specimens and their untreated counterparts, was monitored by measuring chromatic variations (five control points for each sample surface) before and after the application of the mb dye, and after 48 and 96 h of uv exposure. the discoloration parameter d* was determined by using (1) [25]:

d* = |b*(t) − b*(mb)| / |b*(mb) − b*(0)| · 100 %   (1)

where b*(0) is the value of the chromatic coordinate b* before staining, while b*(mb) and b*(t) are the mean values after the application of methylene blue over the surfaces and after t hours of uv-a light exposure, respectively. here, the b* coordinate was considered because this parameter is sensitive to the blue colour.

3. results and discussions

3.1 characterisation of doped nps and treated stone specimens

sem-eds analyses showed that the zro2-doped zno nps are homogeneously dispersed in the pdms binder. most of them are spherical, with a size in the 15-30 nm range. however, some particles displaying a larger size and more irregular shape can be observed, which can be due to occasional aggregation (see figure 1a-b). eds analyses performed on single particles showed the presence of both zirconium and zinc, confirming the expected elemental composition of the doped inorganic nps (figure 1a). optical microscopy observations suggested that the nanocomposite material (zro2-zno-pdms) is homogeneously distributed on the treated stone surfaces (ls, b, and m). moreover, it seems that the coating covered the pores on the surface, acting as a protective layer for the stone (figure 2). this observation was also confirmed by sem experiments (figure 3). quite acceptable chromatic variations (∆e* < 5) were observed (see table 1) after the application of zro2-zno-pdms on any considered substrate, suggesting that the natural colour of the stones is not dramatically affected by the treatment.
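as a minimal numeric sketch of eq. (1) — the function name and the b* readings below are hypothetical illustration values, not measurements from this study:

```python
def discoloration(b_before, b_stained, b_t):
    """discoloration parameter d* of eq. (1): the fraction of the b* shift
    caused by methylene blue that is recovered after t hours of uv-a
    exposure, expressed as a percentage."""
    return abs(b_t - b_stained) / abs(b_stained - b_before) * 100.0

# hypothetical readings: b* = 5 before staining, -25 after mb, -13 after 96 h
d_star = discoloration(5.0, -25.0, -13.0)
print(round(d_star, 1))  # 40.0 -> the coating removed about 40 % of the stain
```

a d* of 100 % would mean the surface fully recovered its pre-staining b* value.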
the corresponding chromatic coordinates are graphically summarised in figure 4. the coordinate l* (related to the lightness) is considerably affected by all the treatments, regardless of the considered treatment. variations of b* (related to the blue-to-yellow change) are more relevant on lecce stone and brick specimens treated with pdms or with the nanocomposite material. the hydrophobic properties of the treated stones are also summarized in table 1. the results indicate that all the stone surfaces show hydrophobic behaviour (contact angle measurements α > 90°) after the treatments. it is worth highlighting that nanocomposite-coated stones showed higher hydrophobic properties than polymer-coated stones. this may be due to the homogeneous distribution of nps in the polymer matrix, which increases the hydrophobic nature of pdms.

figure 1. sem images of doped nps.

table 1. overall chromatic variations and contact angle measurements of treated stones.

samples        | ∆e*        | α (°)
pdms_ls        | 4.4 (±0.2) | 111 (±1)
zn-zr-pdms_ls  | 3.9 (±0.1) | 131 (±2)
pdms_b         | 4.4 (±0.5) | 128 (±1)
zn-zr-pdms_b   | 4.8 (±0.2) | 136 (±2)
pdms_m         | 4.9 (±0.1) | 98 (±2)
zn-zr-pdms_m   | 4.8 (±0.1) | 107 (±3)

figure 2. optical microscope images of treated stones: (a) zn-zr-pdms_ls, (b) zn-zr-pdms_b, and (c) zn-zr-pdms_m.

figure 3. sem images of treated stones: (a) zn-zr-pdms_ls, (b) zn-zr-pdms_b, and (c) zn-zr-pdms_m.

moreover, preliminary data concerning capillary absorption tests confirmed that the treated stones exhibited water-repellent behaviour. as can be seen in table 2, the ca values (related to the first 30 minutes) are affected by both treatments (polymer as well as nanocomposite), while some reduction of the qf value compared to the untreated stone was noted at the end of the test (about 96 h). the treatments, especially the nanocomposite coating, thus seem to provide long-term water-resistant activity.
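the global chromatic variation δe* reported in table 1 is, in the cie76 convention used with the uni en 15886 protocol, the euclidean distance between the l*, a*, b* triplets measured before and after treatment. a minimal sketch with hypothetical readings (the numbers are not data from this study):

```python
import math

def delta_e(lab_before, lab_after):
    """global chromatic variation Δe* as the cie76 euclidean distance
    between two (l*, a*, b*) triplets in cielab space."""
    return math.sqrt(sum((a - b) ** 2 for b, a in zip(lab_before, lab_after)))

# hypothetical l*, a*, b* readings of a stone surface before/after coating
before = (75.0, 2.0, 12.0)
after = (72.0, 2.5, 15.0)
print(round(delta_e(before, after), 2))  # 4.27 -> below the Δe* < 5 threshold
```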
the results are in good agreement with the hydrophobic properties of the treated stones. in addition, vapour permeability was preserved at acceptable levels upon treatment with the zro2-zno-pdms nanocomposite. hence, all these results suggest that the newly prepared coating can be considered a promising protective material.

3.2 analysing the self-cleaning properties

as reported in the introduction, the evaluation of the self-cleaning effectiveness of the newly prepared coating is one of the main aspects of this research study. figure 5 shows (as an example) the behaviour of lecce stone before and after the test. the different behaviour of stone treated with the nanocomposite and with plain pdms can be observed even by the naked eye. in order to evaluate the self-cleaning effect of the coatings, a discoloration test was performed on the treated stones. after the application of the mb solution on the treated surface, the specimens were exposed to uv irradiation. a quantitative evaluation of the self-cleaning behaviour of zro2-zno-pdms on the different stones was obtained by calculating the discoloration parameter d*, which was determined at two different time intervals (see figure 6). it is related to the variation of the b* coordinate (cielab space) and corresponds to the amount of mb removed from the coated surfaces. this test provided quite similar behaviour for ls and b: the coating containing doped nps showed a higher effectiveness than the plain pdms coating. for instance, the discoloration factor related to the new coating is about double compared to pdms (at the end of the test: d*pdms ~ 20 %, d*nanoc ~ 40 % both for ls and b). as reported in the literature, nps as well as doped nps combined with pdms coatings have been used as self-cleaning protective coatings due to their photocatalytic performance under uv light (when nps are present, the discoloration factor is always higher than with plain pdms) [1], [16], [25].
the new coating showed even better results when applied on the marble surface, as the discoloration factor was around 70 % after 96 hours of uv irradiation. the results tally with the reported literature [1]. nevertheless, it should be noted that even plain pdms displays better performances on marble specimens if compared to the other considered stones. although it is difficult to reliably compare the effectiveness of a treatment on different substrates, due to the different original stone properties (e.g. absorbability of the products, porosity, etc.) that may affect the final performance, the experimental data discussed above indicate that the nanocomposite coating (zro2-zno-pdms) provides better results in terms of the self-cleaning effect when compared to plain pdms.

figure 4. chromatic coordinates of treated stones.

table 2. maximum water absorbed per unit area (qf, mg/cm2), capillary water absorption coefficient (ca, mg/(cm2·s1/2)), and values of the water vapour permeability of untreated and treated samples.

samples        | qf in mg/cm2    | ca in mg/(cm2·s1/2) | permeability in g/m2 24 h
ls             | 518.74 (±9.62)  | 8.73 (±0.57)        | 236 (±9)
pdms_ls        | 479.04 (±8.16)  | 4.63 (±0.50)        | 185 (±6)
zn-zr-pdms_ls  | 434.18 (±6.68)  | 2.22 (±0.54)        | 174 (±4)
b              | 431.57 (±12.34) | 2.12 (±0.21)        | 159 (±4)
pdms_b         | 346.66 (±10.49) | 1.53 (±0.44)        | 107 (±4)
zn-zr-pdms_b   | 263.19 (±13.49) | 1.03 (±0.01)        | 95 (±5)

figure 5. images of ls before and after the self-cleaning test.

figure 6. the discoloration percentage (d* (%)) after uv exposure.

4. conclusions

in order to enhance the photocatalytic properties of zno, zro2-doped zno nps were synthesised in the laboratory. after that, the nanocomposite zro2-zno-pdms was synthesized, applied on different stone substrates (ls, b, and m), and its protecting behaviour was compared to that of the well-known pdms. the morphological analysis suggested that the prepared doped nps have a spherical shape and very small sizes (15-30 nm).
this small particle size contributes effectively to the good performance of the prepared coating. for instance, the results of contact angle, chromatic variation, capillary absorption and water vapour permeability measurements indicate a satisfactory protecting behaviour of the resulting coating. moreover, surface analyses suggested that the nanoparticles included in the binder matrix are homogeneously distributed on all the stone surfaces. furthermore, the new coating (zro2-zno-pdms) showed better results when compared to pdms in terms of the self-cleaning effect under uv irradiation, which represents one of the most important results of this research work. further experiments are still in progress to better assess the nanocomposite properties.

references

[1] c. kapridaki, l. pinho, m. j. mosquera, p. maravelaki-kalaitzaki, producing photoactive, transparent and hydrophobic sio2-crystalline tio2 nanocomposites at ambient conditions with application as self-cleaning coatings, appl. catal. b: environ., vol. 156–157, 2014, pp. 416–427. doi: 10.1016/j.apcatb.2014.03.042
[2] m. licchelli, m. malagodi, m. weththimuni, c. zanchi, nanoparticles for conservation of bio-calcarenite stone, appl. phys. a, vol. 114, no. 3, 2014, pp. 673–683. doi: 10.1007/s00339-013-7973-z
[3] m. ricca, e. le pera, m. licchelli, a. macchia, m. malagodi, l. randazzo, n. rovella, s. a. rule, m. l. weththimuni, m. f. la russa, the crati project: new insights on the consolidation of salt weathered stone and the case study of san domenico church in cosenza (south calabria, italy), coat., vol. 9, no. 5, 2019, pp. 330-345. doi: 10.3390/coatings9050330
[4] m. licchelli, m. malagodi, m. l. weththimuni, c. zanchi, water-repellent properties of fluoroelastomers on a very porous stone: effect of the application procedure, prog. in org. coat., vol. 76, no. 2-3, 2013, pp. 495-503. doi: 10.1016/j.porgcoat.2012.11.005
[5] m. licchelli, s. j. marzolla, a. poggi, c.
zanchi, crosslinked fluorinated polyurethanes for the protection of stone surfaces from graffiti, j. cult. herit., vol. 12, 2011, pp. 34–43. doi: 10.1016/j.culher.2010.07.002 [6] m. licchelli, m. malagodi, m. weththimuni, c. zanchi, antigraffiti nanocomposite materials for surface protection of a very porous stone, appl. phys. a, vol. 116, no. 4, 2014, pp. 1525-1539. doi: 10.1007/s00339-014-8356-9 [7] e. balliana, g. ricci, c. pesce, e. zendri, assessing the value of green conservation for cultural heritage: positive and critical aspects of already available methodologies, int. j. conserv. sci, vol. 7, no. 1, 2016, pp. 185-202. online [accessed 9 march 2022] http://www.ijcs.uaic.ro/public/ijcs-16-si01_balliana.pdf [8] g. alfano, g. lustrato, c. belli, e. zanardini, f. cappitelli, e. mello, c. sorlini, g. ranalli, the bioremoval of nitrate and sulfate alterations on artistic stonework: the case-study of matera cathedral after six years from the treatment, int. biodeter. biodegr, vol. 65, no. 7, 2011, pp. 1004–1011. doi: 10.1016/j.ibiod.2011.07.010 [9] q. h. tang, d. zhou, y. l. wang, g. f. liu, laser cleaning of sulfide scale on compressor impeller blade, appl. surf. sci., vol. 355, 2015, pp. 334–340. doi: 10.1016/j.apsusc.2015.07.128 [10] p. baglioni, d. berti, m. bonini, e. carretti, l. dei, f. fratini, r. giorgi, micelle, microemulsions, and gels for the conservation of cultural heritage, adv. colloid. interface. sci, vol. 205, 2014, pp. 361–371. doi: 10.1016/j.cis.2013.09.008 [11] j. a. l. domingues, n. bonelli, r. giorgi, e. fratini, f. gorel, p. baglioni, innovative hydrogels based on semi-interpenetrating p(hema)/pvp networks for the cleaning of water-sensitive cultural heritage artifacts, langmuir, vol. 29, no. 8, 2013, pp. 2746−2755. doi: 10.1021/la3048664 [12] c. kapridaki, a. verganelaki, p. dimitriadou, p. 
maravelaki-kalaitzaki, conservation of monuments by a three-layered compatible treatment of teos-nano-calcium oxalate consolidant and teos-pdms-tio2 hydrophobic/photoactive hybrid nanomaterials, materials, vol. 11, no. 5, 2018, pp. 684 (23 pages). doi: 10.3390/ma11050684
[13] f. gherardi, m. roveri, s. goidanich, l. toniolo, photocatalytic nanocomposites for the protection of european architectural heritage, materials, vol. 11, no. 1, 2018, pp. 65 (15 pages). doi: 10.3390/ma11010065
[14] m. l. weththimuni, m. licchelli, m. malagodi, n. rovella, m. f. la russa, consolidation of bio-calcarenite stone by treatment based on diammonium hydrogenphosphate and calcium hydroxide nanoparticles, measurement, vol. 127, 2018, pp. 396-405. doi: 10.1016/j.measurement.2018.06.007
[15] p. munafò, g. b. goffredo, e. quagliarini, tio2-based nanocoatings for preserving architectural stone surfaces: an overview, constr. build. mater., vol. 84, 2015, pp. 201–218. doi: 10.1016/j.conbuildmat.2015.02.083
[16] v. crupi, b. fazio, a. gessini, z. kis, m. f. la russa, d. majolino, c. masciovecchio, m. ricca, b. rossi, s. a. ruffolo, v. venuti, tio2–sio2–pdms nanocomposite coating with self-cleaning effect for stone material: finding the optimal amount of tio2, constr. build. mater., vol. 166, 2018, pp. 464–471. doi: 10.1016/j.conbuildmat.2018.01.172
[17] m. a. aldoasri, s. s. darwish, m. a. adam, n. a. elmarzugi, s. m. ahmed, protecting of marble stone facades of historic buildings using multifunctional tio2 nanocoatings, sustainability, vol. 9, 2017, pp. 1-15. doi: 10.3390/su9112002
[18] m. l. weththimuni, d. capsoni, m. malagodi, m. licchelli, improving the protective properties of shellac-based varnishes by functionalized nanoparticles, coatings, vol. 11, 2021, pp. 419-437. doi: 10.3390/coatings11040419
[19] a. w. xu, y. gao, h. q. liu, the preparation, characterization, and their photocatalytic activities of rare-earth-doped tio2 nanoparticles, j. catal., vol. 207, 2002, pp. 151–157.
doi: 10.1006/jcat.2002.3539
[20] m. s. selim, m. a. shenashen, a. elmarakbi, n. a. fatthallah, s. i. hasegawa, s. a. el-safty, synthesis of ultrahydrophobic thermally stable inorganic-organic nanocomposites for self-cleaning foul release coatings, chem. eng. j., vol. 320, 2017, pp. 653-666. doi: 10.1016/j.cej.2017.03.067
[21] a. k. singh, u. t. nakate, microwave synthesis, characterization, and photoluminescence properties of nanocrystalline zirconia, sci. world j., vol. 2014, 7 pages. doi: 10.1155/2014/349457
[22] uni en 15801:2010, conservazione dei beni culturali, metodi di prova, determinazione dell'assorbimento dell'acqua per capillarità, 2010 [in italian].
[23] uni en 15803:2010, conservazione dei beni culturali, metodi di prova, determinazione della permeabilità al vapore d'acqua, 2010 [in italian].
[24] uni en 15886:2010, conservazione dei beni culturali, metodi di prova, misura del colore delle superfici, uni ente italiano di normazione, 2010 [in italian].
[25] m. b. chobba, m. l. weththimuni, m. messaoud, c. urzi, j.
bouaziz, f. d. leo, m. licchelli, ag-tio2/pdms nanocomposite protective coatings: synthesis, characterization, and use as a self-cleaning and antimicrobial agent, prog. in org. coat., vol. 158, 2021, pp. 106342-106359. doi: 10.1016/j.porgcoat.2021.106342

optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 169-176

tommaso tocci 1, lorenzo capponi 1, roberto marsili 1, gianluca rossi 1
1 department of engineering, university of perugia, via g. duranti 93, 06125 perugia, italy

section: research paper
keywords: thermoelastic stress analysis; optical flow; motion compensation; mechanical stress; experimental mechanics
citation: tommaso tocci, lorenzo capponi, roberto marsili, gianluca rossi, optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video, acta imeko, vol. 10, no. 4, article 27, december 2021, identifier: imeko-acta-10 (2021)-04-27
section editors: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received july 30, 2021; in final form december 9, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: tommaso tocci, e-mail: tommaso.tocci@outlook.it

1. introduction

the thermoelastic stress analysis (tsa) [1]-[4] is an infrared image-based technique for non-contact measurement of stress fields.
this technique is based on the detection and analysis of the amplitude of the temperature fluctuations on the surface of a dynamically loaded mechanical component, performed by a thermal camera. since the temperature fluctuations produced by the load are very small, lock-in [5], [6] data processing is normally applied to the thermal video [7], which makes it possible to emphasise phenomena at particular loading frequencies, reducing the noise effects, normally higher than the thermal fluctuation induced by stress. tsa is widely applied in many mechanical stress analysis applications, fem validation and design comparison, and is very useful for high-resolution stress concentration analysis and for non-contact stress detection in applications where classical sensors have problems or limitations [8]-[11]. for an ideal application of this technique, the specimen surface should not move, or should make only small movements, throughout the test [2]. clearly, this ideal condition is not always achievable since, for example, in the case of a specimen with low stiffness, the deformation and displacement phenomena are very large [3]. this phenomenon is emphasised by the fact that tsa requires the application of dynamic loads with sufficiently high stress levels in order to generate, by the thermoelastic principle, sufficiently high temperature fluctuations. the main problem caused by specimen movement is that, in a sequence of consecutive frames, the same pixel does not always correspond to the same part of the specimen as it moves. this phenomenon affects the lock-in operation, causing alterations in the measured stress field. especially on the border of the mechanical component, it may also happen that a pixel corresponds alternately to a surface point or a background point: this phenomenon is commonly known as the edge effect [1].
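the lock-in processing mentioned above can be sketched as a single-frequency correlation of each pixel's temperature history with sine/cosine references at the loading frequency. the snippet below is an illustrative numpy sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def lockin_amplitude(frames, f_load, fs):
    """single-frequency lock-in on a thermal video (n_frames, h, w):
    correlate each pixel's time signal with sin/cos references at the
    loading frequency and return the peak-amplitude map."""
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref_i = np.sin(2 * np.pi * f_load * t)
    ref_q = np.cos(2 * np.pi * f_load * t)
    sig = frames - frames.mean(axis=0)      # remove the dc (mean) level
    i = np.tensordot(ref_i, sig, axes=1)    # in-phase component per pixel
    q = np.tensordot(ref_q, sig, axes=1)    # quadrature component per pixel
    return 2.0 * np.sqrt(i ** 2 + q ** 2) / n

# synthetic check: a 10 mk oscillation at 5 hz on a 4x4 pixel patch
fs, f = 100.0, 5.0
t = np.arange(1000) / fs
frames = 300.0 + 0.01 * np.sin(2 * np.pi * f * t)[:, None, None] * np.ones((1, 4, 4))
amp = lockin_amplitude(frames, f, fs)
print(np.allclose(amp, 0.01, atol=1e-6))  # True
```

averaging over many cycles is what suppresses noise at frequencies other than the loading frequency.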
therefore, it is necessary to compensate these movements and deformations in the thermal video by means of algorithms commonly called motion compensation [2].

abstract: thermoelastic stress analysis (tsa) is a non-contact measurement technique for stress distribution evaluation. a common issue related to this technique is the rigid displacement of the specimen during the test phase, which can compromise the reliability of the measurement. for this purpose, several motion compensation techniques have been implemented over the years, but none of them is provided through a single measurement and a single sample surface conditioning. for this reason, a motion compensation technique based on optical flow has been implemented, which greatly increases the strength and the effectiveness of the methodology through a single measurement and a single specimen preparation. the proposed approach is based on measuring the displacement field of the specimen directly from the thermal video, through optical flow. this displacement field is then used to compensate for the specimen's displacement on the infrared video, which will then be used for thermoelastic stress analysis. firstly, the algorithm was validated by a comparison with synthetic videos, created ad hoc, and the quality of the motion compensation approach was evaluated on video acquired in the visible range. the research then moved to infrared acquisitions, where the application of tsa gave reliable and accurate results. finally, the quality of the stress map obtained was verified by comparison with a numerical model.

the state of the art relies on sakagami et al. [12], where the compensation of the motion using displacement fields is obtained with digital image correlation (dic) [13], [14] on a second video recorded simultaneously in the visible range.
the motion compensation is then applied to the thermal video. this technique requires the employment of two different cameras and a double surface conditioning of the specimen: the application of a speckle pattern for the dic and of a high-emissivity black paint for the tsa. silva et al. [15] simplified the compensation procedure by limiting it to the use of a single camera for the acquisition of the video for both the dic and the tsa techniques. in this case, however, in order to be able to make displacement measurements with the dic, the speckle must be made with a paint whose emissivity can be detected by an infrared camera. nevertheless, two surface conditionings of the specimen are still required. compared with sakagami et al. [12], since one camera is employed for the two acquisitions, the alignment problems, which are very evident when using two cameras that should ideally be positioned in the same place, are greatly reduced. this research proposes an algorithm that allows the compensation of the rigid motion using a single thermal video with a single-step specimen preparation, from which it is possible to perform both the compensation and the tsa. therefore, compared to the state of the art, it is no longer necessary to first prepare the specimen for measurement with dic, then remove the speckle and finally prepare the surface for analysis by tsa: it is sufficient to carry out only the preparation for the tsa analysis, significantly reducing the time required for the preparation phase. in addition, with a single thermal video it is possible to obtain both the motion compensation field and the stress field, thus also reducing the time of the experimental phase. this technique makes it possible to carry out tests in which it is not possible to repeat the measurement, such as, for example, structural failure or random tests.
in the motion compensation phase, the dic is replaced by a computer vision technique called optical flow [16]-[18], which is based on the motion of visual features, such as corners, edges, ridges and textures, in two consecutive frames of a video scene [19], [20]. within optical-flow methods, the differentiation between the sparse and the dense approach is fundamental [18]. there are many different types of algorithms [21] based on different correlation techniques, such as gradient-based approaches, based on the brightness constancy equation [22], or region-based approaches relying on the correlation of different features, like normalized cross-correlation and laplacian correlation [23], [24]. optical flow has been widely applied to measure displacement fields in solid bodies [25]-[27]. in this work, the dense optical-flow farneback algorithm was implemented [28], [29]. a nylon specimen, realized by additive manufacturing, was tested by applying a sinusoidal load along its y-axis, so that the movement of the specimen is exclusively along the vertical direction. in a preliminary step, the algorithm was validated by analysing synthetically generated videos with imposed motion. the coincidence between the displacement measured by the implemented algorithm and the displacement imposed on the synthetic video was then evaluated. the algorithm was then tested on visible video to evaluate the efficiency of the motion compensation, and it was finally applied to a thermal video for the tsa. the manuscript is organized as follows: in sec. 2, the implemented algorithm, the test bench used and the experimental setup are presented. in sec. 3, the algorithm is validated and the results discussed. sec. 4 draws the conclusions.

2. materials and method

2.1. algorithm implementation

all the algorithms of this work have been implemented in a python environment. below is the description of the two main algorithms: one for measuring the displacement field and one for motion-compensating the thermal video.

2.1.1.
algorithm for displacement field measurement

data processing for the purpose of determining the displacement field is not carried out on the original video matrix but on a copy of it. especially for thermal video, the copy is normalised from the 14 bits of the original video to 8 bits. the compensation field detected on the 8-bit video is then applied to the original 14-bit video. each pair of frames, before being processed by optical flow, is subjected to a pre-processing phase, designed to reduce noise (very evident especially in infrared images) and to produce a mask of the sample, which will be used later. each acquired frame is filtered, in sequence, through a gaussian filter [30], [31] with a 5x5 kernel and a morphological transformation of dilation and erosion [30]-[32] with a 3x3 kernel. this produces a good noise reduction, especially in the thermal image, as shown in figure 1. the result obtained is then processed through an image binarization algorithm [30], which returns a black-and-white binary field indicating the non-presence/presence of the specimen in a given pixel. examples of the specimen mask are shown in figure 2. as can be seen in the figure, the mask quality of the visible image is higher than that of the infrared image, but a good result is obtained in both cases. the farneback optical-flow algorithm [28], [29] is now applied to each pair of frames. each frame of the video is compared with a reference frame, which is usually the first frame of the video. the displacement field measured in this way indicates the displacement of the i-th frame with respect to the reference one, i.e. the displacement undergone by each pixel between the two frames.

figure 1. image pre-processing: (a) original frame (b) filtered frame.

figure 2. specimen mask: (a) visible video (b) infrared video.
the field is displayed in hsv colour code, where the hue indicates the direction of displacement and the colour intensity indicates the magnitude. an example of a rough displacement field is given in figure 3: it can be seen that the displacement is mainly traced along the edges of the specimen and that the inner areas are detected as stationary. this requires a phase of improvement of the displacement field. it starts with an initial masking, which eliminates all displacement vectors detected outside the specimen due to noise. operationally, the displacement matrix is multiplied by the mask matrix, as shown in figure 4. since it was decided to compensate for motion only along the vertical axis, the maximum displacement detected for each line is determined, which is likely to be located along the edges of the specimen. at this point, each row of the matrix has a constant value over the length of the frame, ideally also including areas outside the specimen, as shown in figure 5. a new masking is then performed (figure 6). to better understand this, it can be said that the first masking is intended to prevent the algorithm from taking as valid a maximum displacement that could be detected at a point outside the specimen, while the second is applied to eliminate, line by line, per-pixel displacement values outside the geometry of the specimen. finally, the field of compensated displacements is filtered with a blur filter, in order to make it uniform across pixels, as shown in figure 7. compared to the one shown in figure 3, a greater uniformity and regularity can be observed.

2.1.2. algorithm for motion compensation

motion compensation takes place on the original visible video or on the original 14-bit infrared video, which has not been altered in any way during the previous step. a column vector containing the displacement of each row is then extracted from the displacement field.
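the two masking steps and the per-row maximum described in sec. 2.1.1 can be sketched on toy arrays as follows (the function name is hypothetical, and the final blur filter is omitted):

```python
import numpy as np

def row_compensation_field(dy, mask):
    """refinement of the vertical flow component: mask out vectors
    detected outside the specimen, keep the per-row maximum vertical
    displacement, spread it over the row, and re-apply the mask so
    values exist only on the specimen geometry."""
    dy = dy * mask                                # first masking
    row_max = dy.max(axis=1)                      # max displacement per line
    field = row_max[:, None] * np.ones_like(dy)   # constant along each row
    return field * mask                           # second masking

dy = np.zeros((4, 5))
dy[1, 4] = 3.0   # spurious detection outside the specimen
dy[2, 2] = 5.0   # valid detection on a specimen edge
mask = np.zeros((4, 5))
mask[:, 1:4] = 1.0          # specimen occupies columns 1-3
field = row_compensation_field(dy, mask)
print(field[2, 1], field[1, 1])  # 5.0 0.0
```

the spurious value at (1, 4) is rejected by the first masking, while the valid edge value propagates across row 2 of the specimen only.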
this displacement vector, once inverted, represents the motion to be applied to each row of the original video for motion compensation. in order to improve the compensation and make it smoother and more consistent, this compensation vector is manipulated using a kalman filter [33], as shown in figure 8. this vector is called the compensation vector.

figure 3. example of rough displacement field.

figure 4. first masking operation: incorrect displacement vectors are masked out.

figure 5. calculation of maximum displacement line by line.

figure 6. second masking operation: compensation field is created.

figure 7. example of smooth and masked displacement field.

figure 8. extraction and smoothing of the compensation vector.

the actual compensation is then performed by applying a shift to the lines of the video to be compensated, using the information contained in the compensation vector. the i-th row of the original video is moved to position i+k, where k is the value of the compensation shift, as shown in figure 9. a check is also made to see whether this shift would place the line outside the maximum frame size, in which case no compensation is done for that line. the output is a compensated matrix obtained from the original video, which contains the thermal and stress information but is not manipulated in any way: it is only compensated by moving the position of the lines, without modifying their content.

2.2. test bench and acquisition system

the test bench used during the experimental campaign consists of a sentek l1024m shaker, over which a structure for tensile testing was built, as shown in figure 10. a specimen with a geometry identical to that used by sakagami et al. [12] was used. the specimen was produced using fdm printing in nylon 6-6 with a 100 % infill ratio and a layer height of 0.1 mm. the material has a tensile strength of 80 mpa and an elastic modulus of 2.2 gpa.
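the row-shift compensation of sec. 2.1.2 (move row i to i+k, skipping rows that would leave the frame) can be sketched as follows; the kalman smoothing is omitted and the names are hypothetical:

```python
import numpy as np

def compensate(frame, comp_vec):
    """shift each row i of the original (un-normalised) frame to row
    i + k, where k is the compensation value for that row; rows that
    would fall outside the frame are left uncompensated."""
    out = np.zeros_like(frame)
    h = frame.shape[0]
    for i in range(h):
        k = int(round(comp_vec[i]))
        if 0 <= i + k < h:
            out[i + k] = frame[i]   # content is moved, never modified
    return out

frame = np.arange(16.0).reshape(4, 4)   # stand-in for a 14-bit thermal frame
comp = np.full(4, -1.0)                 # inverted motion: specimen moved 1 row down
restored = compensate(frame, comp)
print(restored[0].tolist())  # [4.0, 5.0, 6.0, 7.0]: row 1 moved back to row 0
```

note that only row positions change; the pixel values themselves are untouched, so the thermal information needed for the lock-in is preserved.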
two pairs of clamps, one connected to the head of the shaker and the other to the fixed part of the structure, were used to fix the specimen, as shown in figure 11. a load cell is positioned at the top of the specimen in order to monitor the loads exchanged during testing. two cameras were used: one for the visible video and one for the infrared video. a canon eos 7d reflex camera with a 24 mm – 70 mm lens, capable of acquiring 1920x1080 pixel resolution video at 30 fps, was used. a flir a6751sc thermal camera with a cooled sensor was used for the infrared video acquisition. this thermal camera captures 640x512 pixel resolution video at a framerate of 100 fps and has a thermal sensitivity of less than 20 mk. both cameras were positioned in front of the specimen at the height of the specimen hole.

2.3. experimental campaign

the experimental campaign was divided into two steps: first, videos were acquired in the visible range at different frequencies in order to test and validate the compensation algorithm. subsequently, infrared videos were acquired at different frequencies and different load levels in order to test the effectiveness of the motion compensation algorithm for the thermoelastic application. a summary is given in table 1.

3. results

3.1. algorithm validation

since a validation phase of the motion compensation algorithm described above is required, it was decided to generate synthetic videos containing a body that is clearly identifiable by the optical flow and has a known imposed displacement. the motion of the body in the synthetic video is imposed in a sinusoidal regime with imposed amplitude and frequency.

figure 9. motion compensation algorithm.

figure 10. test bench.

figure 11. specimen geometry and grasping.

table 1. summary of the experimental campaign.

frequency in hz | sinusoidal load in n | visible | infrared
4               | 1500                 | x       |
4               | 3000                 | x       | x
5               | 1500                 | x       |
5               | 3000                 | x       |
8               | 1500                 | x       |
8               | 3000                 | x       | x

table 2. summary of synthetic videos.

several videos have been generated with different
#syntethic video frequency in hz amplitude in pixel 1 1 5 2 1 10 3 1 20 4 4 5 5 4 10 6 4 20 7 8 5 8 8 10 9 8 20 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 173 operating conditions, as illustrated in fehler! verweisquelle konnte nicht gefunden werden.. in order to validate the displacement fields obtained by optical-flow, a comparison was made between the displacement detected in different survey lines on the object placed in the video and the displacement imposed during the generation of the synthetic video. in each video, three horizontal survey lines are defined: an upper red line, a central green line and a lower blue line. note that the red line and the blue line run alternately from the marker area. some results obtained at 4 hz and 8 hz are shown below. however, validation was carried out for each synthetic video generated, obtaining superimposable results between detected and imposed displacement. it is also reported the hsv displacement field, where it is possible to see how the detected displacement occupies the zone with a colour gradation going from green to violet, which corresponds in fact to the colour connected to displacements perfectly along the y-axis; this gives us a further confirmation of the correct application of the optical-flow. in figure 12 you can see how the green line in the vicinity of the marker areas gives us a maximum displacement of about 5 pixels which is exactly coincident with the imposed one. the same result is obtained at figure 13 where a displacement of 20 pixels is found, coinciding with the imposed one. also, from the point of view of the frequency of the load, it can be said that this face does not have a negative influence on the measurement of the displacement. the compensation algorithm is then validated and tested by computing visible video, which is easier to process due to the lower amount of noise than infrared video. 
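the synthetic-video generation used for the validation can be sketched as follows. this is a minimal illustration under our own assumptions (frame size, square "body", function name); it is not the authors' actual generator, but it produces the same kind of known sinusoidal displacement in pixels.

```python
import numpy as np

def synthetic_frames(n_frames=100, fps=100, freq=1.0, amp=5.0, size=(128, 128)):
    # white square "body" on a dark background, translated vertically by a
    # sinusoid of known amplitude (pixels) and frequency (hz)
    frames = []
    for k in range(n_frames):
        dy = amp * np.sin(2 * np.pi * freq * k / fps)
        img = np.zeros(size, dtype=np.uint8)
        y0 = int(round(size[0] / 2 + dy))
        img[y0 - 10:y0 + 10, 54:74] = 255  # 20x20 pixel body
        frames.append(img)
    return frames
```

since the imposed displacement is known exactly, the displacement detected by the optical-flow can be compared against it, as done for the survey lines above.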
as for the videos in the visible, the quality of motion compensation is verified by comparing the positions of the lower end of the hole in the specimen. a line is placed at the lower end of the hole, as shown in figure 14. it can be seen from the model that, in the uncompensated frame, the position of the hole varies because of the rigid motion. thanks to the motion compensation, instead, the position of the centre of the hole in the i-th frame is brought back to the position of the reference frame. to illustrate the result obtained by means of compensation, figure 15 shows the superimposed image of the 3 frames: reference, uncompensated frame and compensated frame; it is clearly evident that, in the compensated case, the position of the hole is in line with that of the reference frame. the overlapped frames between the reference image and the uncompensated one (figure 16-a) and between the reference image and the compensated one (figure 16-b) are also reported. in this case too, it is clear that in the compensated image the overlapping is almost perfect, because the edges of the reference image are not visible; on the contrary, in the uncompensated one the residual displacement is well evident. the effect of motion compensation was then evaluated by measuring the displacement of the centre of the hole with respect to the reference position. it can be seen that, for a load frequency of 4 hz, a displacement attenuation of approx. 92 % was obtained. at a load frequency of 8 hz, since smaller amplitudes occur as the frequency increases, an attenuation of approx. 80 % is achieved. in both cases, this is a good result. results are shown in table 3.

figure 12. synthetic video #7: (a) survey lines monitored, (b) field of measured displacement, (c) dynamic plot of measured displacement along survey lines.

figure 13.
synthetic video #6: (a) survey lines monitored, (b) field of measured displacement, (c) dynamic plot of measured displacement along survey lines.

figure 14. diagram of the comparison model between the reference hole and the uncompensated/compensated one.

figure 15. comparison of the position of the 3 holes in the reference, uncompensated and compensated frames.

figure 16. frame comparisons: (a) reference – uncompensated, (b) reference – compensated.

table 3. measurement of hole displacement in the uncompensated and compensated configurations.

test | uncompensated displacement in pixels | compensated displacement in pixels
4 hz – frame 11 | 36 | 3
4 hz – frame 118 | 35 | 3
4 hz – frame 226 | 33 | 3
8 hz – frame 16 | 9 | 2
8 hz – frame 124 | 11 | 2
8 hz – frame 228 | 10 | 2

figure 17. 4 hz infrared video: comparison of stress fields of uncompensated and compensated video.
figure 18. 5 hz infrared video: comparison of stress fields of uncompensated and compensated video.
figure 19. 8 hz infrared video: comparison of stress fields of uncompensated and compensated video.

3.2. thermoelasticity application

the following graphs (figure 17, figure 18 and figure 19) compare the stress fields obtained from uncompensated and compensated infrared videos. the stress distribution field is expressed in terms of temperature variation in °c. analysis of the fields clearly shows the improvement in quality produced by the motion compensation. in the uncompensated image, a significant edge-effect component is visible, due to the rigid motion of the specimen. this phenomenon makes it difficult to detect stress zones, especially in the areas close to the borehole, where a higher stress concentration is expected. after compensation, instead, it is easier to identify the typical stress distribution around the hole. it is also noticeable that the field is much sharper than the uncompensated one. in order to validate the result, the tsa stress fields are compared with those obtained from the fem model. a coherent trend of the experimental fields with respect to the numerical ones can be seen. figure 20 shows the comparison of the stress concentration factor at the hole in the uncompensated video, the compensated video and the fem model along the two check lines passing through its centre. in order to compare the experimental stress profiles with the fem, the normalised stress concentration was evaluated, i.e., each series of values was divided by the maximum of the series. it can be seen that, along the vertical direction, in the uncompensated video there is an inversion of the stress concentration at the bottom of the hole (red circle in figure 20), which in this case corresponds to the area between 80 and 100 along the y-direction, due to the edge effect. on the contrary, this inversion of the stress concentration is not present in the compensated video, which presents a trend similar to that of the fem model.

4. conclusions

this research proposed a new motion compensation algorithm for thermoelastic applications that advances the state of the art. the algorithm was first validated by means of a series of synthetic videos generated in such a way as to have a known imposed displacement. then, tests were carried out on videos in the visible range, where the performance of the motion compensation was evaluated: around 92 % for 4 hz displacements and 80 % for 8 hz displacements. stress distribution fields produced from the same video, compensated and uncompensated, were compared. the blurring and edge effects produced by the motion were almost completely eliminated, making it possible to correctly measure the stress field, especially in the area around the hole. this result was compared with stress fields obtained from finite element analysis.
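the attenuation figures quoted above follow directly from the table 3 displacements; a short sketch (our own illustration, not the authors' code) reproduces them:

```python
import numpy as np

# displacement of the hole centre, in pixels, from table 3
uncompensated = {"4 hz": [36, 35, 33], "8 hz": [9, 11, 10]}
compensated = {"4 hz": [3, 3, 3], "8 hz": [2, 2, 2]}

def attenuation_percent(uncomp, comp):
    # mean per-frame attenuation of the residual displacement, in %
    return 100.0 * np.mean(1.0 - np.array(comp) / np.array(uncomp))

att_4hz = attenuation_percent(uncompensated["4 hz"], compensated["4 hz"])  # ≈ 91 %
att_8hz = attenuation_percent(uncompensated["8 hz"], compensated["8 hz"])  # ≈ 80 %
```

the computed values (about 91 % and 80 %) are consistent with the approx. 92 % and 80 % quoted in the text.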
videos were tested at different frequencies in order to verify the robustness of the algorithm. finally, the normalised stress concentration was compared along two perpendicular survey lines passing through the centre of the hole. it can be seen that, along the vertical check line, the stress inversion due to the effect of motion was significantly reduced by the proposed motion compensation algorithm.

references
[1] w. thomson, on the dynamical theory of heat, earth environ. sci. trans. r. soc. edinburgh, vol. 20, no. 2, pp. 261–288, 1853.
[2] w. weber, über die specifische wärme fester körper, insbesondere der metalle, ann. phys., vol. 96, no. 10, pp. 177–213, 1830.
[3] j. m. dulieu-barton, s. quinn, c. eyre, p. r. cunningham, development of a temperature calibration device for thermoelastic stress analysis, in applied mechanics and materials, 2004, vol. 1, pp. 197–204. doi: 10.4028/www.scientific.net/amm.1-2.197
[4] j. m. dulieu-barton, thermoelastic stress analysis, opt. methods solid mech. a full-f. approach, pp. 345–366, 2012.
[5] n. harwood, w. m. cummings, applications of thermoelastic stress analysis, strain, vol. 22, no. 1, pp. 7–12, 1986. doi: 10.1111/j.1475-1305.1986.tb00014.x
[6] n. harwood, w. m. cummings, calibration of the thermoelastic stress analysis technique under sinusoidal and random loading conditions, strain, vol. 25, no. 3, pp. 101–108, 1989. doi: 10.1111/j.1475-1305.1989.tb00701.x
[7] l. capponi, lollocappo/pylia: digital lock-in analysis. zenodo, 2020. doi: 10.5281/zenodo.4043175
[8] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. martarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e. p. tomasini, investigating additive manufactured lattice structures: a multi-instrument approach, ieee trans. instrum. meas., 2019, pp. 2459–2467. doi: 10.1109/tim.2019.2959293
[9] r. montanini, g. rossi, a. quattrocchi, d. alizzio, l. capponi, r.
marsili, a. d. giacomo, t. tocci, structural characterization of complex lattice parts by means of optical non-contact measurements, in 2020 ieee international instrumentation and measurement technology conference (i2mtc), 2020, pp. 1–6. doi: 10.1109/i2mtc43012.2020.9128771
[10] l. capponi, j. slavič, g. rossi, m. boltežar, thermoelasticity-based modal damage identification, int. j. fatigue, vol. 137, aug. 2020, p. 105661. doi: 10.1016/j.ijfatigue.2020.105661
[11] l. capponi, r. marsili, g. rossi, t. zara, thermoelastic stress analysis on rotating and oscillating mechanical components, int. j. comput. eng. res., vol. 10, no. 6, 2020, pp. 2250–3005.
[12] t. sakagami, n. yamaguchi, s. kubo, t. nishimura, a new full-field motion compensation technique for infrared stress measurement using digital image correlation, j. strain anal. eng. des., vol. 43, no. 6, 2008, pp. 539–549. doi: 10.1243/03093247jsa360
[13] b. pan, k. qian, h. xie, a. asundi, two-dimensional digital image correlation for in-plane displacement and strain measurement: a review, meas. sci. technol., vol. 20, no. 6, 2009, p. 62001. doi: 10.1088/0957-0233/20/6/062001
[14] j. cantrell, s. rohde, d. damiani, r. gurnani, l. disandro, j. anton, a. young, a. jerez, d. steinbach, c. kroese, p. ifju, experimental characterization of the mechanical properties of 3d printed abs and polycarbonate parts, conf. proc. soc. exp. mech. ser., vol. 3, 2017, pp. 89–105. doi: 10.1007/978-3-319-41600-7_11
[15] m. l. silva, g. ravichandran, combined thermoelastic stress analysis and digital image correlation with a single infrared camera, j. strain anal. eng. des., vol. 46, no. 8, 2011, pp. 783–793. doi: 10.1177/0309324711418286
[16] j. l. barron, d. j. fleet, s. s. beauchemin, performance of optical-flow techniques, int. j. comput. vis., vol. 12, no. 1, 1994, pp. 43–77. doi: 10.1007/bf01420984

figure 20.
comparison of stress concentration in the uncompensated video, the compensated video and the fem model along the two check lines passing through the centre of the hole.

[17] b. d. lucas, t. kanade, an iterative image registration technique with an application to stereo vision, 1981.
[18] b. d. lucas, generalized image matching by the method of differences, phd thesis, carnegie mellon university, 1984.
[19] p. turaga, r. chellappa, a. veeraraghavan, advances in video-based human activity analysis: challenges and approaches, in advances in computers, vol. 80, elsevier, 2010, pp. 237–290. doi: 10.1016/s0065-2458(10)80007-5
[20] s. akpinar, f. n. alpaslan, video action recognition using an optical-flow based representation, in proceedings of the international conference on image processing, computer vision, and pattern recognition (ipcv), 2014, p. 1.
[21] t. fuse, e. shimizu, m. tsutsumi, a comparative study on gradient-based approaches for optical-flow estimation, int. arch. photogramm. remote sens., vol. 33, no. b5/1, part 5, pp. 269–276, 2000.
[22] s. baker, i. matthews, lucas-kanade 20 years on: a unifying framework, int. j. comput. vis., vol. 56, no. 3, pp. 221–255, 2004. doi: 10.1023/b:visi.0000011205.11775.fd
[23] w. k. pratt, correlation techniques of image registration, ieee trans. aerosp. electron. syst., no. 3, pp. 353–358, 1974.
[24] p. j.
burt, local correlation measures for motion analysis: a comparative study, 1982.
[25] g. allevi, l. casacanditella, l. capponi, r. marsili, g. rossi, census transform based optical-flow for motion detection during different sinusoidal brightness variations, in journal of physics: conference series, 2018, vol. 1149, no. 1, p. 12032. doi: 10.1088/1742-6596/1149/1/012032
[26] d. gorjup, j. slavič, a. babnik, m. boltežar, still-camera multiview spectral optical-flow imaging for 3d operating-deflection-shape identification, mech. syst. signal process., vol. 152, p. 107456, 2021. doi: 10.1016/j.ymssp.2020.107456
[27] j. javh, j. slavič, m. boltežar, experimental modal analysis on full-field dslr camera footage using spectral optical-flow imaging, j. sound vib., vol. 434, pp. 213–220, 2018. doi: 10.1016/j.jsv.2018.07.046
[28] g. farnebäck, polynomial expansion for orientation and motion estimation. linköping university electronic press, 2002.
[29] g. farnebäck, two-frame motion estimation based on polynomial expansion, in scandinavian conference on image analysis, 2003, pp. 363–370. doi: 10.1007/3-540-45103-x_50
[30] r. szeliski, computer vision: algorithms and applications. springer science & business media, 2010.
[31] g. bradski, a. kaehler, learning opencv: computer vision with the opencv library. o'reilly media, inc., 2008.
[32] a. eleftheriadis, a. jacquin, image and video segmentation, in advances in image communication, 1999, pp. 1–68.
[33] g. welch, g. bishop, an introduction to the kalman filter, 1995.
monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces

acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, pp. 1–9

nicola covre1, alessandro luchetti1, matteo lancini2, simone pasinetti2, enrico bertolazzi1, mariolino de cecco1
1 department of industrial engineering, university of trento, via sommarive 9, 38123 trento, italy
2 department of mechanical and industrial engineering, university of brescia, via branze 38, 25121 brescia, italy

section: research paper
keywords: monte carlo; volume estimation; affiliation criterion; cube explosion; point cloud
citation: nicola covre, alessandro luchetti, matteo lancini, simone pasinetti, enrico bertolazzi, mariolino de cecco, monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces, acta imeko, vol. 11, no. 2, article 32, june 2022, identifier: imeko-acta-11 (2022)-02-32
section editor: francesco lamonaca, university of calabria, italy
received november 22, 2021; in final form february 25, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project has received funding from the european union's horizon 2020 research and innovation program, via an open call issued and executed under project eurobench (grant agreement n° 779963).
corresponding author: nicola covre, e-mail: nicola.covre@unitn.it

abstract
this article proposes a state-of-the-art algorithm for estimating the 3d volume enclosed in a surface point cloud via a modified extension of the monte carlo integration approach. the algorithm consists of a pre-processing of the surface point cloud, a sequential generation of points managed by an affiliation criterion, and the final computation of the volume. the pre-processing phase allows a spatial reorientation of the original point cloud, the evaluation of the homogeneity of its point distribution, and its enclosure inside a rectangular parallelepiped of known volume. the affiliation criterion using the explosion of cube faces is the core of the algorithm; it handles the sequential generation of points and provides an effective extension of the traditional monte carlo method by introducing its applicability to discrete domains. finally, the final computation estimates the volume as a function of the total amount of generated points, the portion enclosed within the surface point cloud, and the parallelepiped volume. the developed method proves to be accurate with surface point clouds of both convex and concave solids, reporting an average percentage error of less than 7 %. it also shows considerable versatility in handling clouds with sparse, homogeneous, and sometimes even incomplete point distributions. a performance analysis is presented by testing the algorithm on surface point clouds obtained both from meshes of virtual objects and from real objects reconstructed using reverse engineering techniques.

1. introduction

estimating the volume enclosed in a three-dimensional (3d) surface point cloud is a widely explored topic in several scientific fields. with the increase of technologies for the virtual reconstruction of 3d environments and objects, many devices, such as kinect, lidar and realsense, make it possible to acquire a depth image with increasing accuracy and resolution [1]-[4]. several fields, such as mobile robotics [5], reverse prototyping [6], industrial automation, and land management, require accurate and efficient data processing to extract geometrical features from the real environment, such as distances, areas, and volumes. among the different geometric features, volume estimation has been presented as a challenging issue and widely studied with different approaches in the literature. among the literature contributions on object volume estimation based on 3d point clouds, chang et al. [7] used the slice method and a least-squares approach, achieving high accuracy by investigating mainly known and homogeneous solids. the same 3d point cloud volume calculation based on the slice method was applied by zhi et al. [8]. in general, the main limitation of the slice method for volume estimation is its dependence on the quality of the point cloud and its inability to handle complex shapes. bi et al. [10] and xu et al. [11] estimated canopy volume by using only the simple convex hull algorithm [12], with the problem of volume overestimation in the case of concave surfaces. lin et al. [13] improved the convex hull algorithm to handle concave polygons for the estimation of the tree crown volume, but their approach is still limited to providing a gross volume estimation that cannot be applied to complex objects with fine details. lee et al. [9] proposed a waste volume calculation using the triangular meshing method starting from the acquired point cloud. on one hand, this method is only as accurate as the acquired point cloud is good; on the other hand, it completely relies on mesh processing tools, such as meshlab [14], and it cannot work if the 3d reconstructed object is an open domain.
we propose an innovative and competitive method to compute the volume of an object based on 3d point clouds via a modified extension of the monte carlo integration approach. without requiring interpolation or mesh reconstruction of the surface point cloud, it can handle homogeneous and non-homogeneous point cloud surfaces, complex and simple shapes, as well as open and closed domains.

1.1. the monte carlo approach

traditional methods for numerical integration over a volume, such as the riemann integral [15] or cavalieri-simpson [16], partition the space into a dense grid, approximate the distribution in each cell with elements of known geometry, and compute the overall volume by summing up all the contributions. in contrast, within a previously defined interval containing the distribution of interest, the monte carlo method generates random points with uniform distributions along the different dimensions to estimate the integral [17]. as shown in figure 1, in the case of a 2d example we wish to calculate the area a_target of a closed surface. the geometric element under consideration (green line, s2dtargetobject) is enclosed within a 2d box (s2dbox) of known area a_box. the total amount (n) of randomly generated points (pgenerated) will fall inside s2dbox; some of them will fall outside s2dtargetobject (blue points), while the others will fall inside it (red points, n_insidepoints). in the 2d case, pgenerated is identified by its 2d cartesian coordinates (x_pgenerated, y_pgenerated). an affiliation criterion (most often expressed by a simple mathematical equation) allows the identification and counting of the points dropped inside and outside s2dtargetobject. in particular, these quantities are related by the following expression:

a_target = lim_(n→∞) (n_insidepoints / n) ∙ a_box    (1)

1.2. 3d extension of the monte carlo approach

as reported by newman et al. [18], this computational approach is particularly suitable for high-dimensional integrals.
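before moving to three dimensions, the 2d procedure of section 1.1 can be illustrated with a minimal numerical sketch (our own, not the authors' code), where the analytic "inside the circle" test plays the role of the affiliation criterion:

```python
import numpy as np

def mc_area(radius=1.0, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # uniform points in the bounding box s2dbox = [-r, r] x [-r, r]
    p = rng.uniform(-radius, radius, size=(n, 2))
    # affiliation criterion: analytic test "point inside the circle"
    inside = np.sum(p ** 2, axis=1) <= radius ** 2
    a_box = (2.0 * radius) ** 2
    # equation (1): (n_insidepoints / n) * a_box
    return inside.mean() * a_box
```

for the unit circle the estimate converges to π as n grows, as predicted by equation (1).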
for this reason, we extended the 2d monte carlo method described in the previous subsection to the calculation of 3d volumes starting from a discrete representation of the object's surface. each point cloud can have, within a certain range, variable resolution and spatial distribution homogeneity. in this case, n points are generated with uniform distribution along the three dimensions to estimate the volume of the unknown object (v_targetobject) inside the prismatic element (s3dbox). in the 3d case, pgenerated is identified by its 3d cartesian coordinates (x_pgenerated, y_pgenerated, z_pgenerated). v_targetobject is then calculated by counting the number of pgenerated that fell inside it (n_insidepoints). equation (1) becomes:

v_targetobject = lim_(n→∞) (n_insidepoints / n) ∙ v_box    (2)

usually, when the target object is represented by a continuous surface described by a mathematical equation, as in the 2d case, the affiliation criterion is expressed by a continuous mathematical model. it is, therefore, easy to determine when a point falls inside or outside s3dtargetobject. however, the problem becomes more difficult when s3dtargetobject is represented by the discrete distribution of points lying on its surface (s3dpointcloud), as shown in figure 2. in this case, it is difficult to determine whether a point falls within s3dpointcloud or not. moreover, when s3dpointcloud comes from a real acquisition, noise must also be taken into account: due to acquisition errors, not all the points of the cloud lie on s3dtargetobject. this paper is organized as follows:
• in the introduction we presented the problem of volume computation from point clouds, the state of the art, and our approach, by describing the traditional monte carlo method, its 3d extension, and its limitations.
• in the following section, we describe our algorithm for volume estimation of point clouds based on the monte carlo approach.
• in the third section, we present the results obtained for the validation of the algorithm, testing it on both virtual and real objects.
• in the final section, we draw the conclusions.

figure 1. 2d example of the monte carlo integral approach: in green, the geometric element under consideration (s2dtargetobject); in black, the 2d box (s2dbox) of known area; in red, the inside points; in blue, the outside points.

figure 2. extension of the monte carlo integral approach to the point cloud of a 3d sphere (s3dpointcloud) enclosed in a 3d box (s3dbox).

2. developed algorithm

the algorithm, reported as pseudocode in appendix a and explained below, takes the following parameters as input, decided by the user:
• the point cloud acquired from the surface of the object whose volume is to be computed.
• the number of points n with which the monte carlo volume estimation must be performed.
the algorithm is composed of a pre-processing function and a classification function. the pre-processing of the point cloud checks its orientation, encloses the cloud surface in a box of known volume (s3dbox), and performs a preliminary analysis of its distribution. the classification function is based on the "cube explosion" affiliation criterion, described in the following paragraphs.

2.1. pre-processing

given the total amount of points n, an efficient monte carlo approach should provide the smallest box which encloses the volume taken for the measurement. in fact, for a fixed n, the bigger s3dbox is, the lower the 3d point resolution and the worse the accuracy of the monte carlo method. hence, s3dbox is defined by taking the minimum and maximum points along each of the three principal directions x, y, z of s3dpointcloud. to further minimize the box dimensions, a prior re-orientation of s3dpointcloud is performed by applying principal component analysis (pca).
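the pca re-orientation and box construction of the pre-processing step can be sketched as follows. this is a minimal numpy illustration under our own naming; the authors' implementation may differ:

```python
import numpy as np

def pca_box(points):
    # re-orient the cloud onto its principal axes (pca), then take the
    # axis-aligned min/max to obtain the enclosing box s3dbox
    centered = points - points.mean(axis=0)
    _, axes = np.linalg.eigh(np.cov(centered.T))   # principal directions
    rotated = centered @ axes                       # cloud in pca frame
    extents = rotated.max(axis=0) - rotated.min(axis=0)
    v_box = float(np.prod(extents))                 # used later as v_box
    return rotated, v_box
```

re-orienting first makes the axis-aligned box tight, which for a fixed n improves the effective point resolution of the monte carlo sampling.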
once s3dbox is defined, its volume is computed and used at the end of the affiliation criterion as v_box. in the case of real objects, s3dpointcloud is collected by using depth cameras or other 3d scanners. the higher the resolution of the tool, the denser the point cloud. in the case of virtual objects, the point cloud is obtained by collecting the mesh nodes: the denser the mesh, the more homogeneous the point cloud distribution. however, the homogeneity of the point distribution is a factor that cannot be taken for granted. for this reason, a preliminary statistical analysis of s3dpointcloud is carried out to obtain the parameters needed for the affiliation criterion, the core of the monte carlo method. in particular, the parameters are obtained from the quantile distribution q of the distances between each point and its closest neighbors.

2.2. affiliation criterion using the "explosion" of cube faces

the proposed affiliation criterion iteratively defines whether each pgenerated placed inside s3dbox by the monte carlo method belongs to the external or internal domain of s3dpointcloud. the affiliation criterion that we have developed is based on the concept of "explosions of cube faces": a cube of known edge (l_cube) is generated around each pgenerated, and each one of its faces is iteratively extruded to determine whether and how many times it encounters s3dpointcloud (figure 3). each one of the 6 face extrusions corresponds to a specific direction η along one of the 3 main directions x, y, z and returns a binary judgment (j_η). j_η is 0 or 1 if the point is supposed to be outside or inside s3dpointcloud, respectively. eventually, by taking the mode of all j_η, the final judgment (j) is assessed, and the point affiliation (internal or external) is defined. each cube is oriented by using the same reference system as s3dpointcloud.
at each iteration, the procedure selects the direction η and extrudes the relative faces of the cube along their outgoing normal. initially, l_cube is determined by the following empirical equation:

l_cube = q_0.5 ∙ 3.5^((q_0.85 / q_0.5) − 1)    (3)

where q_0.5 and q_0.85 are the quantiles at 50 % and 85 % of the distribution of the distances between each point and its closest neighbors, respectively. equation (3) and the chosen quantiles are obtained by an empirical validation of the performances. in particular, the choice of q_0.5 considers the median distance of two consecutive points, while q_0.85 highlights the sparse portions of s3dpointcloud and avoids initializing a small cube whose extruded faces pass through s3dpointcloud without touching its points. histograms of the distribution of the relative distances between each point and its closest neighbors for two different s3dpointcloud are shown in figure 4. in particular, the first histogram refers to a non-homogeneous point cloud, specifically the pokémon (mew), while the second refers to the homogeneous cloud of the geometric solid sphere, both reported in table 1. from the first distribution it is possible to observe that the ratio between q_0.85 and q_0.5 is 2.74, while the quantile ratio of the second distribution is 1.01. the quantile ratio shows the proportion of the sparse portions of s3dpointcloud with respect to the distribution of the average distances. in mew's s3dpointcloud the ratio is higher because the distribution is strongly non-homogeneous, with a dense clustering of points on the eyes and a sparse one on the belly. in the sphere's s3dpointcloud the ratio is close to one, as the difference between q_0.85 and q_0.5 is almost null due to the homogeneity of the point cloud distribution. each face extrusion may intercept a sub-portion of s3dpointcloud points (pintercepted).
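the neighbor-distance quantile analysis behind equation (3) can be sketched as follows (a brute-force illustration with our own naming, adequate for small clouds; a k-d tree would be used for large ones):

```python
import numpy as np

def neighbor_quantile_ratio(points):
    # distance from each point to its closest neighbor (brute force)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    nearest = d.min(axis=1)
    q50, q85 = np.quantile(nearest, [0.5, 0.85])
    # a ratio close to 1 indicates a homogeneous point distribution
    return q85 / q50
```

a perfectly regular grid gives a ratio of exactly 1, consistent with the sphere cloud's value of 1.01, while a strongly non-homogeneous cloud such as mew's yields a much larger ratio.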
if the total amount of intercepted points exceeds the threshold value (th_intercepted) of 3 points, l_cube is reduced by l_reduction and the extrusion is repeated. the criterion for choosing th_intercepted equal to 3 points depends on the clusterization checks introduced to strengthen the algorithm, as explained at the end of this section. the value of l_reduction has been empirically set equal to 10 % of the current value of l_cube as a compromise between final accuracy and computational time: the smaller l_reduction, the higher the final accuracy but the longer the computational time.

figure 3. example of explosion in the z-direction (η = z) of one cube's face from the position of a pgenerated: interception of 2 clusters of points (in green, pintercepted).

next, a clustering algorithm subdivides the pintercepted into different clusters of points (pcluster) along η. j_η assumes the value 1 or 0 if the total amount of clusters is, respectively, odd or even. the clusterization is performed by re-ordering pintercepted along η. by computing the coordinate differences along η between each point and the next, it is possible to obtain a sequence of distances. ideally, considering pcluster orthogonal to η, this difference within each pcluster should be null, as all the pcluster points lie on the same orthogonal plane. however, in real applications, several issues may occur, such as a slight inclination of pcluster with respect to η or random noise affecting the pcluster distribution along η. to overcome these problems, a threshold (th_cluster) is set empirically equal to q_0.5. therefore, considering the pintercepted re-ordered along η, whenever the distance between two consecutive points is less than th_cluster, the two points belong to the same pcluster; otherwise, a new pcluster begins. a further issue affecting the clusterization occurs when the extrusion intercepts s3dpointcloud tangentially. in this case, a misleading clusterization is performed and further checks are needed to make the affiliation criterion more robust. in particular, this can be detected by performing an analysis of variance over each pcluster: if the variance of one single pcluster is greater than th_cluster, the entire analysis along η is compromised and j_η needs to be discarded. to make the affiliation criterion more robust, a limitation on the maximum number of clusters encountered along each extrusion is also introduced. in particular, only those directions η are selected that have a total amount of clusters less than or equal to 1. this reduces the probability of encountering misleading clusters, as in the case of noisy point clouds acquired from real objects; the noise mostly appears as isolated points outside s3dpointcloud, and rarely inside it. furthermore, this justifies the choice of th_intercepted, which refers to the minimum number of points that define a plane.

table 1. algorithm outputs with virtual objects (cube, sphere, arm, hand, pokémon mew): for each virtual object, the original virtual model, the original point cloud (s3dtargetobject), the meshlab reconstruction, and our output are shown.

3. results

this section reports the validation of the extended monte carlo algorithm on real and virtual objects considering different shapes. a total of nine objects, including regular geometric solids, such as spheres or prisms, as well as more complex shapes, such as human hands, arms, the pokémon (mew), and the 3d scans of ancient bronze statuettes of mythological figures, were considered for this discussion.
Point clouds of real objects were acquired from the real environment with an Azure Kinect ToF camera and a Konica Minolta Vivid VI-9i 3D scanner for reverse engineering. The volumes used as reference for the validation of the measurements on virtual and real objects were computed, respectively, from the virtual mesh and by volume estimation through immersion in water [19]. The Monte Carlo algorithm accuracy increases with the total amount of generated points, as can be observed in Figure 5 and Figure 6. In particular, Figure 6 reports, as a box plot, the mean and variance of the relative error distribution as a function of the number of P_generated. The error is particularly high when few P_generated are generated and gradually decreases as their number increases. For both virtual and real solids, depending on the resolution of the point cloud and on the level of detail, the asymptotic percentage error computed with respect to the reference volume is below 7 % when the total amount of P_generated is greater than 42875; conversely, with a low number of P_generated the accuracy is low. It is also worth noting that the variance of the measurements decreases as the number of P_generated increases. This indicates that, for all the objects considered, even for convex, non-uniform, and folded point clouds, the relative error decreases and converges with the same trend.

Figure 4. Distribution of distances between each point and its closest neighbours for (a) the Pokémon Mew and (b) the sphere point clouds.

Figure 5. Error distribution of the Pokémon Mew volume estimation in % with respect to the increasing number of generated points with the explosion cubes criterion.

Figure 6. Box plot of the error distribution considering all nine objects along with the increasing number of generated points with the explosion cubes criterion.
As a general rule, the performance of the algorithm can be driven by selecting the number of generated points; on the other hand, the more P_generated there are, the longer the computational time. The average times spent by the proposed algorithm are shown in Figure 7. The tests were performed on a MacBook Pro (2 GHz Intel Core i5 quad-core, 16 GB of RAM) in the MATLAB R2020b environment. The average time grows non-linearly as the number of points increases, as shown in Figure 7. However, for the same accuracy, the average computational time is less than or comparable to the time taken by the volume estimation methods reported in the literature. Therefore, the user can choose the total amount of P_generated with which the Monte Carlo method has to be executed as a trade-off between the desired accuracy (Figure 6) and the computational time (Figure 7). In addition, due to the asymptotic behaviour of the error, the increase in performance becomes negligible beyond a threshold. For these reasons, it is convenient to choose a total amount of P_generated just above the value at which the estimated volume stabilises (in our case 42875 P_generated). However, the greater the total amount of P_generated, the higher the resolution of the point cloud generated by the Monte Carlo algorithm. This can be used as visual feedback to evaluate the goodness of the affiliation criterion and to compare it with the algorithms that reconstruct the mesh.

Table 2. Algorithm outputs with real objects (columns: real object name, real object, original point cloud (S3DTargetObject), MeshLab reconstruction, our output; rows: Cerbero, Ballerina, Head, Mercurio; image content not reproduced).

Figure 7. Computational times of the proposed algorithm for varying numbers of generated points (P_generated).
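The underlying estimate — the fraction of uniformly generated points judged inside, multiplied by the bounding-box volume — can be sketched with an analytic stand-in for the cube-explosion affiliation criterion. This is an illustration only (`mc_volume` and `in_sphere` are hypothetical names, and the unit-sphere membership test replaces the paper's point-cloud judgment):

```python
import random

def mc_volume(inside, box_min, box_max, n, seed=0):
    """Monte Carlo volume estimate: fraction of uniform samples in the
    bounding box judged 'inside', times the box volume."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in zip(box_min, box_max):
        box_vol *= hi - lo
    hits = 0
    for _ in range(n):
        p = [rng.uniform(lo, hi) for lo, hi in zip(box_min, box_max)]
        if inside(p):
            hits += 1
    return box_vol * hits / n

# analytic stand-in for the affiliation criterion: unit sphere,
# true volume 4/3*pi ~ 4.18879
in_sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
```

Running `mc_volume(in_sphere, (-1, -1, -1), (1, 1, 1), n)` for increasing `n` reproduces the behaviour of Figures 5 and 6: a noisy estimate for small `n` that converges asymptotically as `n` grows.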
As shown in the last column of both Table 1 and Table 2, the representation of the inner points with the developed method returns a good representation of the original point cloud, S3DPointCloud. On the contrary, mesh reconstructions are not always reliable. A mesh reconstruction using the default parameters of the Poisson reconstruction method [20] in the MeshLab environment is shown in the fourth column of the same tables. As can be observed, good results are returned only for uniformly distributed point clouds and regular shapes, while the same cannot be said for complex shapes or non-uniform point clouds, such as the Pokémon Mew and the hand, with consequent negative repercussions on volume calculation. Another important aspect to consider when the volume is computed is the possibility of working with discontinuous and partially open surfaces, as in Figure 8(a), where the acquired point cloud has large discontinuities around the elbow. Most of the volume estimation algorithms based on mesh reconstruction need manual fitting and adjustments to manage the missing clusters of points; otherwise, the measurement cannot be pursued. On the contrary, the proposed affiliation criterion for the Monte Carlo volume integration is robust to discontinuities thanks to the democratic judgment of the 6 cube faces: even if along a few directions the point cloud turns out to be open and those face extrusions return J_η = 0, the final judgment J will still be equal to 1. Table 3 shows the actual volume of the objects compared with that obtained with MeshLab, when possible, and with our proposed method, together with its percentage error on the measurement. Moreover, given the choice of the external box and maintaining its ratio to the computed volume, the percentage error of any object does not change when its size is scaled.
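The per-direction parity judgment and the 6-face democratic vote can be sketched as follows. This is a minimal sketch: `eta_judgment` and `final_judgment` are illustrative names, the gap rule follows the description in section 2, and the variance check is simplified to a direct comparison against th_cluster:

```python
import numpy as np

def eta_judgment(coords, th_cluster):
    """Judgment along one direction eta: sort the eta-coordinates of
    P_intercepted, start a new cluster wherever the gap between
    consecutive points reaches th_cluster, discard the direction
    (return None) if any cluster's variance exceeds th_cluster,
    otherwise return 1 for an odd and 0 for an even cluster count."""
    s = np.sort(np.asarray(coords, dtype=float))
    breaks = np.flatnonzero(np.diff(s) >= th_cluster) + 1
    clusters = np.split(s, breaks)
    if any(np.var(c) > th_cluster for c in clusters):
        return None  # tangential interception suspected: discard J_eta
    return len(clusters) % 2

def final_judgment(j_etas):
    """Democratic vote over the (up to 6) face directions: the generated
    point is judged inside (J = 1) if the majority of the non-discarded
    per-direction judgments are 1. Discarded directions do not vote."""
    votes = [j for j in j_etas if j is not None]
    return 1 if votes and sum(votes) > len(votes) / 2 else 0
```

A point whose extrusion meets two clusters along one direction (outside along that axis) but single clusters along the others is still judged inside, which is the robustness to open surfaces discussed above.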
This means that the absolute uncertainty on the measurement is proportional to the percentage error multiplied by the computed volume.

4. Conclusions

This paper proposes an extension of the 3D Monte Carlo method for calculating the volumes of objects starting from their surface point clouds. The overall algorithm includes a preprocessing analysis that re-orients and evaluates the point cloud, an affiliation criterion based on the explosion of cube faces to discern the inner from the outer points of the Monte Carlo method, and a final volume estimate as described in equation (2). As the reported results also include convex, complex, and folded surfaces, such as the Pokémon Mew or the Cerbero point cloud, it was possible to show that the cube explosion affiliation criterion is stable and reliable, returning consistent and repeatable measurements compared with gold-standard software for volume measurement, such as MeshLab. The algorithm proves to be accurate with point clouds of different objects, both in terms of shape and of distribution of points. The performance was tested on the surface point clouds of 9 virtual and real objects, reporting an average percentage error on the tested samples lower than 7 % with a computational time of a few minutes, depending on the desired accuracy.

Figure 8. (a) Open point cloud from a real acquisition of a human arm, (b) Monte Carlo representation of inside generated points, (c) Monte Carlo representation of outside generated points.

Table 3. Actual volume of the objects and its estimation with MeshLab and our method.

3D object | Actual volume (dm³) | MeshLab volume (dm³) | Monte Carlo volume (dm³) | Monte Carlo error (%)
Cube | 1.00 | 1.00 | 0.99 | 0.10
Sphere | 4.19 | 4.19 | 4.10 | 2.15
Arm | 1.33 | n/a | 1.39 | 4.51
Hand | 0.412 | n/a | 0.409 | 0.73
Pokémon Mew | 0.539 | n/a | 0.548 | 1.67
Cerbero | 4.08 | n/a | 4.18 | 2.45
Ballerina | 1.24 | n/a | 1.18 | 4.84
Head | 4.54 | 4.52 | 4.50 | 0.88
Mercurio | 4.53 | 4.57 | 4.66 | 2.82

References
[1] K. Khoshelham, S. Elberink, Accuracy and resolution of Kinect depth data for indoor mapping applications, Sensors, vol. 12, no. 2, 2012, pp. 1437-1454. doi: 10.3390/s120201437
[2] K. Khoshelham, Accuracy analysis of Kinect depth data, ISPRS Workshop Laser Scanning, Calgary, Canada, 29-31 August 2011, pp. 133-138. doi: 10.5194/isprsarchives-XXXVIII-5-W12-133-2011
[3] J. Vaze, J. Teng, G. Spencer, Impact of DEM accuracy and resolution on topographic indices, Environmental Modelling & Software, vol. 25, no. 10, 2010, pp. 1086-1098. doi: 10.1016/j.envsoft.2010.03.014
[4] L. Keselman, J. Woodfill, A. Jepsen, A. Bhowmik, Intel RealSense stereoscopic depth cameras, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, Hawaii, 21-26 July 2017, pp. 1267-1276. doi: 10.1109/CVPRW.2017.167
[5] D. Borrmann, A. Nüchter, M. Ðakulović, I. Maurović, I. Petrović, D. Osmanković, J. Velagić, A mobile robot based system for fully automated thermal 3D mapping, Advanced Engineering Informatics, vol. 28, no. 4, 2014, pp. 425-440. doi: 10.1016/j.aei.2014.06.002
[6] D. Li, X. Feng, P. Liao, H. Ni, Y. Zhou, M. Huang, Z. Li, Y. Zhu, 3D reverse modeling and rapid prototyping of complete denture, in Frontier and Future Development of Information Technology in Medicine and Education, Springer, Dordrecht, 2014, pp. 1919-1927. doi: 10.1007/978-94-007-7618-0_226
[7] W. Chang, C. Wu, Y. Tsai, W. Chiu, Object volume estimation based on 3D point cloud, 2017 International Automatic Control Conference (CACS), Pingtung, Taiwan, 12-15 November 2017, pp. 1-5. doi: 10.1109/CACS.2017.8284244
[8] Y. Zhi, Y. Zhang, H. Chen, K. Yang, H. Xia, A method of 3D point cloud volume calculation based on slice method, International Conference on Intelligent Control and Computer Application (ICCA 2016), Atlantis Press, Zhengzhou, China, January 2016, pp. 155-158. doi: 10.2991/icca-16.2016.35
[9] Y. Lee, S. Cho, J. Kang, A study on the waste volume calculation for efficient monitoring of the landfill facility, in Computer Applications for Database, Education, and Ubiquitous Computing, Springer, Berlin, Heidelberg, 2012, ISBN 978-3-642-35602-5, pp. 158-169.
[10] Y. Bi, L. Qi, S. Chen, L. Li, S. Liu, Canopy volume measurement method based on point cloud data, Science & Technology Review, Beijing, China, vol. 31, no. 27, 2013, pp. 31-36. [In Chinese] doi: 10.3981/j.issn.1000-7857.2013.27.004
[11] W. Xu, Z. Feng, Z. Su, H. Xu, Y. Jiao, O. Deng, An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data, Spectroscopy and Spectral Analysis, vol. 34, no. 2, 2014, pp. 465-471. doi: 10.3964/j.issn.1000-0593(2014)02-0465-07
[12] G. Klette, A recursive algorithm for calculating the relative convex hull, 25th International Conference of Image and Vision Computing, IEEE, New Zealand, 8-9 November 2010, pp. 1-7. doi: 10.1109/IVCNZ.2010.6148857
[13] W. Lin, Y. Meng, Z. Qiu, S. Zhang, J. Wu, Measurement and calculation of crown projection area and crown volume of individual trees based on 3D laser scanned point-cloud data, International Journal of Remote Sensing, vol. 38, no. 4, 2017, pp. 1083-1100. doi: 10.1080/01431161.2016.1265690
[14] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, G. Ranzuglia, MeshLab: an open-source mesh processing tool, in Eurographics Italian Chapter Conference, Salerno, Italy, 2008, pp. 129-136. doi: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136
[15] R. McLeod, The Generalized Riemann Integral, vol. 20, American Mathematical Society, 1980.
[16] M. Kiderlen, K. Petersen,
The Cavalieri estimator with unequal section spacing revisited, Image Analysis & Stereology, vol. 36, no. 2, 2017, pp. 133-139. doi: 10.5566/ias.1723
[17] W. Press, S. Teukolsky, W. Vetterling, B. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1992.
[18] M. Newman, G. Barkema, Monte Carlo Methods in Statistical Physics, chapters 1-4, vol. 24, Oxford University Press, New York, USA, 1999.
[19] D. M. K. S. Kaulesar Sukul, P. T. den Hoed, E. J. Johannes, R. van Dolder, E. Benda, Direct and indirect methods for the quantification of leg volume: comparison between water displacement volumetry, the disk model method and the frustum sign model method, using the correlation coefficient and the limits of agreement, Journal of Biomedical Engineering, vol. 15, no. 6, 1993, pp. 477-480. doi: 10.1016/0141-5425(93)90062-4
[20] M. Kazhdan, M. Bolitho, H. Hoppe, Poisson surface reconstruction, Proc. of the Fourth Eurographics Symposium on Geometry Processing, vol. 7, 2006, pp. 61-70.
Online [Accessed 21 April 2022] https://hhoppe.com/poissonrecon.pdf

Appendix A

Analysis of the mathematical modelling of a static expansion system

Carlos Mauricio Villamizar Mora¹, Jonathan Javier Duarte Franco¹, Victor José Manrique Moreno¹, Carlos Eduardo García Sánchez¹

¹ Grupo de Investigación en Fluidos y Energía, Corporación Centro de Desarrollo Tecnológico del Gas, Bucaramanga, Colombia

Acta IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, pp. 185-191

Section: Research paper

Keywords: modelling; pressure; static expansion; uncertainty

Citation: Carlos Mauricio Villamizar Mora, Jonathan Javier Duarte Franco, Victor José Manrique Moreno, Carlos Eduardo García Sánchez, Analysis of the mathematical modelling of a static expansion system, Acta IMEKO, vol. 10, no.
3, Article 25, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-25

Section Editor: Francesco Lamonaca, University of Calabria, Italy

Received January 29, 2021; in final form July 9, 2021; published September 2021.

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was funded by Colombia's Servicio Nacional de Aprendizaje (SENA) through the special cooperation agreement no. 0233 of 2018.

Corresponding author: Carlos Eduardo García Sánchez, e-mail: cgarcia@cdtdegas.com

1. Introduction

Pressure measuring instruments, like any other measuring device, require periodic calibration to monitor changes in their performance and to guarantee their comparability with other meters [1]. In simple terms, a calibration consists of establishing a relationship between the values given by measurement standards and those given by an instrument under test [2]. In the case of vacuum pressure gauges, that is, gauges that measure absolute pressure values lower than atmospheric pressure, a system that can produce specific vacuum pressure values is required, given the importance of comparing the measurements given by the standard and by the meter under test at different values of the measured variable [3]. In the calibration process it is of utmost importance that the specific pressure values that are generated have a low uncertainty. Uncertainty is a characteristic of any measurement, indicating the level of doubt about the reported value [4]. In this way, a better comparability of meters that have been calibrated with the process in question can be guaranteed. Currently, Colombia lacks absolute pressure calibration services in the medium and high vacuum regions.
For this reason, the Centro de Desarrollo Tecnológico del Gas (CDT de Gas) has developed a static expansion system, which allows the generation of pressures in the medium and high vacuum ranges, making it possible to calibrate pressure gauges in those regions. This type of system has been implemented in multiple laboratories worldwide. The present study shows the mathematical design process of the system, through an evaluation of the possible models to represent the behaviour of the gas inside the system, and the use of uncertainty to define restrictions on the input quantities of the system.

Abstract: Static expansion systems are used to generate pressures in medium and high vacuum and are used in the calibration of absolute pressure meters in these pressure ranges. In the present study, the suitability of different models to represent the final pressures in a static expansion system with two tanks is analysed. It is concluded that the use of the ideal gas model is adequate in most simulated conditions, while the assumption that the residual pressure is zero before expansion presents problems under certain conditions. An uncertainty analysis of the process is carried out, which evidences the high importance of the uncertainty of a first expansion on subsequent expansion processes. Finally, an uncertainty-based analysis of the expansion system is carried out to estimate the effect of the metrological characteristics of the measurements of the input quantities. Said design process can make it possible to determine a set of restrictions on the uncertainties of the input quantities.

2. State of the art

2.1. Static expansion systems

The pressure region between absolute zero (total absence of molecules) and atmospheric pressure is called "vacuum". In turn, vacuum is classified as coarse (from 3 000 Pa to atmospheric pressure), medium (between 0.1 Pa and 3 000 Pa), high (from 1 · 10⁻⁷ Pa to 0.1 Pa) and ultra-high (less than 1 · 10⁻⁷ Pa) [5]. In general, there is no pressure measurement technology that covers all regions of interest [6]. Among the most accurate equipment for generating pressures in the medium and high vacuum ranges are static expansion systems, with which pressures as low as 10⁻⁶ Pa can be obtained [7]-[9]. Several metrology laboratories and national metrology institutes have developed static expansion systems [10]-[13]. The generation of vacuum pressures using static expansion of a gas is a mature technology [14], and much of the recent development in the topic has been related to a careful evaluation of possible causes of error and to the estimation of the uncertainty of the final pressure [8], [10], [15]. Calibration and measurement capabilities with expanded uncertainties lower than 0.3 % (k = 2) using static expansion systems have been reported [15]. Nitrogen is commonly used as the gas for this type of calibration, although other inert gases could also be used [3], [7], [16]. Static expansion systems are sets of tanks of different dimensions, connected by pipes and valves. To generate low pressures with high precision, a volume of gas at a defined pressure is allowed to expand into a larger volume, previously at a pressure as close to zero as possible [17]. Figure 1 presents a simplified diagram of the static expansion process, using a small tank (where the initial pressure is set) and a large tank (to which the equipment to be calibrated is connected).
In Figure 1 and in the equations included in this work, V_P is the volume of the small tank, V_G is the volume of the large tank, T_i is the initial temperature of the process, T_f is the final temperature of the process, P_i is the initial pressure of the process in the small tank, P_i,G is the initial pressure of the process in the large tank, which should be as close as possible to zero, and P_f is the final pressure of the process. To further reduce the pressure, it is possible to repeat the expansion process, using as initial pressure the one resulting from the previous expansions [8]; the lower achievable limit is imposed by the level of vacuum that can be generated in the large tank, and by the effects of sorption and degassing in the tanks and in the instruments under test [18]. Static expansion systems can be used as primary calibration standards, calculating the final pressure from the initial pressure and the volume ratios of the gas expansion processes performed. They can also be used to generate pressure while using a pressure gauge as the calibration reference, in which case the calibration is by direct comparison [3], [16], [19]-[22]. It is important to note that several high vacuum pressure measurement technologies, such as Pirani gauges, exhibit high non-linearity with respect to pressure [3], which usually implies requiring several calibration points in each order of magnitude of pressure. Table 1 presents the fundamental characteristics of some static expansion systems developed by various national metrology institutes. Regarding the modelling of the process, some institutions have chosen to use the ideal gas model [13], [24], [25], while others have proposed the use of the virial equation as a real gas model for the expansion process [5], [7], [8], [23].
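Under the ideal-gas, zero-residual-pressure simplification of equation (1) of this paper, chained expansions can be sketched as follows (`expand` and `repeated_expansions` are illustrative names, not the authors' code):

```python
def expand(p_i, v_p, v_g, t_i, t_f):
    """Final pressure after one static expansion, ideal gas,
    residual pressure in the large tank neglected (equation (1))."""
    return p_i * v_p / (v_p + v_g) * t_f / t_i

def repeated_expansions(p0, v_p, v_g, n, t_i=293.15, t_f=293.15):
    """Chain n expansions: each step starts from the pressure left
    by the previous one, so the pressure drops geometrically with n."""
    p = p0
    for _ in range(n):
        p = expand(p, v_p, v_g, t_i, t_f)
    return p
```

With a 1:20 volume ratio, each isothermal expansion divides the pressure by 20, which is why a few consecutive expansions reach the medium and high vacuum ranges from an easily measurable starting pressure.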
One rather generalised assumption is that the initial pressure in the calibration tank is zero, although the question remains whether this assumption stays valid as the final pressure becomes smaller (that is, as the vacuum increases). Equation (1) presents the calculation of the pressure after a static expansion process, modelling the substance as an ideal gas and neglecting the initial pressure in the large tank [10], [13], [24], [25]:

P_f = P_i · V_P/(V_P + V_G) · T_f/T_i . (1)

On the other hand, (2) shows the calculation of the final pressure obtained with a static expansion, based on the virial expansion truncated at the second term and neglecting the initial pressure in the large tank [5], [7], [8], [23]:

P_f = P_i · V_P/(V_P + V_G) · T_f/T_i · [1 + B_f·P_f/(R·T_f)] / [1 + B_i·P_i/(R·T_i)] . (2)

In (2) and in the following equations, R is the molar gas constant, equal to 8.314 462 618 J mol⁻¹ K⁻¹, B_f is the second virial coefficient of nitrogen at the conditions of the end of the process and B_i is the second virial coefficient of nitrogen at the conditions of the beginning of the process. The second virial coefficient is a function of the substance or mixture of substances, and a function of temperature. Another important aspect related to the initial pressure in the calibration tank is that reaching the residual pressure, i.e. the minimum pressure achievable in the calibration chamber, usually requires pumping for many hours and baking the tank [16]. The residual pressure is limited by the pumping speed, by leaks, by gas desorption from the materials exposed to vacuum, and by the cleanliness of the test gauges [1]. Baking refers to heating the chamber, to about 200 °C, to desorb the gas from the internal walls of the materials, a procedure necessary to maintain ultra-high vacuum and beyond [18], [19].

Table 1. Examples of static expansion systems developed in different national metrology institutes. It is not an exhaustive list: the PTB has another static expansion system in addition to the one mentioned here, and systems such as those of KRISS (South Korea) or NPL (England) are not included either.

Institution | Approximate tank volumes (l) | Pressure range (Pa) | Reference
L'Istituto Nazionale di Ricerca Metrologica (INRIM) | 0.01, 0.5 and 68 | 0.1 – 1 000 | [4]
Centro Nacional de Metrología (CENAM) | 0.5, 1, 50 and 100 | 0.000 01 – 1 000 | [23]
Centro Español de Metrología (CEM) | 0.5, 1, 1, 100 and 100 | 0.000 1 – 1 000 | [13]
Physikalisch-Technische Bundesanstalt (PTB) | 0.017, 0.017, 1, 20 and 233 | 0.000 001 – 1 000 | [24]
TÜBİTAK-Ulusal Metroloji Enstitüsü (UME) | 0.15, 0.15, 0.7, 15, 15 and 72 | 0.000 9 – 1 000 | [25]

Figure 1. Static expansion process with two tanks. The initial state of the expansion process is shown on the left, and the final state on the right. It is assumed that there is spatial homogeneity of temperature in both tanks, but not necessarily temporal homogeneity.

Other aspects that have been studied in static expansion systems are the determination of tank volumes by methods other than gravimetry [13], [24] and the effect of temperature inhomogeneity in the tanks on the process [8].

2.2. Uncertainty

Uncertainty is the name given to the level of doubt that a measurement result carries [2], [4], [12]. The two most common ways to report it are the standard uncertainty, which represents the standard deviation of the probability distribution with which the measurement result is modelled, and the expanded uncertainty, which is half the length of a coverage interval around the measurement result with a specified coverage probability (for example, 95 %). The most widely used method to estimate the uncertainty of a measurement result is the GUM method [4].
The uncertainty estimation process requires the clear establishment of the measurement model, which expresses how the measurand (the quantity to be measured) is calculated from its input quantities. Subsequently, in the GUM method, the standard uncertainty u(y) of the measurand y is estimated from the uncertainties u(x₁), u(x₂), …, u(xₙ) of the input quantities X₁, X₂, …, Xₙ using the measurement model y = f(X₁, X₂, …, Xₙ). Neglecting the correlation between the input quantities and the higher order terms, the simplest version of the GUM method is obtained, in which the standard uncertainty of the measurand is estimated according to (3) [4]:

u(y) = √[(∂f/∂x₁)² u²(x₁) + (∂f/∂x₂)² u²(x₂) + … + (∂f/∂xₙ)² u²(xₙ)] . (3)

3. Methodology

In a two-tank static expansion process, the final pressure depends on the initial pressure in the small tank, the initial pressure in the large tank, the volumes of the tanks, the initial and final temperatures, and the nature of the gas being expanded. The model used, however, may differ according to the assumptions made about the process, which can lead to neglecting the effect of some variables. As mentioned in the state of the art, different institutions have opted for different models to calculate the pressures after the expansion processes. In the present work, three simplified models were compared against the complete model for the pressure resulting from an expansion process, to define which of the models can be considered adequate to calculate the pressure after one or more expansions. The most complete model of the process is based on a real gas model and considers both the initial pressure in the large tank and the inhomogeneity of temperatures at the beginning and at the end.
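Equation (3) can be evaluated numerically without hand-deriving the sensitivity coefficients. The sketch below (illustrative, not the authors' implementation) approximates each ∂f/∂xᵢ by a central difference and combines the contributions in quadrature:

```python
import math

def gum_uncertainty(f, x, u, h=1e-6):
    """First-order GUM propagation, no correlations (equation (3)):
    u(y)^2 = sum_i (df/dx_i)^2 * u(x_i)^2, with central-difference
    sensitivity coefficients around the point x."""
    u2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        dx = h * max(abs(x[i]), 1.0)   # step scaled to the variable
        xp[i] += dx
        xm[i] -= dx
        ci = (f(*xp) - f(*xm)) / (2 * dx)   # sensitivity coefficient
        u2 += (ci * u[i]) ** 2
    return math.sqrt(u2)
```

For example, applying it to the model 4 relation `f = lambda p, vp, vg: p * vp / (vp + vg)` yields the contribution of each input quantity to the uncertainty budget of the final pressure.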
It is well known that the ideal gas model represents the behaviour of a gas when P → 0, since at vanishing pressure the assumptions of said model would be exactly fulfilled (the volumes of the molecules are negligible with respect to the total volume of the gas, the intermolecular forces tend to zero and molecular collisions are perfectly elastic) [26]. The ideal gas model is a convenient limiting case, which can be deduced from theoretical considerations, but it does not accurately represent the gas-phase behaviour of pure substances or mixtures at pressures other than zero [27]. Among the real gas models that have been developed, the virial equation of state has the desirable characteristic that its parameters can be related to intermolecular forces [27]. Considering the above, in the present work a model based on the virial equation to represent the behaviour of the substance undergoing expansion was established as the reference model for the final pressure after expansion. Spatial (although not temporal) homogeneity of temperature was assumed, considering that the static expansion system will eventually operate under controlled environmental conditions. Two simplifications were evaluated: (1) using an ideal gas model instead of a real gas model, and (2) neglecting the initial pressure in the calibration tank. The main reason supporting the use of the ideal gas model is that it is based on assumptions about the behaviour of gaseous substances that are approximately satisfied at very low pressures [26]. Additionally, it is common to use nitrogen as the gas inside the expansion system, and the virial coefficient for this gas is very small; this coefficient may be relevant for other, heavier inert gases [14]. Regarding the initial pressure in the large tank, all the references consulted [5], [7], [8], [10], [13], [23], [24], [25] neglect it, but it is reasonable to wonder how much of an impact this assumption can have.
In this way, four models were compared. Model 1 is the model without simplifications and was therefore taken as the reference model; it is presented at the beginning of Section 4 (Results and discussion). Model 2 is based on the ideal gas model and considers the initial pressure in the large tank. Model 3 is based on the real gas model but neglects the initial pressure in the large tank. Model 4 contains both simplifying assumptions, that is, it is based on the ideal gas model and uses zero as the initial pressure value in the calibration tank. To make the comparisons, the final pressure was calculated with each of the four models, and the error in the pressure value of each of the last three models with respect to the reference (model 1) was determined. The error of models 2, 3 and 4 was calculated with (4), where E_i is the percentage error made by the i-th model, i = 2, 3, 4, P_f,1 is the final pressure calculated with model 1, and P_f,i is the final pressure calculated with the i-th model:

E_i = [(P_f,i − P_f,1)/P_f,1] · 100 % . (4)

In order to consider a wide range of conditions, comparisons were made for all possible combinations of two volume ratios, two differences between initial and final temperatures, two initial pressures in the small tank, two initial pressures in the large tank, and four numbers of consecutive expansions. After determining whether any of the simplified models was appropriate for the two-tank static expansion system, the GUM method was applied, without correlations or higher order terms, to estimate the uncertainty using the chosen model as the measurement model. The uncertainty budget, that is, the contribution of the different input quantities to the final pressure, was evaluated for different uncertainty values of said input quantities [4]. In this way, the importance of the different input quantities on the final pressure was evaluated over a wide range of conditions.
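The full-factorial grid of test conditions described above can be generated directly. The numeric values below are those listed in section 4; the variable names are illustrative:

```python
from itertools import product

# the condition grid described in the text (values from section 4)
volume_ratios = [1 / 20, 1 / 150]       # V_P : V_G of 1:20 and 1:150
delta_T = [0.0, 5.0]                    # T_f - T_i in K
p_small = [10_000.0, 50_000.0]          # initial pressure in the small tank, Pa
p_large = [1e-3, 1e-5]                  # residual pressure in the large tank, Pa
n_expansions = [1, 2, 3, 4]             # consecutive expansions

conditions = list(product(volume_ratios, delta_T, p_small, p_large, n_expansions))
# 2 * 2 * 2 * 2 * 4 = 64 simulated conditions
```

Iterating over `conditions` and evaluating each model at every tuple reproduces the comparison protocol of equation (4).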
acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 188 4. results and discussion by modelling the behaviour of nitrogen using the virial expansion truncated at the second term and considering the initial pressure in the large tank, the final pressure after a static expansion process is calculated by 𝑃𝑓 = ((1 + 𝐵𝑓 𝑃𝑓) / (𝑉𝑃 + 𝑉𝐺)) [𝑃𝑖 𝑉𝑃 / (1 + 𝐵𝑖 𝑃𝑖) + 𝑃𝑖,𝐺 𝑉𝐺 / (1 + 𝐵𝑖 𝑃𝑖,𝐺)] (𝑇𝑓 / 𝑇𝑖) . (5) this model was called “model 1” and is the reference model. the main drawback of the model in (5) is that 𝑃𝑓 appears implicitly, so the equation must be solved with a numerical method. in the present work, the secant method [28] was used to solve (5). it became evident that, using as the starting point the value of 𝑃𝑓 calculated with model 4 (which is the simplest), the method required very few steps to achieve convergence. equation (6) presents “model 2”, the resulting model for the final pressure after a static expansion process when using the ideal gas model to represent the behaviour of the substance and considering the initial pressure in the large tank: 𝑃𝑓 = ((𝑃𝑖 𝑉𝑃 + 𝑃𝑖,𝐺 𝑉𝐺) / (𝑉𝑃 + 𝑉𝐺)) (𝑇𝑓 / 𝑇𝑖) . (6) “model 3” is (2) (shown in the “state of the art” section), which represents the calculation of the final pressure obtained with a static expansion, based on the virial expansion truncated at the second term and neglecting the initial pressure in the large tank. this model is implicit in 𝑃𝑓, just like model 1. “model 4” is equation (1) (shown in the “state of the art” section), which represents the calculation of the resulting pressure after a static expansion process modelling the substance as an ideal gas and neglecting the initial pressure in the large tank.
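the secant iteration described above can be sketched as follows; the residual follows the virial form of (5) with 𝐵 denoting the pressure-series second virial coefficient (in 1/pa), and the starting point is taken from the model-4 value, as in the text. function names are ours:

```python
def model4_ideal(p_i, v_p, v_g, t_i, t_f):
    # equation (1): ideal gas, initial pressure in the large tank neglected
    return p_i * v_p / (v_p + v_g) * t_f / t_i

def model1_residual(p_f, p_i, p_ig, v_p, v_g, t_i, t_f, b_i, b_f):
    # equation (5) rearranged to f(P_f) = 0; P_f appears implicitly
    rhs = (1.0 + b_f * p_f) / (v_p + v_g) * (
        p_i * v_p / (1.0 + b_i * p_i) + p_ig * v_g / (1.0 + b_i * p_ig)
    ) * t_f / t_i
    return p_f - rhs

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    # standard secant method [28]
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol * max(1.0, abs(x2)):
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1
```

with b_i = b_f = 0 the solution coincides with the ideal-gas result of (6), which is a convenient sanity check; since (5) is linear in 𝑃𝑓 for fixed coefficients, the secant method converges in very few steps, as noted above.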
sixty-four conditions were simulated, corresponding to the possible combinations of the following values of the input variables: two volume relationships (1:20 and 1:150), two differences between final and initial temperatures (0 k and 5 k), two initial pressures in the small tank (10 000 pa and 50 000 pa), two initial pressures in the large tank (0.001 pa and 0.000 01 pa) and four numbers of consecutive expansions (1, 2, 3 and 4). the percentage errors in the calculation of the final pressure committed by the three models evaluated, in each of the sixty-four conditions, are summarised in figure 2. the highest error made with model 2 is -0.0134 %, and it occurs under certain conditions when the expansion process is performed only once. this behaviour is explained by the fact that the ideal gas model works better the lower the pressure, and the highest pressure values (the lowest vacuum levels) are obtained when performing a single expansion. in any case, the error is quite low and, depending on the target uncertainty in the final pressure, it is possible that model 2 can be used without problems. on the other hand, with models 3 and 4 very high errors are made under certain conditions, exceeding -20 % after three consecutive expansions and reaching -97 % in some cases with four consecutive expansions. these very high errors occur when the initial pressure in the large tank is 1 · 10⁻³ pa, which is an unusually high value considering the capabilities of current vacuum pumps, such as turbomolecular pumps, but one that could occur if the system is not properly baked. in any case, considering that the purpose of the analysis is to evaluate the performance of the models under different conditions, the combinations of values of input quantities tested indicate that in some situations models 3 and 4 will perform unacceptably in determining the reference pressure in a pressure gauge calibration process. figure 2.
percentage error in the final pressure calculated with the evaluated models, against the number of consecutive expansions. a: error made by model 2. b: error presented by model 3. c: resulting error when applying model 4. based on the results of the previous section, it was decided to use model 2 to perform the uncertainty analysis. the gum equation applied to said model is shown in (7): 𝑢(𝑃𝑓) = [(∂𝑃𝑓/∂𝑃𝑖)² 𝑢²(𝑃𝑖) + (∂𝑃𝑓/∂𝑃𝑖,𝐺)² 𝑢²(𝑃𝑖,𝐺) + (∂𝑃𝑓/∂𝑉𝑃)² 𝑢²(𝑉𝑃) + (∂𝑃𝑓/∂𝑉𝐺)² 𝑢²(𝑉𝐺) + (∂𝑃𝑓/∂𝑇𝑓)² 𝑢²(𝑇𝑓) + (∂𝑃𝑓/∂𝑇𝑖)² 𝑢²(𝑇𝑖)]^0.5 . (7) to evaluate the general effect of the uncertainty of the input quantities on the uncertainty of the final pressure, a base case and six derived cases were considered. the base case is presented in table 2. four consecutive expansions were considered in the base case, so that the pressures after the first, second, third and fourth consecutive expansion were 496.7 pa, 4.935 pa, 0.049 03 pa and 0.000 497 0 pa, respectively. in the six derived cases, the values of the input quantities remained identical to those of the base case, as did all the uncertainties except one, which was set at 1 % of the value of the quantity. table 3 presents the standard uncertainties, in terms of percentage of the value of the measurand, obtained after the different numbers of expansions tested in each of the 7 study cases. in all cases, the relative standard uncertainty increases as more expansions are made and the final pressure decreases. in the hypothetical base case, the uncertainty of the pressure values obtained is very high (reaching 1.21 %), considering the possibilities of the system. it can also be seen that increasing the uncertainty from 0.3 % to 1 % for 𝑉𝑃, 𝑉𝐺, 𝑇𝑖 and 𝑇𝑓 has a practically identical effect on the pressure uncertainty, while the impact of this increase in uncertainty for 𝑃𝑖 becomes smaller as the number of consecutive expansions grows. on the other hand, increasing the uncertainty of 𝑃𝑖,𝐺 has a negligible effect for the values used in the base case. figure 3 presents the uncertainty budgets of the seven case studies. after a single expansion, the budgets of the base case and of the case with 1 % uncertainty for 𝑃𝑖,𝐺 show a balanced contribution of the different input quantities, while for the other 5 cases the budget is dominated by the input quantity whose uncertainty was increased. on the other hand, as the number of successive expansions increases, for all the study cases the role of the initial pressure of the expansion process (which is the final pressure reached in the previous expansion) gradually increases in the uncertainty of the resulting pressure. this fact indicates the high importance of the uncertainty of the first expansion process for the uncertainty of subsequent expansions.

figure 3. percentage contributions of the uncertainties of the input quantities to the uncertainty of the final pressure (“uncertainty budget”), for the seven case studies, with four different numbers of consecutive expansions. the percentage contribution of the initial pressure in the large tank was omitted from the budgets, since it was less than 0.03 % in all cases. a: one expansion. b: two expansions. c: three expansions. d: four expansions.

table 2. values of the input quantities, used in both uncertainty analyses, and the standard uncertainties used in the second (realistic) analysis; in the base case of the first analysis, the standard uncertainty of each input quantity was instead 0.3 % of the respective value of the quantity.
input quantity    value        standard uncertainty
𝑃𝑖 (pa)           50 000       45
𝑃𝑖,𝐺 (pa)         0.000 010    0.000 002
𝑉𝑃 (m³)           0.001 000    0.000 005
𝑉𝐺 (m³)           0.100 00     0.000 05
𝑇𝑖 (k)            296.15       0.30
𝑇𝑓 (k)            297.15       0.30
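equation (7) can be evaluated numerically: the sensitivity coefficients are estimated with central differences and combined as a root sum of squares. the sketch below applies this to model 2, chains the four consecutive expansions of the base case (the final pressure of one expansion becomes the initial small-tank pressure of the next, with the large tank pumped back to its residual pressure), and assumes, consistently with the 0.67 % of table 3, that the base case used a standard uncertainty of 0.3 % of the value for every input quantity. function names are ours:

```python
def model2(p_i, p_ig, v_p, v_g, t_i, t_f):
    # equation (6): ideal gas, initial pressure in the large tank kept
    return (p_i * p_i * 0.0 + p_i * v_p + p_ig * v_g) / (v_p + v_g) * t_f / t_i

def chain(p_start, p_ig, v_p, v_g, t_i, t_f, n):
    # n consecutive expansions; the large tank returns to its residual p_ig
    p = p_start
    for _ in range(n):
        p = model2(p, p_ig, v_p, v_g, t_i, t_f)
    return p

def gum_combined(f, values, uncertainties, rel_step=1e-6):
    """first-order gum combination, eq. (7): root sum of squares of
    (sensitivity coefficient x standard uncertainty), with the partial
    derivatives estimated by central differences."""
    total = 0.0
    for name, u in uncertainties.items():
        h = abs(values[name]) * rel_step
        hi = dict(values, **{name: values[name] + h})
        lo = dict(values, **{name: values[name] - h})
        c = (f(**hi) - f(**lo)) / (2.0 * h)   # sensitivity coefficient
        total += (c * u) ** 2
    return total ** 0.5

base = dict(p_i=50_000.0, p_ig=1e-5, v_p=1e-3, v_g=0.1,
            t_i=296.15, t_f=297.15)
u_base = {name: 0.003 * value for name, value in base.items()}
```

under these assumptions the chain reproduces the reported pressures (496.7 pa, 4.935 pa, 0.049 03 pa, 0.000 497 0 pa), and the combined relative uncertainty after one expansion comes out at about 0.67 %.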
additionally, an uncertainty analysis was carried out for a more realistic case, taking values of the input quantities within the expected intervals and assigning them uncertainties similar to those that can be obtained when using medium-to-high quality measurement instruments. table 2 summarises the values used for that case. four consecutive expansions were simulated, and the value of the final pressure, its uncertainty and the respective uncertainty budget were determined. the result is presented in table 4. this study case shows that the resulting pressure uncertainty grows from 0.72 % with one expansion to 1.5 % after the fourth expansion. it is also evident that the uncertainty of the final pressure is dominated by the uncertainties of the volumes of the tanks, with the uncertainty of the volume of each of the tanks contributing 47.2 % to the uncertainty of the pressure after the first expansion (and considering that the pressure after 2, 3 and 4 expansions is dominated by the initial pressure, that is, the final pressure of the previous expansion process). it is interesting that the contribution of the initial pressure in the large tank is only appreciable in the fourth expansion. 5. conclusions it was possible to evaluate the adequacy of the different models proposed. it was determined that the use of an ideal gas model instead of a real gas model caused a maximum error of 0.0135 % in the pressure value under the evaluated conditions (64 different conditions, between 1 and 4 expansion processes). in this way, depending on the uncertainty objective in the calibration process with the expansion system, it is possible that this simplification can be used without problems. on the other hand, neglecting the initial pressure in the calibration chamber can lead to errors in the pressure value of several tens of percent, even of 97 % under the evaluated conditions, especially as the number of consecutive expansions increases.
therefore, it is concluded that it is preferable not to neglect the initial residual pressure in the calibration chamber, unless it is guaranteed that said pressure is maintained at 1 · 10⁻⁷ pa or less, with the baking processes and long periods of pumping that this requires. additionally, it was possible to analyse the effect of the uncertainty of the input quantities on the uncertainty of the final pressure after one or more consecutive expansions. it became evident that the quantities with the greatest influence on the final pressure obtained are the volumes of the tanks used in the expansion processes. acknowledgements this work was financed by colombia’s servicio nacional de aprendizaje (sena) through the special cooperation agreement no. 0233 of 2018. sena’s centro industrial del diseño y la manufactura and centro industrial y del desarrollo tecnológico participated in the project transfer plan.

table 3. relative standard uncertainty (%) of the final pressure after several consecutive expansions, for the seven case studies proposed to review the impact of the input quantities.
study case                first expansion   second expansion   third expansion   fourth expansion
base case                 0.67              0.90               1.1               1.2
𝑢(𝑃𝑖) = 0.01 ∙ 𝑃𝑖         1.2               1.3                1.4               1.5
𝑢(𝑃𝑖,𝐺) = 0.01 ∙ 𝑃𝑖,𝐺     0.67              0.90               1.1               1.2
𝑢(𝑉𝑃) = 0.01 ∙ 𝑉𝑃         1.2               1.6                2.0               2.2
𝑢(𝑉𝐺) = 0.01 ∙ 𝑉𝐺         1.2               1.6                2.0               2.2
𝑢(𝑇𝑖) = 0.01 ∙ 𝑇𝑖         1.2               1.6                2.0               2.2
𝑢(𝑇𝑓) = 0.01 ∙ 𝑇𝑓         1.2               1.6                2.0               2.2

table 4. results of the second uncertainty analysis; the last six columns give the contribution to the uncertainty budget in %.
expansion number   𝑷𝒇 (pa)       𝒖(𝑷𝒇) (pa)     𝑷𝒊     𝑷𝒊,𝑮   𝑉𝑃     𝑉𝐺     𝑇𝑖    𝑇𝑓
1                  496.7         3.6            1.6    0.0    47.2   47.2   2.0   2.0
2                  4.935         0.050          50.4   0.0    23.8   23.8   1.0   1.0
3                  0.049 03      0.000 61       66.8   0.0    15.9   15.9   0.7   0.7
4                  0.000 497 0   0.000 007 3    69.4   7.5    11.1   11.1   0.5   0.5

references [1] national physical laboratory, guide to the measurement of pressure and vacuum, london, 1988.
[2] joint committee for guides in metrology, jcgm 200:2012 – international vocabulary of metrology – basic and general concepts and associated terms (vim) – 3rd ed., joint committee for guides in metrology, 2012. [3] r. e. ellefson, a. p. miller, recommended practice for calibrating vacuum gauges of the thermal conductivity type, journal of vacuum science & technology a 18(5) (2000), pp. 2568-2577. doi: 10.1116/1.1286024 [4] joint committee for guides in metrology, jcgm 100:2008 – evaluation of measurement data – guide to the expression of uncertainty in measurement, joint committee for guides in metrology, 2008. [5] s. ruiz gonzález, desarrollo de un nuevo patrón nacional de presión desde la columna de mercurio a patrones primarios de vacío (tesis doctoral), universidad de valladolid, valladolid, españa, 2000. [6] b. g. lipták, instrument engineers’ handbook – fourth edition – process measurement and analysis – volume i, crc press llc, boca raton, 2003. [7] j. c. greenwood, simulation of the operation and characteristics of static expansion pressure standards, vacuum 80 (2006), pp. 548-553. doi: 10.1016/j.vacuum.2005.09.003 [8] w. jitschin, high-accuracy calibration in the vacuum range 0.3 pa to 4000 pa using the primary standard of static gas expansion, metrologia 39 (2002), pp. 249-261. doi: 10.1088/0026-1394/39/3/2 [9] n. medina, s. ruiz gonzález, c. matilla, developments in the pressure field at cem, imeko 20th tc3, 3rd tc16 and 1st tc222 international conference – cultivating metrological knowledge, mérida, mexico, 2007. online [accessed 8 september 2021] https://www.imeko.org/publications/tc16-2007/imekotc16-2007-036u.pdf [10] m. astrua, d. mari, s. pasqualin, improvement of inrim static expansion system as vacuum primary standard between 10-4 pa and 1000 pa, 19th international congress of metrology, 2019, pp. 27007. doi: 10.1051/metrology/20192700 [11] j. c. torres guzmán, l. a. santander romero, k. 
jousten, realization of the medium and high vacuum primary standard in cenam, mexico, metrologia 42 (2005), pp. s157-s160. doi: 10.1088/0026-1394/42/6/s01 [12] s. phanakulwijit, j. pitakarnnop, establishment of thailand's national primary vacuum standard by a static expansion method, journal of physics: conference series 1380 (2019), pp. 012003. doi: 10.1088/1742-6596/1380/1/012003 [13] d. herranz, a. pérez, realización de un sistema de expansión estática como patrón nacional de presión absoluta en el rango de 10⁻⁴ a 1000 pa, jornada de difusión de resultados de proyectos cem, madrid, españa, 2010. [14] k. f. poulter, the calibration of vacuum gauges, journal of physics e: scientific instruments 10(2) (1977), pp. 112-125. doi: 10.1088/0022-3735/10/2/002 [15] y. takei, h. yoshida, e. komatsu, k. arai, uncertainty evaluation of the static expansion system and its long-term stability at nmij, vacuum 187 (2021), 110034. doi: 10.1016/j.vacuum.2020.110034 [16] international organization for standardization, international standard iso 3567 – vacuum gauges – calibration by direct comparison with a reference gauge, international organization for standardization, 2011. [17] d. herranz, s. ruiz gonzález, n. medina, volume ratio determination in static expansion systems by means of two pressure balances, xix imeko world congress – fundamental and applied metrology, lisboa, portugal, 2009. online [accessed 8 september 2021] https://www.imeko.org/publications/wc-2009/imeko-wc-2009-tc16-280.pdf [18] w. steckelmacher, the calibration of vacuum gauges, vacuum 37(8-9) (1987), pp. 651-657. doi: 10.1016/0042-207x(87)90051-0 [19] centro español de metrología, procedimiento me-001 para la calibración de medidores de vacío – edición digital 1, centro español de metrología, madrid, 2011. [20] j. a. fedchak, p. j. abbott, j. h. hendricks, p. c. arnold, n. t.
peacock, review article: recommended practice for calibrating vacuum gauges of the ionization type, journal of vacuum science & technology a 36(3) (2018), pp. 030802. doi: 10.1116/1.5025060 [21] international organization for standardization, international standard iso 19685 – vacuum gauges – specifications, calibration and measurement uncertainties for pirani gauges, international organization for standardization, switzerland, 2017. [22] p. semwal, z. khan, k. r. dhanani, f. s. pathan, s. george, d. c. raval, p. l. thankey, y. paravatsu, m. himabindu, spinning rotor gauge based vacuum gauge calibration system at the institute for plasma research (ipr), journal of physics: conference series 390 (2012), pp. 012027. doi: 10.1088/1742-6596/390/1/012027 [23] s. cardona b., j. c. torres guzmán, l. santander romero, sistema de referencia nacional para la medición de vacío, simposio de metrología 2001 cenam, querétaro, méxico, 2001. [24] k. jousten, p. röhl, v. m. aranda contreras, volume ratio determination in static expansion systems by means of a spinning rotor gauge, vacuum 52 (1999), pp. 491-499. doi: 10.1016/s0042-207x(98)00337-6 [25] r. kangi, b. ongun, a. elkatmis, the new ume primary standard for pressure generation in the range from 9 × 10⁻⁴ to 10³ pa, metrologia 41 (2004), pp. 251-256. doi: 10.1088/0026-1394/41/4/005 [26] j. m. smith, h. c. van ness, m. m. abbott, introducción a la termodinámica en ingeniería química, 7th ed., mcgraw-hill interamericana, mexico, 2007. [27] j. m. prausnitz, r. n. lichtenthaler, e. gomes de azevedo, termodinámica molecular de los equilibrios de fase, 3rd ed., prentice-hall, madrid, 2000. [28] r. l. burden, j. d. faires, numerical analysis, 9th ed., brooks/cole, cengage learning, usa, 2011.
path planning for data collection robot in sensing field with obstacles acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1 - 10 acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1 path planning for data collection robot in sensing field with obstacles sára olasz-szabó1, istván harmati1 1 dept. of control engineering and information technology, budapest university of technology and economics, budapest, hungary section: research paper keywords: path planning, mobile robots, obstacle avoidance citation: sára olasz-szabó, istván harmati, path planning for data collection robot in sensing field with obstacles, acta imeko, vol. 11, no. 3, article 5, september 2022, identifier: imeko-acta-11 (2022)-03-05 section editor: zafar taqvi, usa received february 26, 2022; in final form july 21, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: sára olasz-szabó, e-mail: olasz-szabo.sara@edu.bme.hu 1. introduction nowadays there is an increasingly common need for continuous data collection over a specified area. the simplest way to perform such data collection is to use wireless sensor networks (wsn) [1], [2]. in most applications, a wsn consists of two parts: one data collection unit (also known as a sink or base station) and a large number of tiny sensor nodes. typically, both the sensor nodes and the sink remain static after deployment. sensor nodes, which are equipped with various sensor units, are capable of sensing the physical world and providing data to the sink through single-hop or multi-hop routing [3]. sensors are usually powered by batteries, which cannot be replaced in some applications, e.g., battlefield surveillance [4]. since the data loss rate increases with distance and each data transmission rate is associated with an energy consumption rate, modelled as a non-decreasing staircase function of the distance [5], remote data sending uses a lot of energy and this deteriorates network lifetime. for these reasons, the data transmission is executed by data collection robots [6], [7]. there are many applications of this technology in the literature of recent years. for example, [8] reviews a range of techniques related to mobile robots in wsns. paper [9] considered deploying a flying robotic network to monitor mobile targets in an area of interest for a specific time period using wsns. the work [10] investigated using a mobile sink, attached to a bus, to collect data in wsns with non-uniform node distribution. however, the robots have limited velocity, and this way the data delay increases significantly. since transmitting over a short distance is more reliable than over a long distance, using robots improves the data collection rate.
in addition, in terms of security, sending mobile sinks to collect data is more secure than transmitting via multihop communication [11]. this may be important in some military applications as well. in paper [12] the authors raise and solve the problem of viable path planning for data collection unicycle robots in a sensing field with obstacles. the robots must visit all sensing nodes and then return to the base station and upload the collected data. path planning for the robots is a crucial problem, since the constructed paths directly relate to performance measures such as the delivery delay and the energy consumption of the system. in a sensing field there are obstacles as well, and the robots must not collide with them. the data collection is carried out by a unicycle dubins car [13], which can only move with constant velocity and bounded angular velocity, so it can move only on straight lines and turn with a bounded turning radius. abstract: using mobile robots to collect data from a wireless sensor network can reduce energy dissipation and this way improve network lifetime. our problem is to plan paths for unicycle robots to visit a set of sensor nodes and download data on a sensing field with obstacles while minimizing the path length and the collecting time. reconstructing the path of an intruder in a guarded area is also a possible application of this technology. during path planning we greatly emphasize the handling of obstacles. if the area contains many or large obstacles, the robots may spend a long time avoiding them, so this is a critical point of finding the minimal path. this paper proposes a new approach for handling obstacles during path planning. a new algorithm is developed to plan the visiting sequence of nodes, taking into consideration the obstacles as well. for successful path planning it is necessary to determine the criteria of an adequate path.
in paper [12] the authors define a viable path, which is smooth, collision-free with the sensor nodes/base station and the obstacles, closed, and provides enough contact time with all the sensor nodes. because of the kinematic properties of the robots, the path must be smooth. a safety boundary is determined around obstacles and nodes for the sake of a collision-free path. all nodes are bounded with a visiting circle whose radius is the minimum turning radius of the unicycle robot. the minimum turning radius depends on the speed of the robot and its maximum angular velocity. moreover, each obstacle's convex hull is bounded with a safety margin, since in the case of the shortest path the robot should move on the boundary of the convex hull. the path must be closed because of the periodical data collection. the robot downloads data only when it moves around the visiting circle, so it makes round trips around the node until it has collected all the data from the sensor node. during path planning it is assumed that the locations of all nodes and obstacles, as well as the shapes of the obstacles, are known. between any two objects (nodes and obstacles) four tangents are defined, but any tangent that intersects another obstacle is removed. so when the robot arrives at a node on a tangent, it starts downloading data, makes round trips until it has collected all the data from the node, and then leaves the node on a tangent. a path thus consists of an adequate configuration of tangents and arcs around objects at the safety distance. the paper is organized as follows. in section 2 we summarize the basic method [12]; section 3 describes the proposed concepts for the path planning and also presents our new algorithms. in section 4 we demonstrate simulation results. 2. summary of the shortest viable path planning algorithm in paper [12] the shortest viable path planning (svpp) algorithm was defined. the main steps of svpp are outlined as algorithm 1.
this algorithm first computes a permutation 𝛴 of the nodes without obstacles by solving an asymmetric travelling salesman problem (atsp). for this, a directed graph is constructed, where the vertices are the nodes and the lengths of the edges are calculated as follows. the length of the edge between two vertices takes into account two aspects: the length of the valid path between their visiting circles and the length of the adjusted arc on the latter vertex. thus, the length of the edge from 𝑠1 to 𝑠2 equals the sum of the average length of the tangents and the length of the adjusted arc on the visiting circle of 𝑠2. conversely, the length of the edge from 𝑠2 to 𝑠1 equals the sum of the average length of the tangents and the length of the adjusted arc on the visiting circle of 𝑠1. with such a directed graph, an atsp solver [14] is used to calculate the permutation 𝛴. at this point there can be tangents that intersect obstacles in the tangent graph 𝐺(𝑉, 𝐸) [15], where 𝑉 denotes the tangent points and 𝐸 the tangents. the second and third steps of the algorithm add the blocking obstacles to the permutation and construct a simplified tangent graph. having 𝛴, 𝐺(𝑉, 𝐸) can be simplified by keeping only the tangent edges that connect succeeding visiting circles in 𝛴 and the corresponding arc edges. when an obstacle blocks the route between a pair of visiting circles, the tangents passing the obstacle's safety boundary are also included in the simplified tangent graph 𝐺′(𝑉′, 𝐸′), and the algorithm inserts the obstacle into the permutation 𝛴′ between the two nodes. one obstacle can block more than one pair of nodes; in this case the algorithm inserts the obstacle into 𝛴′ at more than one place. the algorithm constructs 𝐺′(𝑉′, 𝐸′) by keeping the edges and vertices related to the permutation of nodes and obstacles while deleting the others. obviously, 𝛴 ⊆ 𝛴′. the new graph is called the simplified tangent graph 𝐺′(𝑉′, 𝐸′), where 𝑉′ ⊆ 𝑉 and 𝐸′ ⊆ 𝐸.
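the tangent edges of the tangent graph come from standard two-circle tangent geometry, and blocked tangents can be detected with a segment-to-centre distance test. the sketch below is our own formulation (up to four tangent segments per pair of circles; obstacles are approximated here by bounding circles for the blocking test):

```python
import math

def tangent_segments(c1, r1, c2, r2):
    """up to four tangent segments between two circles (two outer,
    two inner); tangents that do not exist for the geometry are skipped."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    segments = []
    for s1, s2 in ((1.0, 1.0), (1.0, -1.0)):   # outer pair, inner pair
        k = (s2 * r2 - s1 * r1) / d
        if k * k > 1.0:
            continue                            # circles too close: no tangent
        h = math.sqrt(max(0.0, 1.0 - k * k))
        for t in (1.0, -1.0):
            nx = k * dx / d - t * h * dy / d    # unit normal of the line
            ny = k * dy / d + t * h * dx / d
            p1 = (x1 - s1 * r1 * nx, y1 - s1 * r1 * ny)
            p2 = (x2 - s2 * r2 * nx, y2 - s2 * r2 * ny)
            segments.append((p1, p2))
    return segments

def blocked(seg, centre, radius):
    """true if the segment passes closer than `radius` to `centre`
    (point-to-segment distance), i.e. the tangent must be removed."""
    (px, py), (qx, qy) = seg
    cx, cy = centre
    vx, vy = qx - px, qy - py
    seg2 = vx * vx + vy * vy
    t = 0.0 if seg2 == 0.0 else max(0.0, min(1.0, ((cx - px) * vx + (cy - py) * vy) / seg2))
    return math.hypot(cx - (px + t * vx), cy - (py + t * vy)) < radius
```

for two equal visiting circles of radius 1 centred at (0, 0) and (10, 0), all four tangents exist; a small obstacle circle midway between them blocks the two inner tangents but not the outer ones.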
the next step is converting 𝐺′(𝑉′, 𝐸′) to a tree-like graph 𝑇. this gives additional information about the succeeding usable tangents and arcs. from every object there are four tangents departing to the next object; the starting tangent points of these are the departure configurations. likewise, there are four tangents arriving from the previous object; the tangent points of these are the arrival configurations. this means that every object can be transformed into 8 vertices of the tree-like graph. the path length between two objects of the permutation 𝛴′ always consists of two components. the first component is the arc around the first object from the arrival to the departure tangent point, including additional full circles if these are necessary to download the data. the second component is the length of the tangent between the two tangent points. for the calculation of the distance between the 𝑖th and (𝑖+1)th objects (𝑖 ∈ [2, 𝑛′ − 1], where 𝑛′ denotes the number of objects in the permutation), we need information about the tangent and the tangent point between the (𝑖−1)th and 𝑖th objects: one should know which tangent point on the visiting circle of the 𝑖th object the tangent will use, in order to calculate the arc length on that visiting circle. figure 1 illustrates this problem. the path between two objects is represented by two segments in the directed tree graph.

algorithm 1: shortest viable path planning (svpp)
1. compute 𝛴 by solving an atsp instance based on 𝐺(𝑉, 𝐸).
2. compute 𝛴′ by adding to 𝛴 those obstacles whose safety boundaries block tangents between nodes.
3. simplify 𝐺(𝑉, 𝐸) to 𝐺′(𝑉′, 𝐸′) by keeping the edges and the vertices related to 𝛴′ and deleting the others.
4. convert 𝐺′(𝑉′, 𝐸′) to a tree-like graph 𝑇.
5. given an initial configuration, search for the shortest path 𝑃 in 𝑇.

figure 1. the distance between two objects of the permutation consists of two parts: arc and tangent length.
these two parts are the arc and the tangent. so in the tree-like graph the vertices are the tangent points and the edges are the arcs and the tangents. the direction of an edge points to the next part of the path. during the representation of the edges, one must pay attention to the heading constraint: the robot's heading at the beginning of an edge should be equal to that at the end of the previous edge. the base station is the starting node, so the first element of the tree-like graph is one of the points of the base station's visiting circle. because the path is closed, the final element of the tree-like graph should also be one of the points of the base station's visiting circle. since the authors use a dubins car, they construct the tree-like graph for both the positive (clockwise) and the negative (anti-clockwise) initial direction. it can be seen that from each arrival configuration of an element there are two options to reach the arrival configurations of the next element, because of the heading constraint. from a given starting point, the total number of paths starting and ending at this point is 2^(𝑛′−1), taking into consideration that the starting and ending directions should be the same because of the continuous data collection. in paper [12], a dynamic programming based method is used to solve the shortest path search in the tree-like graph. 3. new concepts of solution in this paper new concepts for the svpp algorithm are developed. the new algorithm based on these modifications is called the generalized-svpp algorithm. in the following, these modifications and the new algorithms are described in detail. 3.1. constructing the tangent graph in paper [12], tangents that intersect visiting circles are not included in the tangent graph (assumption 1, see below). however, the robot can move collision-free on a tangent that does not intersect the circle with the node as centre and radius 𝑑𝑠𝑎𝑓𝑒.
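the dynamic-programming search can be sketched abstractly: the state is (object index, rotation direction), and the only thing the search needs is a transition cost. here seg_cost(i, d_prev, d_next) is a hypothetical interface standing in for the arc-plus-tangent length; the point of the sketch is that the run time is linear in the number of objects instead of enumerating all 2^(𝑛′−1) direction assignments. for brevity it scores an open chain; the closed-path case additionally fixes the start direction and requires returning with the same one:

```python
def shortest_path_cost(n_objects, seg_cost):
    """seg_cost(i, d_prev, d_next): arc + tangent length when leaving
    object i with rotation direction d_prev (0 = clockwise,
    1 = anti-clockwise) and entering object i + 1 with d_next."""
    INF = float("inf")
    best = [0.0, 0.0]                   # cost per direction at object 0
    for i in range(n_objects - 1):
        nxt = [INF, INF]
        for d_prev in (0, 1):
            for d_next in (0, 1):
                c = best[d_prev] + seg_cost(i, d_prev, d_next)
                if c < nxt[d_next]:
                    nxt[d_next] = c
        best = nxt
    return min(best)
```

with a toy cost of 1 for keeping the rotation direction and 2 for switching it, four objects give a minimum cost of 3, as expected for three same-direction transitions.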
therefore, in the proposed new algorithm, tangents that do not intersect the circle with radius 𝑑𝑠𝑎𝑓𝑒 around a node are available as well (assumption 2). this way the planned path may be shorter in certain cases. assumption 1: the tangents that intersect visiting circles are not included in the tangent graph. assumption 2: the tangents that intersect visiting circles, but do not intersect circles with the node as centre and radius 𝑑𝑠𝑎𝑓𝑒, are included in the tangent graph. 3.2. permutation of nodes at this point the obstacles are not taken into account when creating the permutation of nodes; tangents that intersect obstacles are allowed in this step. a graph is constructed where the vertices are the nodes and the length of each edge is the average length of the tangents between the two nodes. then a search is performed for the shortest closed cycle containing all of the nodes, namely the shortest hamiltonian cycle in this graph. this is the travelling salesman problem, and by solving it the permutation 𝛴 of the nodes is determined. as presented in section 2, in paper [12] the authors take into account the path length necessary to download the data and solve this problem with an atsp. the exact path length around a visiting circle cannot be determined at this point, since the actual tangents are not yet known; this is the reason why the average length of the tangents is used. 3.3. new concept of handling obstacles, construction of the simplified tangent graph in algorithm 1 (svpp) there are two types of problems in handling obstacles. first these problems are described, and then the solutions for them are presented. the problems are illustrated in figure 2. 1. for example, in figure 2 between node-1 and node-2 there is only one available tangent. when obstacle-1 is in the permutation, the available tangent between the nodes is not feasible in the shortest path planning.
But when obstacle-1 is not in the permutation, there may be no solution at all, depending on the initial configurations, due to the heading constraints.

2. There can be more than one obstacle between two nodes such that these obstacles block different tangents. For instance, in Figure 2, obstacle-2 and obstacle-3 block different tangents between node-3 and node-4.

Figure 2. An example of blocked tangents, illustrated with dashed lines.

acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 4

Tangent directions: A tangent direction is called positive-negative (PN) if the robot travels around the first node in the positive (clockwise) direction and around the succeeding node in the negative direction. The positive-positive (PP), negative-positive (NP) and negative-negative (NN) directions are defined similarly.

In this paper, a new algorithm is proposed to replace the second and third steps of Algorithm 1. Algorithm 1 creates one permutation of nodes and obstacles (Assumption 3). The basic idea of the new algorithm is to calculate more than one permutation (Assumption 4) and to use these to construct the 𝐺′(𝑉′, 𝐸′) simplified tangent graph and then the 𝑇 tree-like graph, in order to obtain a better solution.

Assumption 3: One permutation of nodes and obstacles is created. The original SVPP (Algorithm 1) uses this assumption.

Assumption 4: More than one permutation of nodes and obstacles is created. The new Algorithm 2 is created by applying this assumption.

Instead of the second and third steps of Algorithm 1, the following Algorithm 2 is used. In the first stage, four copies of the 𝛴 permutation of nodes are created; then, for every two consecutive nodes, the blocking obstacles of all four tangents are determined. When one of the tangents intersects an obstacle, the obstacle is inserted into the proper position of the feasible permutation determined by the tangent direction.
Note that when more than one obstacle is intersected, the obstacles are inserted into the feasible permutation according to their distance from the previous node. Then the duplicate permutations are eliminated, and the algorithm is repeated until there are no newly intersected obstacles. While repeating the algorithm, the 𝛴′ permutations of nodes and blocking obstacles are used in the first step instead of the 𝛴 permutation. In the first case, both the direct tangents and the edges passing around the obstacle safety boundaries are inserted into the simplified tangent graph. When one tangent is blocked by more than one obstacle, all unblocked tangents and tangent points between the two nodes, between any obstacle and the two nodes, and between any two obstacles are inserted into the simplified tangent graph. The simplified tangent graph thus contains all of the tangents and tangent points from every permutation; in addition, it contains all of the arcs between the tangent points.

3.4. Constructing the tree-like graph

The Dubins car moves on tangents or on arcs: around an obstacle it moves on the arc between the arrival and the departure configurations, and around a visiting circle while it downloads all the data from the sensor node. Because of the heading constraint, the tangent direction determines the direction of travel around the next object, and the direction around the object determines the available departure tangents. For any two objects, there are two tangents available for a given direction. In the case of one permutation there are two departure configurations for every object, but with more than one permutation the number of departure configurations depends on the permutations and on the simplified tangent graph. In paper [12] there is only one permutation, which causes problems in some cases, as shown in the previous section.
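The permutation-refinement loop of Section 3.3 (formalised as Algorithm 2) can be sketched in a few lines. This is a simplified illustration with our own names; it omits the four direction-wise copies of 𝛴 and keeps refining until no tangent is blocked any more:

```python
def add_obstacles(perm, blocking):
    """Sketch of the refinement loop of Algorithm 2.

    perm              : the Σ permutation of object ids
    blocking(prev, nxt): obstacles whose safety circles block the
                         tangents between two consecutive objects,
                         as (distance_from_prev, obstacle_id) pairs
    Returns the set of refined Σ′ permutations.
    """
    pending, finished = {tuple(perm)}, set()
    while pending:
        p = pending.pop()
        out, changed = [p[0]], False
        for prev, nxt in zip(p, p[1:]):
            # insert blockers ordered by their distance from the previous object
            for _, obs in sorted(b for b in blocking(prev, nxt)
                                 if b[1] not in p):
                out.append(obs)
                changed = True
            out.append(nxt)
        q = tuple(out)
        if changed and q not in finished:
            pending.add(q)      # re-check the refined permutation
        else:
            finished.add(q)     # fixed point reached; duplicates merge here
    return finished
```

For a field where one obstacle blocks the tangents between two consecutive nodes, the loop inserts it once and then terminates, mirroring steps 1-4 of Algorithm 2.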
The new Algorithm 2 proposed in the present article may achieve a shorter path and gives a more general solution, but the tree-like graph becomes more complex. The new algorithm can select the shorter path from more available options. In this paper, Algorithm 3 and Algorithm 4 are recommended for constructing the tree-like graph for the cases with one and with more than one permutation, respectively. We demonstrate the transformation from the simplified tangent graph into the tree-like graph with the help of the example field in Figure 3. To construct the tree-like graph, in the first step two starting and two ending vertices are created, for both the positive and the negative initial direction; this is illustrated in Figure 4 a).

Algorithm 2: Add obstacles to permutations
1. Create four copies of the 𝛴 permutation.
2. For every two consecutive nodes, determine the blocking obstacles of all four tangents and insert these obstacles into the proper positions of the feasible permutations, ordered by their distance from the previous node.
3. Eliminate the duplicate permutations.
4. Jump to step 1 and repeat the algorithm while there are intersecting obstacles; in this case the 𝛴′ permutations are used instead of 𝛴.

Algorithm 3: Constructing 𝑇 in the case of one 𝛴′ permutation
1. For both the negative and the positive direction, add a starting and an ending point as vertices to 𝑇.
2. According to 𝛴′, add 4 arrival and 4 departure tangent points for every object to 𝑇.
3. Determine all arc lengths between all possible arrival and departure configurations, taking into consideration the heading constraint and the possibly different arc-length calculation methods of nodes and obstacles. Add the arc lengths as edge lengths to 𝑇 between the corresponding vertices.
4. Around the visiting circle of the base station, determine the arc lengths between the starting point and the arrival and departure configurations. Add these as edge lengths to 𝑇 in the proper positions.
5. According to 𝛴′, add all tangents between all consecutive objects to 𝑇, taking into consideration the heading constraint.

Algorithm 4: Constructing 𝑇 in the case of more than one 𝛴′ permutation
1. Apply Algorithm 3 to the 𝛴 permutation of nodes; add only the edges that are members of 𝐺′(𝑉′, 𝐸′).
2. For all 𝛴′ permutations:
1.) If there is only one obstacle in the 𝑖-th place between two nodes, run steps 2-4 of Algorithm 3 for the objects 𝜎𝑖−1, 𝜎𝑖, 𝜎𝑖+1, only for edges that are in 𝐺′(𝑉′, 𝐸′).
2.) If there is more than one obstacle between two nodes, denote the obstacles by 𝜕𝑂 = {𝜕𝑜1, …, 𝜕𝑜𝑗}. Do step 1.) for every obstacle 𝜕𝑜𝑖 ∈ 𝜕𝑂 in the given permutation and for the nodes before and after the obstacle. Then run steps 2-4 of Algorithm 3 for every pair of obstacles in 𝜕𝑂, also when these are not subsequent elements of the permutation, naturally only when the corresponding vertices and edges are in 𝐺′(𝑉′, 𝐸′).

Figure 3.
An example field for tree-like graph construction with the 𝛴 = {𝐶1, 𝐶2, 𝐶3, 𝐶4} permutation of nodes.

Figure 4. An example of tree-like graph construction from Figure 3 using Algorithm 4: a) add the starting and ending points as vertices; b) add the tangent points of the nodes as vertices; c) add the arc lengths around the nodes between the arrival and departure configurations as edges; d) add the tangent lengths between the departure and arrival configurations as edges; e) add the tangent points of the tangents between the obstacles and the nodes in the simplified tangent graph as vertices; f) add the arc lengths around the obstacles and the connecting nodes as edges; g) add the tangents between the obstacles and the nodes as edges. Obstacles are denoted by purple; the vertices within the same rectangle denote the tangent points of the same object.

After this, the nodes are inserted into the graph according to the permutation 𝛴. Using 𝐺′(𝑉′, 𝐸′), vertices are created from the tangent points (Figure 4 b). The next step is to determine the edges between the vertices. The length of an edge between the starting point and a departure configuration of the base node's visiting circle equals the arc length between the starting point and that departure configuration. Next, the arc lengths between the arrival and departure configurations around each node are determined, taking into account the heading constraint. Then edges are added to 𝑇 between the ending vertices and the arrival configurations of the base node's visiting circle (Figure 4 c). Finally, between a departure configuration and the arrival configurations of the next node, the lengths of the edges are the lengths of the tangents with the proper direction (Figure 4 d). This step is carried out for all nodes in the order given by the 𝛴 permutation.
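The arc-length edges used in these steps can be computed from the arrival and departure angles on a visiting circle. The sketch below assumes one possible convention (positive = clockwise, angles in radians) and adds the extra whole turns needed when the arc is too short for the data download (cf. Section 4); all names are ours:

```python
import math

def arc_length(theta_in, theta_out, r, direction):
    """Arc length travelled on a circle of radius r from the arrival
    angle theta_in to the departure angle theta_out.
    direction = +1 for positive (clockwise) travel, -1 for negative."""
    d = (direction * (theta_in - theta_out)) % (2 * math.pi)
    return r * d

def arc_with_download(theta_in, theta_out, r, direction, l_download):
    """Extend the arc by whole extra turns until the robot covers at
    least l_download metres on the visiting circle, so the data
    transmission can finish before departure."""
    a = arc_length(theta_in, theta_out, r, direction)
    while a < l_download:
        a += 2 * math.pi * r
    return a
```

For example, leaving a 4 m circle at the same angle as the arrival, a robot that still needs 8 m of download distance makes one full turn of 2π·4 m.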
The next step is to add the obstacles to the tree-like graph according to the 𝛴′ permutation and the 𝐺′(𝑉′, 𝐸′) simplified tangent graph. In the case of Figure 3, obstacle-1 blocks tangents between node-2 and node-3. First, those departure tangent points of the visiting circle of node-2 that also lie on a tangent of obstacle-1 are added as vertices. In addition, the arrival configurations of the obstacle coming from node-2 and the departure configurations going from the obstacle to node-3 are added as vertices to the tree-like graph. Finally, the arrival configurations of node-3 are added to the tree-like graph (Figure 4 e). Edges are added similarly to before (Figure 4 f)-g)). It may occur that there is more than one obstacle between two nodes; in this case, all existing tangent points and tangents between the previous and the next node to/from all obstacles are added as vertices and edges to the tree-like graph, and the tangents and tangent points between all pairs of obstacles in the proper edge direction are added as vertices and edges as well. In paper [12], the SVPP algorithm handles obstacles in the same way as nodes when they are added to the tree-like graph: the authors iterate step by step over the permutation 𝛴′, adding all tangent points to the tree-like graph as vertices, and then add the arcs and tangents as edges, taking into consideration the heading constraint. The tree-like graph constructed from the sensing field of Figure 3 can be seen in Figure 5, using both Algorithm 3 and Algorithm 4. During the construction of the tree-like graph according to Algorithm 4, vertices were first created from the nodes (denoted by black) and the edges were added between them; then the tangent points of the obstacles were added as vertices (denoted by purple), together with the associated edges. The different objects are separated by rectangles in the figure.
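As an illustration of the bookkeeping, each object contributes at most eight tangent-point vertices to 𝑇 (four arrival, four departure, one per tangent direction, cf. the n2_pp-style labels of Figure 5), which yields the 8n′ + 4 vertex bound derived in Section 3.6. The labelling scheme below is our own, not the authors':

```python
DIRECTIONS = ("pp", "pn", "np", "nn")

def tangent_point_vertices(obj_id):
    """Vertex labels for one object of T: four arrival and four
    departure tangent points, one per tangent direction."""
    return ([f"{obj_id}_arr_{d}" for d in DIRECTIONS]
            + [f"{obj_id}_dep_{d}" for d in DIRECTIONS])

def worst_case_vertices(perm_lengths):
    """Worst-case vertex count of T: 8 vertices per object per
    permutation, plus 2 starting and 2 ending vertices at the base
    station. With a single permutation this is the 8n' + 4 bound."""
    return sum(8 * n for n in perm_lengths) + 4
```

A single permutation of 5 objects gives 44 vertices; two such permutations give at most 84 in the worst case.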
It can be seen in the second picture of Figure 5 that there are direct tangents between node-2 and node-3, namely the edge between the vertices n2_pp and n2ton3_pp, while in the first picture of Figure 5 there are only paths using edges that pass obstacle-1 (this is because of Assumption 1 and Assumption 3).

Theorem. Using the new Assumption 2 and Assumption 4, the planned path is always at least as short as the path obtained with the original Assumption 1 and Assumption 3.

Proof. The simplified tangent graph under Assumption 2 or 4 always contains all edges and vertices of the simplified tangent graph under Assumptions 1 and 3. Therefore the tree-like graph under Assumptions 2 and 4 always contains the tree-like graph under Assumptions 1 and 3 as a subgraph, and hence contains all of its paths, and possibly more. Consequently, the shortest path in the tree-like graph under Assumptions 2 and 4 is always shorter than or equal to the shortest path in the tree-like graph under Assumptions 1 and 3. □

Figure 5. An example of tree-like graph construction from Figure 3 using Algorithm 3 and Algorithm 4. Obstacles are denoted by purple.

3.5. Searching for the shortest path in the tree-like graph

After the tree-like graph has been created, it is searched for the shortest path. There are many different ways to find the shortest path; in this paper, the Dijkstra algorithm [16] is applied. The search is carried out for both the positive and the negative initial direction, and the shorter of the two results is selected.

3.6. Complexity of the G-SVPP algorithm

To end this section, we analyse the time complexity of the G-SVPP algorithm for both the Assumption 1 and 3 and the Assumption 2 and 4 cases. The first step is solving the TSP using the Miller-Tucker-Zemlin formulation [17] with 𝒪(n² + n) computational effort.
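The shortest-path search of Section 3.5 can be sketched as a textbook Dijkstra implementation over an adjacency map; the directed edges match the tree-like graph, whose edges point towards the next path element. This is an illustration with our own names, not the authors' code:

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path from src to dst.
    adj: {vertex: [(neighbour, edge_length), ...]} with directed edges.
    Returns (length, vertex list of the shortest path)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:                # walk the predecessors back
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]
```

In the tree-like graph, `src` and `dst` would be the starting and ending vertex of the base station's visiting circle for the chosen initial direction.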
In step 2, we check n pairs of visiting circles to see whether they are blocked by any boundary of a convex hull; each check examines all m obstacles, so the time complexity is 𝒪(nm). In step 3, we perform a constant number of operations on each element of each permutation in 𝛴′, so the time complexity of the simplifying procedure is ∑_{i=1..n_𝛴′} 𝒪(n′_i), where n′_i denotes the length of the i-th permutation of 𝛴′ and n_𝛴′ denotes the number of permutations in 𝛴′. Converting 𝐺′(𝑉′, 𝐸′) to 𝑇 costs 𝒪(1) in step 4. The shortest-path search in 𝑇 is implemented with the Dijkstra algorithm [16]. The computational effort of the Dijkstra algorithm is 𝒪(|E′| + |V′|²) = 𝒪(|V′|²), so it depends on the number of vertices of the tree-like graph 𝑇. In the case of Assumptions 1 and 3, 𝑇 contains at most 8n′ + 4 vertices, since every object has four arrival and four departure configurations and the base station has two starting and two ending vertices. In the case of Assumptions 2 and 4, the tree-like graph in the worst case contains ∑_{i=1..n_𝛴′} 8n′_i + 4 vertices. Therefore, the worst-case computational effort under Assumptions 2 and 4 is n_𝛴′² times larger than the maximum computational effort under Assumptions 1 and 3. However, in most cases the 𝛴′ permutations differ from each other in only a few elements, so the computational effort is much smaller than in the worst case.

4. Simulation results

In the present paper, a 200 m × 200 m virtual field was simulated with 40 nodes, of which one is the base station and the other 39 are sensor nodes. The base node is node-1. Each sensor node stores g = 0.5 MB of data, and gB = (n − 1) · 0.5 MB = 19.5 MB of collected data is uploaded by the robot to the base node for further analysis. The data transmission rate at the visiting circle is r = 250 kB/s. The sensing field also contains 15 obstacles.
The robot speed is 𝑣 = 4 m/s and the maximal angular velocity is |𝑢𝑀| ≤ 1 rad/s; therefore the minimal turning radius, and hence the visiting circle's radius, is 𝑅𝑚𝑖𝑛 = 𝑣/𝑢𝑀 = 4 m. The robot must keep a distance of at least 𝑑𝑠𝑎𝑓𝑒 = 0.5 m from any object in order to avoid a collision. The next step is to construct the 𝐺(𝑉, 𝐸) tangent graph: the tangents and the tangent points between the objects are determined, and then edges are deleted according to Assumption 1 or Assumption 2. The difference between the two assumptions can be seen in Figure 6. The proposed Assumption 2 produces more tangents, since it allows tangents that intersect a visiting circle but do not intersect the circle of radius 𝑑𝑠𝑎𝑓𝑒 centred at the node. This increases the number of possible paths, making shorter path planning feasible, but at the same time it requires more computation. The next step is to determine the 𝛴 permutation of nodes and then to construct the 𝛴′ permutation or permutations with obstacles, depending on Assumption 3 or Assumption 4. Using 𝛴′, the 𝐺′(𝑉′, 𝐸′) simplified tangent graph is constructed; the graph under Assumption 3 and under Assumption 4 can be seen in Figure 7 and Figure 8, respectively.

Figure 6. The difference between the 𝐺(𝑉, 𝐸) tangent graphs using Assumption 1 and Assumption 2.

Figure 7. 𝐺′(𝑉′, 𝐸′) simplified tangent graph using Assumption 1 and Assumption 3.

In Figure 7, the tangent graph constructed under Assumption 1 is simplified by applying Assumption 3. Similarly, Figure 8 shows the 𝐺′(𝑉′, 𝐸′) obtained by applying Assumption 2 and Assumption 4. It can be seen in Figure 8 that, using the Assumption 2 and Assumption 4 proposed in this article, tangents that intersect visiting circles are available and more permutations can be constructed, so there are more available tangents in 𝐺′(𝑉′, 𝐸′) and therefore in the tree-like graph as well.
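The tangent-graph step above needs the tangent segments between pairs of circles. Since all visiting circles share the radius 𝑅𝑚𝑖𝑛, the outer tangents are simply the centre line shifted sideways by the radius; the sketch below (our own names, inner crossing tangents omitted) also reproduces the 𝑅𝑚𝑖𝑛 = 𝑣/𝑢𝑀 arithmetic:

```python
import math

def min_turning_radius(v, u_max):
    """R_min = v / u_max; with v = 4 m/s and u_max = 1 rad/s this is
    4 m, which is also used as the visiting-circle radius."""
    return v / u_max

def outer_tangents(c1, c2, r):
    """The two outer tangent segments between two circles of equal
    radius r. Each segment is a pair of endpoints, one per circle,
    obtained by offsetting both centres by r along the common normal."""
    (x1, y1), (x2, y2) = c1, c2
    theta = math.atan2(y2 - y1, x2 - x1)      # direction of the centre line
    segments = []
    for side in (+1.0, -1.0):                 # one tangent on each side
        nx, ny = -side * math.sin(theta), side * math.cos(theta)
        segments.append(((x1 + r * nx, y1 + r * ny),
                         (x2 + r * nx, y2 + r * ny)))
    return segments
```

For two 4 m circles centred at (0, 0) and (10, 0), the outer tangents are the horizontal segments at y = 4 and y = -4.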
With the proposed assumptions there are more possible paths, and therefore the algorithm may plan a shorter path; on the other hand, the constructed tree-like graph is more complex than in the case of Assumption 3. When Assumption 1 and Assumption 3 are used, there can be at most 4 arrival and 4 departure tangents for every object in the order of the permutation. Usually there are four arrival and four departure tangents, but, for example, in Figure 7 there are only 3 available tangents between obstacle-13 and obstacle-3. Figure 8 shows the case of Assumption 2 and Assumption 4: in the second picture the fourth tangent intersects the safety circle of node-10, so it is not available in the simplified tangent graph either. In Figure 8 there are direct tangents between node-24 and obstacle-3, one of which intersects the visiting circle of node-26, whereas in Figure 7 the robot must visit obstacle-13 first. In Figure 8 there are three available tangents between the visiting circles of node-37 and node-3, but in Figure 7 (Assumptions 1 and 3) the robot can only move on tangents that pass obstacle-6. The next step is to construct the 𝑇 tree-like graphs for both cases, Assumptions 1 and 3 and Assumptions 2 and 4. Finally, a search for the shortest path is carried out in the tree-like graph, for both the positive and the negative initial direction. Figure 9 and Figure 10 show that the path planned using the new Assumptions 2 and 4 proposed in the present article is shorter than in the original case. In this example, the paths planned with positive and negative initial direction differ only in the tangents belonging to the base station. Using Assumptions 2 and 4, the planned path between node-24 and obstacle-3 uses a tangent that intersects the visiting circle of node-26; therefore a shorter path can be achieved with the new assumptions of the present article. Applying Assumptions 1 and 3, the planned path between node-3 and node-37 uses tangents that pass obstacle-6.
On the contrary, if Assumptions 2 and 4 are applied, the planned path uses direct tangents between the two nodes, and this way the determined arc length is shorter.

Figure 8. 𝐺′(𝑉′, 𝐸′) simplified tangent graph using Assumption 2 and Assumption 4.

Figure 9. The resulting shortest path with negative initial direction in the 𝑇 tree-like graph, both for the case using Assumption 1 and Assumption 3 and for the case using Assumption 2 and Assumption 4. The lengths of the planned paths are 2341.94 m and 2290.85 m, respectively.

Downloading the data of a sensor node requires an arc length of 𝑙 = 𝑔·𝑣/𝑟 = 8 m, while the circumference of a visiting circle is 𝐾 = 25.12 m; therefore the robot must travel at least approximately one third of the visiting circle's circumference to have enough time for the data transmission. It can be seen that the algorithm preferably chooses the tangents between node-31, node-32 and node-33 in such a way that the robot is not required to make extra round trips around these nodes. As a test, the path planning was run for ten different virtual sensing fields. The lengths of the planned paths are given in Table 1, where the solutions using Assumptions 1 and 3 are compared with the solutions using Assumptions 2 and 4. As proven in the theorem, with the new assumptions proposed in the present article, the planned path is always at least as short as the path obtained with the original assumptions of [12]. The number of vertices of the 𝑇 tree-like graph is also reported in Table 1. The maximum difference between the vertex counts of Assumptions 2 and 4 and of Assumptions 1 and 3 is 54, in test field 5; in this case the computational effort under Assumptions 2 and 4 is increased by 27 % compared to Assumptions 1 and 3, while at the same time the planned path is 92 m shorter.
That means a saving of 𝑡 = 92 m / (4 m/s) = 23 s per period, so the robot can make 6 extra round trips per day.

5. Conclusions

In the present paper, a path-planning algorithm was developed for unicycle robots moving between sensor nodes. The task is to collect all the data from the sensor nodes and then upload it to the base node. To increase the effectiveness of data collection, the length and duration of the trip should be minimised, while maintaining a collision-free path around the nodes and obstacles. New algorithms were developed for handling the obstacles, and new assumptions were applied to guarantee a collision-free path. A new algorithm for tree-like graph generation was also developed; the search for the shortest viable path finally takes place in this graph. The paper concluded with a detailed presentation of simulation results: the preparation of the field and the steps of the path-planning algorithm were illustrated with figures, and the results of the new algorithm were compared with the results of previous research. In conclusion, the new algorithm presented in this article proved to achieve shorter paths than the earlier algorithms.

Figure 10. The resulting shortest path with positive initial direction in the 𝑇 tree-like graph, both for the case using Assumption 1 and Assumption 3 and for the case using Assumption 2 and Assumption 4. The lengths of the planned paths are 2337.64 m and 2286.55 m, respectively.

Table 1. The length of the planned path for ten virtual sensing fields, using both positive and negative starting directions. |𝑉′| denotes the number of vertices of the tree-like graph, on which the computational effort depends.
Test field | Assumptions 1 and 3: positive | negative | |𝑉′| | Assumptions 2 and 4: positive | negative | |𝑉′|
1 | 2326.23 m | 2233.61 m | 372 | 2254.71 m | 2222.65 m | 386
2 | 2393.66 m | 2387.67 m | 418 | 2332.58 m | 2326.59 m | 462
3 | 2305.82 m | 2313.80 m | 404 | 2270.74 m | 2278.72 m | 444
4 | 2372.37 m | 2359.42 m | 436 | 2363.87 m | 2350.92 m | 466
5 | 2413.25 m | 2403.51 m | 412 | 2321.40 m | 2311.65 m | 466
6 | 2294.77 m | 2288.91 m | 402 | 2193.65 m | 2189.18 m | 452
7 | 2335.53 m | 2317.82 m | 396 | 2312.25 m | 2294.55 m | 426
8 | 2330.93 m | 2324.78 m | 378 | 2330.51 m | 2324.36 m | 386
9 | 2247.02 m | 2262.38 m | 372 | 2238.79 m | 2254.14 m | 386
10 | 2301.88 m | 2311.12 m | 360 | 2301.88 m | 2311.12 m | 368

Acknowledgement

The research reported in this paper and carried out at the Budapest University of Technology and Economics was supported by the "TKP2020, Institutional Excellence Program" of the National Research Development and Innovation Office in the field of artificial intelligence (BME IE-MI-SC TKP2020). The research was supported by the EFOP-3.6.2-16-2016-00014 project financed by the Ministry of Human Capacities of Hungary. The research reported in this paper is part of project no. BME-NVA-02, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021 funding scheme.

References

[1] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, J. Anderson, Wireless sensor networks for habitat monitoring, 1st ACM Int. Workshop on Wireless Sensor Networks and Applications, Atlanta, Georgia, USA, 2002, pp. 88-97. DOI: 10.1145/570738.570751
[2] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, B. Krogh, Energy-efficient surveillance system using wireless sensor networks, 2nd Int. Conference on Mobile Systems, Applications, and Services, ACM, Boston, Massachusetts, USA, 2004, pp. 270-283. DOI: 10.1145/990064.990096
[3] J. N. Al-Karaki, A. E.
Kamal, Routing techniques in wireless sensor networks: a survey, IEEE Wireless Communications, vol. 11, no. 6, 2004, pp. 6-28. DOI: 10.1016/j.proeng.2012.06.320
[4] Y. Gu, F. Ren, Y. Ji, J. Li, The evolution of sink mobility management in wireless sensor networks: a survey, IEEE Commun. Surv. Tut. 18 (1), 2015, pp. 507-524. DOI: 10.1109/comst.2015.2388779
[5] X. Ren, W. Liang, W. Xu, Data collection maximization in renewable sensor networks via time-slot scheduling, IEEE Trans. Comput. 64 (7), 2015, pp. 1870-1883. DOI: 10.1109/tc.2014.2349521
[6] I. Chatzigiannakis, A. Kinalis, S. Nikoletseas, Sink mobility protocols for data collection in wireless sensor networks, 4th ACM Int. Workshop on Mobility Management and Wireless Access, ACM, Terromolinos, Spain, 2006, pp. 52-59. DOI: 10.1145/1164783.1164793
[7] Y. Yun, Y. Xia, Maximizing the lifetime of wireless sensor networks with mobile sink in delay-tolerant applications, IEEE Trans. Mob. Comput., vol. 9, 2010, pp. 1308-1318. DOI: 10.1109/tmc.2010.76
[8] H. Huang, A. V. Savkin, M. Ding, C. Huang, Mobile robots in wireless sensor networks: a survey on tasks, Computer Networks 148, 2019, pp. 1-19. DOI: 10.1016/j.comnet.2018.10.018
[9] H. Huang, A. V. Savkin, Reactive 3D deployment of a flying robotic network for surveillance of mobile targets, Computer Networks 161, 2019, pp. 172-182. DOI: 10.1016/j.comnet.2019.06.020
[10] H. Huang, A. V. Savkin, An energy efficient approach for data collection in wireless sensor networks using public transportation vehicles, AEU - International Journal of Electronics and Communications 75, 2017, pp. 108-118. DOI: 10.1016/j.aeue.2017.03.012
[11] Y. Gu, F. Ren, Y. Ji, J. Li, The evolution of sink mobility management in wireless sensor networks: a survey, IEEE Commun. Surv. Tut. vol. 17, 2015, pp. 507-524. DOI: 10.1109/comst.2015.2388779
[12] H. Huang, A. V.
Savkin, Viable path planning for data collection robots in a sensing field with obstacles, Computer Communications 111, 2017, pp. 84-96. DOI: 10.1016/j.comcom.2017.07.010
[13] L. E. Dubins, On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents, American Journal of Mathematics, vol. 79, no. 3, 1957, pp. 497-516. DOI: 10.2307/2372560
[14] A. M. Frieze, G. Galbiati, F. Maffioli, On the worst-case performance of some algorithms for the asymmetric traveling salesman problem, Networks, vol. 12, no. 1, 1982, pp. 23-39. DOI: 10.1002/net.3230120103
[15] A. V. Savkin, M. Hoy, Reactive and the shortest path navigation of a wheeled mobile robot in cluttered environments, Robotica, vol. 31, issue 2, 2013, pp. 323-330. DOI: 10.1017/s0263574712000331
[16] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik, vol. 1, no. 1, 1959, pp. 269-271. DOI: 10.1007/bf01386390
[17] C. E. Miller, A. W. Tucker, R. A. Zemlin, Integer programming formulation of traveling salesman problems, J. ACM 7 (4), October 1960, pp. 326-329.
DOI: 10.1145/321043.321046

Uncertain estimation-based motion-planning algorithms for mobile robots

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, pp. 51-60

Zoltán Gyenes¹, Emese Gincsainé Szádeczky-Kardoss¹
¹ Budapest University of Technology and Economics, Magyar Tudósok körútja 2, 1117 Budapest, Hungary

Section: Research paper
Keywords: motion planning; mobile robots; cost function; uncertain estimations
Citation: Zoltán Gyenes, Emese Gincsainé Szádeczky-Kardoss, Uncertain estimation-based motion planning algorithms for mobile robots, ACTA IMEKO, vol. 10, no. 3, article 9, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-09
Section editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary
Received January 15, 2021; in final form August 9, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Zoltán Gyenes, e-mail: gyezo12@gmail.com
1. Introduction

Autonomous driving is a highly active research area for mobile robots, cars and drones. Robots have to generate a collision-free motion towards the target position while maintaining safety with respect to the obstacles that occur in the local environment. Motion-planning methods generate both the velocity and the path profiles for the robot, using measured information about the velocity vectors and the positions of the obstacles. Motion-planning algorithms can be divided into two classes. If all the data about the robot's environment are known and available at the start, then global motion-planning algorithms can be used to generate a collision-free path [1], [2]. However, if the robot can only use local sensor-based information about its surrounding dense and dynamic environment, then reactive motion-planning algorithms can provide an acceptable solution for generating the robot's path and velocity [3], [4]. With a reactive motion-planning algorithm, generating optimal evasive manoeuvres that ensure safe motion for the agent and the environment is an NP-hard problem [5]. The task is more difficult if the uncertainties of the measured data (velocity vectors and positions) are taken into account. In this paper, a novel reactive motion-planning algorithm is presented that can calculate the uncertainty of every obstacle using its velocity vector and its distance from the agent. The paper is organised in the following way. Section 2 outlines some often-used reactive motion-planning methods that have been introduced in recent decades; in some of these algorithms, the uncertainties of the measured data have also been considered. At the end of Section 2, the basics of the velocity obstacle (VO) and artificial potential field (APF) methods are presented. In Section 3, the novel concept for the calculation of obstacle uncertainties is set out.
Section 4 then presents the introduced motion-planning algorithms, which can generate a safe motion for the agent taking the uncertainties into account. In Section 5, the simulation results are presented and the introduced motion-planning methods are compared. In Section 6, the CoppeliaSim simulation environment is discussed, and Section 7 provides a conclusion and sets out plans for future research.

Abstract: Collision-free motion planning for mobile agents is a challenging task, especially when the robot has to move towards a target position in a dynamic environment. The main aim of this paper is to introduce motion-planning algorithms that use the changing uncertainties of the sensor-based data of obstacles. Two main algorithms are presented in this work. The first is based on the well-known velocity obstacle motion-planning method; in this method, collision-free motion must be achieved by the algorithm using a cost-function-based optimisation method. The second algorithm is an extension of the often-used artificial potential field. For this study, it is assumed that some of the obstacle data (e.g. the positions of static obstacles) are already known at the beginning of the algorithm (e.g. from a map of the environment), but other information (e.g. the velocity vectors of moving obstacles) must be measured using sensors. The algorithms are tested in simulations and compared in different situations.

2. Previous work

In this section, a few reactive motion-planning algorithms are presented. The inevitable collision states (ICS) method calculates all states of the robot in which no available control command would result in a collision-free motion between the robot and the environment; the main goal is to ensure that the agent never finds itself in an ICS situation. The algorithm is appropriate not only for static but also for dynamic environments [6]-[8].
the main concept behind the dynamic window method [9], [10] is that the agent selects a velocity vector from the reachable and admissible set of the velocity space. the robot executes a collision-free motion by selecting a velocity vector from the admissible velocity set, while the reachable velocities are generated using the kinematic and dynamic constraints of the agent.

the admissible gap is a relatively new concept for motion-planning algorithms [11]. if the robot can move through a gap safely using motion control while obeying its own constraints, then the gap is admissible. this method is also usable in an unknown environment. the gap-based online motion-planning algorithm has also been used with a lidar sensor by introducing a binary sensing vector (the value of a vector element is equal to 1 if there is an obstacle in that direction) [12].

2.1. velocity obstacle method

the main concept behind our method is based on the vo method [13]. using the positions and the velocities of the obstacles and the agent, the vo method generates a collision-free motion for the robot. the vo concept has been used in different methods. the steps in the vo method are as follows: $B_i$ denotes the different obstacles ($i = 1 \dots m$, where $m$ represents the number of obstacles), and the agent is $A$. for every obstacle, a $VO_i$ cone can be generated that contains every robot velocity vector that would result in a collision between the agent ($A$) and the obstacle ($B_i$) at a future time:

$$VO_i = \{ \mathbf{v}_\mathrm{a} \mid \exists t : (\mathbf{p}_\mathrm{a} + \mathbf{v}_\mathrm{a} t) \cap (\mathbf{p}_{\mathrm{b}i} + \mathbf{v}_{\mathrm{b}i} t) \neq \emptyset \} \,, \tag{1}$$

where $\mathbf{p}_\mathrm{a}$ and $\mathbf{p}_{\mathrm{b}i}$ are the positions and $\mathbf{v}_\mathrm{a}$ and $\mathbf{v}_{\mathrm{b}i}$ are the velocity vectors of the robot and the obstacle. the velocities of the obstacles and the robot are assumed to be constant until $t$. if there are several obstacles, then the whole $VO$ set can be generated:

$$VO = \bigcup_{i=1}^{m} VO_i \,. \tag{2}$$

figure 1 provides an example in which a moving obstacle is in position $\mathbf{p}_{\mathrm{b}1}$ and has velocity $\mathbf{v}_{\mathrm{b}1}$ at the actual time step.
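as an illustration, the membership test of eq. (1) can be sketched in python. the obstacle is modelled here as a disc of combined radius `r`, which is an assumption of the sketch (eq. (1) is stated for point trajectories; practical implementations enlarge the obstacle by the robot and obstacle radii):

```python
import numpy as np

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r=0.5):
    """check whether a candidate velocity v_a lies inside VO_i (eq. 1).
    the obstacle is a disc of combined radius r (sketch assumption)."""
    p_rel = np.asarray(p_b, float) - np.asarray(p_a, float)
    v_rel = np.asarray(v_a, float) - np.asarray(v_b, float)
    denom = v_rel @ v_rel
    if denom == 0.0:
        # zero relative velocity: collision only if already overlapping
        return bool(np.linalg.norm(p_rel) <= r)
    # time of closest approach of the relative ray to the obstacle centre
    t = (p_rel @ v_rel) / denom
    if t < 0.0:
        return False  # closest approach lies in the past
    d_min = np.linalg.norm(p_rel - v_rel * t)
    return bool(d_min <= r)
```

a velocity pointing straight at a static obstacle is inside the cone, while one moving perpendicular to it is not.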
there is also a static obstacle in the workspace of the agent ($\mathbf{p}_{\mathrm{b}2}$ represents its position). the two vo areas are depicted in blue.

the reachable velocities (rv) can be defined as the velocity area that contains every $\mathbf{v}_\mathrm{a}$ velocity of the agent that is reachable considering the previously selected velocity vector and the motion capabilities of the robot. the reachable avoidance velocities (rav) are obtained by subtracting the vo set from the rv set. figure 2 represents the steps of the motion-planning algorithm. the main difference between the algorithms is the method for selecting the robot's velocity vector from the rav set.

the εcca method is an extended version of the reciprocal velocity obstacle (rvo) algorithm [14] that uses the kinodynamic constraints of the robot. the method generates an appropriate solution for the multi-robot collision avoidance problem in a complex environment, and the computational time plays an important role in the algorithm. the whole environment of the agent is divided into a grid-based map, and the agent selects a collision-free velocity vector using both convex and non-convex optimisation algorithms [15].

figure 1. velocity obstacle method.
figure 2. steps of the whole vo algorithm.

the probabilistic velocity obstacle method is also an extended version of the rvo method [14]; it uses the time-scaling method and bayesian decomposition. this method demonstrates better performance in terms of traversal times than the existing bound-based methods. the algorithm was tested using simulation results [16].

the collision avoidance under bounded localisation uncertainty method [17] introduced convex hull peeling, resulting in a bound on the localisation error. this method achieves a tighter bound than the previously introduced multi-robot collision avoidance with localisation uncertainty method [18].
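the rv/rav construction described in section 2.1 can be sketched as follows. the polar sampling grid, the acceleration bound `dv_max` and the abstract `vo_test` predicate (any test for membership in the vo set) are assumptions of the sketch:

```python
import numpy as np

def sample_rav(v_prev, vo_test, v_max=1.0, dv_max=0.2, n=24, m=5):
    """sketch of the rv/rav construction: sample velocities reachable
    from v_prev within an assumed acceleration bound dv_max, then keep
    those outside every VO cone. vo_test(v) -> True if v is inside VO."""
    v_prev = np.asarray(v_prev, float)
    rav = []
    for ang in np.linspace(0.0, 2 * np.pi, n, endpoint=False):
        for r in np.linspace(0.0, dv_max, m + 1)[1:]:
            v = v_prev + r * np.array([np.cos(ang), np.sin(ang)])
            # rv: respect the speed limit; rav: additionally outside VO
            if np.linalg.norm(v) <= v_max and not vo_test(v):
                rav.append(v)
    return rav
```

the selection step of the individual algorithms (e.g. the cost-function-based selection of section 4.2) then picks one element of this set.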
a particle filter can be used for robot localisation problems. in this case, convex polygons are generated as the robot footprints, and the algorithm ensures that the robot is inside this convex polygon with a probability of $1 - \varepsilon$. a time truncation is also used in the algorithm because it supports the velocity selection even in a crowded, complex environment.

the directive circle (dc) method is an extended version of the vo method [19], [20]. in this algorithm, the velocity of the robot is selected from the dc, whose radius is the maximum velocity of the agent. ensuring the kinematic constraints of the agent, the best solution is selected from the dc in the optimal direction for the target position. using the dc method, local minima situations are prevented.

all the presented reactive motion-planning algorithms assume a complete set of information on the positions and the velocity vectors of the obstacles that occur in the robot's workspace. the main advantage of our introduced method is that the uncertainty of the measured sensor information can be taken into consideration, and the novel motion-planning algorithm can generate collision-free target reaching for the agent even when the data is inaccurate.

2.2. artificial potential field method

the apf method is an often-used reactive motion-planning algorithm. the main concept is to calculate the summation of the attractive (between the robot and the target) and repelling (between the agent and the obstacles) forces [21], [22]. one weakness of this algorithm is that sometimes only a local optimum can be found. the algorithm has also been developed for unmanned aerial vehicles [23], while human-robot interaction has also been simulated using the apf motion-planning method by adapting the motion characteristics of household animals [24]. the steps of the apf method are as follows: during motion planning, in every sampling time step, the resulting force $\mathbf{A}_\mathrm{r}$ will influence the motion of the agent.
the $\mathbf{A}_\mathrm{r}$ force depends on the repelling ($\mathbf{F}_{\mathrm{ar}i}$) and the attractive ($\mathbf{F}_\mathrm{rc}$) forces. the closer the robot is to the obstacle, the larger the magnitude of the repelling force. the repelling force can be calculated by

$$\mathbf{F}_{\mathrm{ar}i} = \eta \sqrt{\frac{1}{D_{\mathrm{ra}i}} + \frac{1}{D_{\mathrm{ra}\max}}} \, \frac{\mathbf{AR}_i}{D_{\mathrm{ra}i}^2} \,, \tag{3}$$

where $D_{\mathrm{ra}i}$ denotes the distance between the robot and the obstacle, $\mathbf{AR}_i$ is the vector between the obstacle and the agent, $\eta$ is a specific parameter that determines the weight of the repelling force in the motion-planning algorithm and $D_{\mathrm{ra}\max}$ is the largest distance that should be considered in the motion-planning algorithm, which can be calculated as

$$D_{\mathrm{ra}\max} = v_{\max} \, T_\mathrm{s} \,, \tag{4}$$

where $v_{\max}$ is the maximum velocity of the robot and $T_\mathrm{s}$ denotes the sampling time. the attractive force can be calculated as

$$\mathbf{F}_\mathrm{rc} = \xi \, \mathbf{RC} \,, \tag{5}$$

where $\mathbf{RC}$ is the vector between the robot and the target and $\xi$ is the parameter of the attractive force (depending on the usage of the algorithm). the force that influences the motion of the robot can be calculated as the sum of the repelling and the attractive forces:

$$\mathbf{A}_\mathrm{r} = \sum_{i=1}^{m} \mathbf{F}_{\mathrm{ar}i} + \mathbf{F}_\mathrm{rc} \,. \tag{6}$$

figure 3 illustrates the different forces presented. there is one obstacle in the workspace ($B_1$) with the position $\mathbf{p}_{\mathrm{b}1}$. the agent is at the $\mathbf{p}_\mathrm{a}$ position at this point, and the summation of the forces can be checked. if the mass of the agent is known, then the acceleration can be calculated using newton's second law:

$$\mathbf{a} = \frac{\mathbf{A}_\mathrm{r}}{m} \,, \tag{7}$$

where $m$ is the mass of the robot. the change in the velocity vector can be calculated if the force and the sampling time are known:

$$\Delta\mathbf{v} = \mathbf{a} \, T_\mathrm{s} \,. \tag{8}$$

the actual velocity can then be calculated using the previous velocity and the change in velocity:

$$\mathbf{v}_\mathrm{new} = \mathbf{v}_\mathrm{prev} + \Delta\mathbf{v} \,. \tag{9}$$

3. uncertainty calculation using measurement data

in previous studies, all the uncertainties of the obstacles were kept constant throughout the algorithm [25].
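one apf update, eqs. (3)-(9), can be sketched in a few lines of python. the values $\eta = 2$ and $\xi = 0.1$ match those used later in section 5, while `v_max`, `Ts` and the unit mass are illustrative assumptions:

```python
import numpy as np

def apf_step(p_a, v_prev, p_goal, obstacles, eta=2.0, xi=0.1,
             v_max=1.0, Ts=0.1, mass=1.0):
    """one apf update following eqs. (3)-(9).
    obstacles is a list of obstacle positions."""
    p_a, v_prev, p_goal = (np.asarray(x, float) for x in (p_a, v_prev, p_goal))
    D_ra_max = v_max * Ts                        # eq. (4)
    Ar = xi * (p_goal - p_a)                     # attractive force, eq. (5)
    for p_b in obstacles:
        AR = p_a - np.asarray(p_b, float)        # vector from obstacle to agent
        D = np.linalg.norm(AR)
        if D == 0.0:
            continue
        # repelling force, eq. (3)
        Ar += eta * np.sqrt(1.0 / D + 1.0 / D_ra_max) * AR / D**2
    a = Ar / mass                                # newton's second law, eq. (7)
    return v_prev + a * Ts                       # eqs. (8)-(9)
```

with no obstacles the agent accelerates straight towards the goal; an obstacle between the agent and the goal reduces (or reverses) the forward component, which is exactly the oscillation effect discussed in section 5.1.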
in the present study, they will be adjusted using the changes in the velocity vectors of the obstacles, the actual distances of the obstacles from the robot and the magnitudes of the obstacles' velocity vectors.

figure 3. apf method.

the uncertainties can be calculated from the probabilities of the previously introduced parameters. the main concept behind this method is that the measured information has lower reliability if the obstacles are far from the robot. first, the distance-based probability term is generated:

$$P_{\mathrm{dist}i} = \begin{cases} 1 - \dfrac{\mathrm{dist}_{ORi}}{v_{\max} T_\mathrm{u}} & \text{if } \mathrm{dist}_{ORi} < v_{\max} T_\mathrm{u} \\ 0 & \text{otherwise} \end{cases} \,, \tag{10}$$

where $T_\mathrm{u}$ is the uncertainty time parameter, $P_{\mathrm{dist}i}$ is the distance-based probability term and $\mathrm{dist}_{ORi}$ is the actual distance between the robot and the obstacle $B_i$.

the magnitude of the velocity vector of the obstacle also plays a significant role in generating the uncertainties; the higher the velocity of the obstacle, the smaller the reliability of the available information on the obstacle:

$$P_{\mathrm{mv}i} = \begin{cases} 1 - \dfrac{\lVert \mathbf{v}_{\mathrm{b}i} \rVert}{v_{\max}} & \text{if } \lVert \mathbf{v}_{\mathrm{b}i} \rVert < v_{\max} \\ 0 & \text{otherwise} \end{cases} \,, \tag{11}$$

where $\lVert \mathbf{v}_{\mathrm{b}i} \rVert$ refers to the actual magnitude of the velocity of the obstacle $B_i$ ($\lVert \cdot \rVert$ is the euclidean norm) and $P_{\mathrm{mv}i}$ is the velocity-based probability term.

the change in the obstacle's velocity vector also influences the magnitude of the uncertainties. the change in the velocity of the obstacle can be calculated for each obstacle:

$$CV_i = \lVert \mathbf{v}_{\mathrm{b}i,\mathrm{new}} - \mathbf{v}_{\mathrm{b}i,\mathrm{old}} \rVert \,, \tag{12}$$

where $\mathbf{v}_{\mathrm{b}i,\mathrm{new}}$ is the actual velocity of the obstacle, $\mathbf{v}_{\mathrm{b}i,\mathrm{old}}$ is the previous velocity of the obstacle and $CV_i$ denotes the change in the obstacle's velocity:

$$P_{\mathrm{cv}i} = \begin{cases} 1 - \dfrac{CV_i}{2 \, v_{\max}} & \text{if } CV_i < v_{\max} \\ 0 & \text{otherwise} \end{cases} \,, \tag{13}$$

where $P_{\mathrm{cv}i}$ is the probability term depending on the change in the velocity vector of the obstacle. the probability for obstacle $B_i$ can then be generated as

$$P_i = \frac{P_{\mathrm{dist}i} + P_{\mathrm{mv}i} + P_{\mathrm{cv}i}}{3} \,. \tag{14}$$
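the three probability terms of eqs. (10)-(14) can be sketched directly in python; the values of `v_max` and the uncertainty time parameter `T_u` used here are arbitrary example values:

```python
import numpy as np

def obstacle_probability(p_a, p_b, v_b_new, v_b_old, v_max=1.0, T_u=5.0):
    """probability of obstacle B_i, eqs. (10)-(14).
    v_max and T_u are example parameter values (assumptions)."""
    dist = np.linalg.norm(np.asarray(p_b, float) - np.asarray(p_a, float))
    # distance-based term, eq. (10)
    P_dist = 1.0 - dist / (v_max * T_u) if dist < v_max * T_u else 0.0
    # velocity-magnitude term, eq. (11)
    speed = np.linalg.norm(np.asarray(v_b_new, float))
    P_mv = 1.0 - speed / v_max if speed < v_max else 0.0
    # velocity-change term, eqs. (12)-(13)
    CV = np.linalg.norm(np.asarray(v_b_new, float) - np.asarray(v_b_old, float))
    P_cv = 1.0 - CV / (2.0 * v_max) if CV < v_max else 0.0
    return (P_dist + P_mv + P_cv) / 3.0          # eq. (14)
```

for a static obstacle one unit away (with $v_{\max} = 1$, $T_\mathrm{u} = 5$), the terms are $0.8$, $1$ and $1$, giving $P_i = 2.8/3$.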
the uncertainty parameter can be calculated from this probability:

$$\alpha_i = 1 - P_i \,, \tag{15}$$

where the $\alpha_i$ uncertainty parameter must be calculated for every obstacle ($i = 1 \dots m$, if there are $m$ obstacles in the environment; if $\alpha_i = 0$, then there is no measurement uncertainty).

4. velocity selection based on motion-planning algorithms

4.1. precheck algorithm

during the motion-planning algorithm, the agent has to consider only those obstacles that fulfil the precheck algorithm. two obstacle situations are excluded:

• obstacles that will cross the path of the agent only in the distant future,
• obstacles that are at a considerable distance from the agent.

for all obstacles, the time and the minimum distance must be calculated for the point at which the agent and the obstacle are closest to each other during their motion:

$$t_{\min\mathrm{a,b}i} = -\frac{(\mathbf{p}_\mathrm{a} - \mathbf{p}_{\mathrm{b}i}) \cdot (\mathbf{v}_\mathrm{a} - \mathbf{v}_{\mathrm{b}i})}{\lVert \mathbf{v}_\mathrm{a} - \mathbf{v}_{\mathrm{b}i} \rVert^2} \,, \tag{16}$$

where $t_{\min\mathrm{a,b}i}$ is the time at which the agent and the obstacle are closest to each other; the nearest point is in the past if the value of this parameter is negative. the minimal distance can be calculated as follows:

$$d_{\min\mathrm{a,b}i} = \lVert (\mathbf{p}_\mathrm{a} + \mathbf{v}_\mathrm{a} t_{\min\mathrm{a,b}i}) - (\mathbf{p}_{\mathrm{b}i} + \mathbf{v}_{\mathrm{b}i} t_{\min\mathrm{a,b}i}) \rVert \,. \tag{17}$$

so, only those obstacles that fulfil the following inequalities must be considered:

$$0 < t_{\min\mathrm{a,b}i} < 2 \, T_\mathrm{precheck} \quad \text{and} \quad d_{\min\mathrm{a,b}i} < v_{\max} T_\mathrm{precheck} \,, \tag{18}$$

where $v_{\max}$ denotes the maximum velocity of the agent and $T_\mathrm{precheck}$ is a parameter of the algorithm that must be tuned. the experiments in this study demonstrate that if the value of the $T_\mathrm{precheck}$ parameter is too small, the generated path is not smooth enough. the precheck algorithm is illustrated in figure 4: when there is a moving obstacle in the robot's workspace, the minimal distance and the corresponding time point can be calculated when the obstacle and the agent are closest to each other.

4.2. cost-function-based velocity selection using the extended vo method

the safety velocity obstacle method has been defined in a previous study [26].
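the precheck filter of eqs. (16)-(18) can be sketched as follows; the value of `T_precheck` is an arbitrary example, since the paper notes it must be tuned:

```python
import numpy as np

def precheck(p_a, v_a, p_b, v_b, v_max=1.0, T_precheck=3.0):
    """eqs. (16)-(18): keep only obstacles whose closest approach to the
    agent is near in both time and space. T_precheck is an example value."""
    dp = np.asarray(p_a, float) - np.asarray(p_b, float)
    dv = np.asarray(v_a, float) - np.asarray(v_b, float)
    n2 = dv @ dv
    if n2 == 0.0:
        # identical velocities: the distance stays constant
        return bool(np.linalg.norm(dp) < v_max * T_precheck)
    t_min = -(dp @ dv) / n2                       # eq. (16)
    d_min = np.linalg.norm(dp + dv * t_min)       # eq. (17)
    return bool(0 < t_min < 2 * T_precheck
                and d_min < v_max * T_precheck)   # eq. (18)
```

an obstacle directly ahead at moderate range passes the check; the same obstacle placed far away fails the time condition and is ignored.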
figure 4. precheck algorithm.

in this method, a cost function was used in which different aspects influenced the motion-planning method (safety, speed). this algorithm is now extended with a heading parameter, which provides information on the orientation of the agent in relation to the target position, and with the changing uncertainty parameter. at every time step, the nearest distance is calculated between the vo cone and the investigated velocities:

$$D_\mathrm{s}(\mathbf{v}_\mathrm{a}, VO_i) = \min \left\{ \min_{\mathbf{v}_{VO} \in VO_i} \lVert \mathbf{v}_\mathrm{a} - \mathbf{v}_{VO} \rVert, \, D_{\max} \right\} \,, \tag{19}$$

where $D_{\max}$ is the maximum distance that should be considered and $\mathbf{v}_{VO}$ is the nearest point of the vo cone. the $D_\mathrm{s}(\mathbf{v}_\mathrm{a})$ value must be normalised into the interval $[0,1]$. the $C_\mathrm{s}(\mathbf{v}_\mathrm{a})$ term, which will be used in the cost function later, can then be calculated:

$$C_\mathrm{s}(\mathbf{v}_\mathrm{a}, VO_i) = 1 - \frac{D_\mathrm{s}(\mathbf{v}_\mathrm{a}, VO_i)}{D_{\max}} \,. \tag{20}$$

$C_\mathrm{g}(\mathbf{v}_\mathrm{a})$ will also form part of the cost function:

$$C_\mathrm{g}(\mathbf{v}_\mathrm{a}) = \frac{\lVert \mathbf{p}_\mathrm{a} + \mathbf{v}_\mathrm{a} T_\mathrm{s} - \mathbf{p}_\mathrm{goal} \rVert}{\lVert \mathbf{p}_\mathrm{a}(0) - \mathbf{p}_\mathrm{goal} \rVert} \,, \tag{21}$$

where $T_\mathrm{s}$ is the sampling time, $\mathbf{p}_\mathrm{a}(0)$ is the first position of the robot at the beginning of the motion and $\mathbf{p}_\mathrm{goal}$ is the position of the target. $C_\mathrm{g}(\mathbf{v}_\mathrm{a})$ denotes how far the robot will be from the target if it uses the selected velocity; it is normalised by the distance between the first position and the desired position.

in this novel algorithm, the prior method is extended by the changing $\alpha_i(t)$ parameters (calculated at every time step) for the different obstacles with respect to the reliability of the obstacles' velocity and position information. the orientation of the agent can also play a role in the cost function. the heading parameter of the cost function can be calculated as follows:

$$C_\mathrm{h}(\mathbf{v}_\mathrm{a}) = \frac{|\mathrm{angle}_{RG} - \mathrm{angle}_{IV}(\mathbf{v}_\mathrm{a})|}{\pi} \,, \tag{22}$$

where $\mathrm{angle}_{RG}$ refers to the angle of the vector from the robot position to the target position and $\mathrm{angle}_{IV}(\mathbf{v}_\mathrm{a})$ denotes the angle of the investigated velocity vector of the agent.
using the difference between these angles, the heading parameter can be calculated (angles are defined in the global coordinate system). the whole cost function can then be determined using the different parameters:

$$\mathrm{cost}(\mathbf{v}_\mathrm{a}) = \sum_{i=1}^{m} \alpha_i(t) \, C_\mathrm{s}(\mathbf{v}_\mathrm{a}, VO_i) + \beta_\mathrm{d} \, C_\mathrm{g}(\mathbf{v}_\mathrm{a}) + \beta_\mathrm{h} \, C_\mathrm{h}(\mathbf{v}_\mathrm{a}) \,, \tag{23}$$

where $\beta_\mathrm{d}$ is the distance parameter, $\beta_\mathrm{h}$ is the heading parameter and $\alpha_i(t)$ denotes the actual calculated uncertainty parameter of an obstacle. the velocity vector with the minimal cost value is selected for the agent. the different parameters of the cost function have a significant impact on the velocity selection, as will be presented in section 5.

4.3. velocity selection based on the extended apf method

the apf method can be extended using the $\alpha_i(t)$ and $\beta_\mathrm{d}$ parameters, which were introduced in (15) and (23). the repelling forces must be calculated for every obstacle. the constant $\eta$ parameter must be substituted with the changing uncertainty parameter, which has a value for every obstacle:

$$\mathbf{F}_{\mathrm{ar}i} = \alpha_i(t) \sqrt{\frac{1}{D_{\mathrm{ra}i}} + \frac{1}{D_{\mathrm{ra}\max}}} \, \frac{\mathbf{AR}_i}{D_{\mathrm{ra}i}^2} \,, \tag{24}$$

where the notations are the same as introduced in (3) (with the index $i$ referring to the $i$-th obstacle). the attractive force, $\mathbf{F}_\mathrm{rc}$, can be calculated in the same way as in (5) by using the $\beta_\mathrm{d}$ parameter instead of $\xi$:

$$\mathbf{F}_\mathrm{rc} = \beta_\mathrm{d} \, \mathbf{RC} \,. \tag{25}$$

the final force that influences the movement of the agent can be calculated as the sum of the attractive force and the repelling forces, as presented in (6). after calculating this force, the selectable velocity vector can be calculated using (7), (8) and (9).

5. simulation results

in this section, the simulation results are discussed based on the changing uncertainties.

5.1. two static obstacles

in the first example, there are two static obstacles in the workspace of the agent.
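the cost-function-based selection of eq. (23) can be sketched as follows. the $C_\mathrm{s}$ terms of eqs. (19)-(20) are assumed to be computed elsewhere and passed in per candidate and per obstacle; $\beta_\mathrm{h} = 0.3$ follows the examples of section 5, while $\beta_\mathrm{d} = 1$ is an assumption of the sketch:

```python
import numpy as np

def select_velocity(candidates, Cs_terms, alphas, p_a, p_a0, p_goal, Ts,
                    beta_d=1.0, beta_h=0.3):
    """pick the candidate velocity with minimal cost, eq. (23).
    Cs_terms[j][i]: C_s of candidate j w.r.t. obstacle i (eqs. 19-20),
    assumed precomputed; alphas[i]: uncertainty parameter of obstacle i."""
    p_a, p_a0, p_goal = (np.asarray(x, float) for x in (p_a, p_a0, p_goal))
    d_goal = p_goal - p_a
    angle_RG = np.arctan2(d_goal[1], d_goal[0])   # robot-to-goal angle
    d0 = np.linalg.norm(p_a0 - p_goal)            # normaliser of eq. (21)
    best, best_cost = None, np.inf
    for j, v in enumerate(candidates):
        v = np.asarray(v, float)
        C_g = np.linalg.norm(p_a + v * Ts - p_goal) / d0      # eq. (21)
        angle_IV = np.arctan2(v[1], v[0])
        C_h = abs(angle_RG - angle_IV) / np.pi                # eq. (22)
        cost = (sum(a * c for a, c in zip(alphas, Cs_terms[j]))
                + beta_d * C_g + beta_h * C_h)                # eq. (23)
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

with no vo penalty, the candidate heading straight at the goal wins on both the distance and the heading terms.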
initially, using the introduced cost-function-based vo method, the velocity vector that is exactly in the middle of the two obstacles is selected because the two obstacles have the same uncertainties. this situation is presented in figure 5, in which the vos are presented as grey areas, the blue circle is the selected velocity vector, the target is depicted by a black x, each of the agent's velocity vectors identified through the motion-planning algorithm is represented by a red x and the robot is the red circle.

the changes and the magnitudes of the velocities of the obstacles do not influence the calculation of the uncertainties because the two obstacles are static. so, in this case, only the distances between the obstacles and the agent have an impact on the calculation. the velocity vector between the two obstacles is selected, and the distances between the agent and the two obstacles are the same during the motion, resulting in the same uncertainties for both obstacles, as presented in figure 6 (at each time step, the uncertainties for the obstacles can be seen next to each other).

figure 5. first example: velocity selection based on the extended vo method.

this example was also tested using both the original apf method and the extended apf method. using the original apf method that was introduced in section 2, the agent cannot reach the target position. this is because at the beginning of the motion, the apf method results in a velocity vector that causes a motion in the opposite direction from the target position, as can be seen in figure 7 (the values of the constant parameters were $\eta = 2$ and $\xi = 0.1$). $\mathbf{F}_{\mathrm{ar}1}$ and $\mathbf{F}_{\mathrm{ar}2}$ represent the repelling forces for the different obstacles, and $\mathbf{F}_{\mathrm{ar,sum}}$ is the summation of the repelling forces (the other notations are the same as those introduced in previous sections).
eventually, the summation of the forces becomes a force in the direction of the target position, and the agent then moves towards the target. this sequence is repeated, resulting in an oscillation without reaching the target position. the example was also tested with the extended apf method introduced in section 4.3. in this case, the reactive motion-planning method can generate a collision-free motion towards the target position. the different forces and the selected velocity vector can be seen in figure 8. using this method, the obstacles' uncertainties change in the same way as seen in figure 6 because the agent executes its motion along the same path between the two obstacles.

5.2. one moving and one static obstacle

in this example, the first obstacle is a moving obstacle, and the second obstacle is a static obstacle. if the agent is at a considerable distance, it can select a velocity vector in line with the target position. then, as it gets closer to the obstacles, it selects a velocity vector that results in a manoeuvre next to the static obstacle because the corresponding probability is higher. the results of the velocity selection in this case are depicted in figure 9; the path of the robot is presented as a black line.

in this example, the uncertainties of the obstacles are not the same as in the previous example because the static obstacle has a smaller uncertainty throughout the motion, as presented in figure 10. it can be seen that in the first step, the difference between the obstacles' uncertainties is not significant. this is because the moving obstacle has a small magnitude of velocity and the distances between the obstacles and the robot are the same at the first step.

figure 6. first example: two static obstacles; changing uncertainties during motion. the uncertainties are the same for both obstacles.
figure 7. first example: original apf method.
figure 8. first example: extended apf method.
figure 9. second example: velocity selection based on the extended vo method.

this example was also simulated with the extended apf method. because the first obstacle has a nonzero velocity vector, this obstacle has a higher uncertainty when using the motion-planning algorithm. however, because the magnitude of the velocity vector is small compared with the summation of the forces, the agent executes its motion almost directly in line with the target position, as presented in figure 11. so, in this case, there is a difference between the result of the extended vo method and that of the extended apf method, but both of them result in a collision-free motion between the agent and the environment, and using both methods, the agent can reach the target position. the uncertainties of the obstacles can also be calculated during the motion of the agent with the extended apf method, although the result will be slightly different from the result of the extended vo method.

5.3. three obstacles in front of each other

in the next example, there are three obstacles in front of each other with different velocities (the first obstacle is static, and the others are moving). figure 13 shows the velocity selection of the vo method next to the first obstacle, and figure 14 presents the velocity selection next to the second obstacle. it can be seen that a velocity further from the obstacle is selected at the second obstacle because it has a higher velocity. the higher the velocity of the obstacle, the bigger the uncertainty for the obstacle, as depicted in figure 15, in which the uncertainty parameters are presented for the three obstacles. after passing an obstacle, its uncertainty is reduced. this figure shows that not all the obstacles need to be considered throughout the motion, only those that influence the motion of the robot and that have fulfilled the precheck algorithm at the time of sampling.
the $\beta_\mathrm{h}$ parameter plays a significant role in the cost-function-based velocity selection as a factor in the target-reaching strategy. in the previous examples, the value of $\beta_\mathrm{h}$ was 0.3. if this parameter has a higher value, it has a larger impact on the motion than the uncertainties of the obstacles, as presented in figure 16. in this case ($\beta_\mathrm{h} = 0.6$), the agent executes the motion as close to the obstacle as the collision-free motion-planning algorithm allows.

figure 10. second example: one moving (first) and one static (second) obstacle; changing uncertainties during motion using the extended vo method. the moving obstacle has a higher uncertainty parameter.
figure 11. second example: velocity selection based on the extended apf method.
figure 12. second example: one moving (first) and one static (second) obstacle; changing uncertainties during motion using the extended apf method. the moving obstacle has a higher uncertainty parameter.
figure 13. third example: velocity selection at the first obstacle using the extended vo method.

the values of the parameters depend on the usage of the algorithm; different parameter values generate different results using the collision-free motion-planning algorithm. however, it is not possible to find a solution that takes into account every aspect of the motion-planning problem, so a sub-optimal solution must always be calculated. this example can also be tested using the extended apf method. only the first obstacle should be considered for the motion-planning algorithm because (18) holds only for the first obstacle (the second and third obstacles are at a distance from the agent). in this case, the algorithm selects a velocity vector for the agent that results in a movement in the exact direction of the first obstacle (because the summation of the forces is in line with the movement of the agent).
so, after a few steps, the agent reaches and collides with the first obstacle. in this example, the extended apf method cannot guarantee that the robot reaches the target collision free.

5.4. standard vo method and the novel motion-planning method

the introduced novel motion-planning algorithm was also compared with the original vo method because the basic concept of the motion-planning algorithm is based on this algorithm. the comparison used an example with two moving obstacles in the robot's workspace; one of them has a changing velocity vector, which results in a higher uncertainty in the motion planning of the robot. figure 17 shows the final path of the robot using the different motion-planning algorithms. it can be seen that using the standard vo method, which provides the fastest target-reaching concept, the agent executes a tangential motion next to the first obstacle (this is also presented in a video [27]). however, if the uncertainties in the measurement data are also considered, the target-reaching task is solved by generating a path that is relatively far from the first obstacle (the one with the changing velocity). the motion of the robot is also presented in a video [28].

figure 18 represents the distances between the agent and the obstacles using the different motion-planning algorithms. as has already been mentioned, when using the standard vo method, there is a time point at which the distance between the robot and the obstacle is zero. in the case of the novel motion-planning algorithm, the uncertainties can be taken into account, so the agent can move safely towards the target position. with the standard vo method, however, even a tiny measurement or system noise in the process means that the tangential movement will immediately cause a collision. so, in such situations, it is better to use the novel motion-planning algorithm, which generates a collision-free motion for the agent in every situation.
figure 14. third example: velocity selection at the second obstacle using the extended vo method.
figure 15. third example: three obstacles in front of each other with different velocities; changing uncertainties during motion using the extended vo method.
figure 16. the resulting motion paths of the robot with different heading parameters; in the first example $\beta_\mathrm{h} = 0.3$, in the second example $\beta_\mathrm{h} = 0.6$.
figure 17. final paths of the robot using the standard vo method and the novel motion-planning algorithm.

6. coppeliasim simulation environment

coppeliasim (vrep version) is suitable for testing robotic arms as well as holonomic and non-holonomic mobile robots using reactive motion-planning algorithms. different types of obstacles can occur in the workspace of the robot, and a wide range of obstacles can be used in the simulation environment. the results of the introduced methods were tested in the coppeliasim simulation environment, as presented in [29]. the agent is an omnidirectional robot (blue). this type of mobile robot is often used because it can execute its motion in any direction from its actual position. in the example in figure 19, there are two static obstacles (grey cylinders) in the workspace of the agent. the main goal of the robot is to reach the target position without colliding with the two obstacles, as presented in section 5.1.

7. conclusions

in this paper, novel motion-planning methods were introduced using the basics of the vo and the apf methods. the mobile robot was able to execute collision-free motion planning after calculating the changing uncertainties of the obstacles. these uncertainties depend on the magnitudes of the velocity vectors of the obstacles, the distances between the obstacles and the robot, and the changes in the obstacles' velocities. the vo-based method can generate collision-free motion using a cost-function-based optimisation method.
the basic apf method was also extended by using the uncertainty and distance parameters in the algorithm. the extended apf method can generate a better solution than the original apf method, but there are some situations in which it cannot provide a target-reaching solution. in these cases, the cost-function-based vo method was able to guarantee that the target was reached. the parameters for the apf method could also be calculated in another way, thus solving the local minima problem [30], [31].

the introduced algorithm could be implemented in a real robotic system using an omnidirectional mobile robot. the state estimation of the obstacles that occur in the workspace of the robot could be solved using an extended particle filter algorithm; in this case, the position and the velocity vectors of the obstacles could be estimated for every sampling time [32]. to achieve this, a lidar sensor can be used.

acknowledgement

this paper was supported by the únkp-20-3 new national excellence programme of the ministry for innovation and technology from the national research, development and innovation fund. the research reported in this paper and carried out at the budapest university of technology and economics has been supported by the national research, development and innovation fund (tkp2020 institution excellence sub-programme, grant no. bme-ie-mifm) based on the charter issued by the national research, development and innovation office under the auspices of the ministry for innovation and technology.

references

[1] s. panov, s. koceski, metaheuristic global path planning algorithm for mobile robots, int. j. of reasoning-based intelligent systems 7 (2015), p. 35. doi: 10.1504/ijris.2015.070910
[2] p. m. hsu, c. l. lin, m. y. yang, on the complete coverage path planning for mobile robots, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 945-963. doi: 10.1007/s10846-013-9856-0
[3] e. masehian, y.
katebi, sensor-based motion planning of wheeled mobile robots in unknown dynamic environments, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 893-914. doi: 10.1007/s10846-013-9837-3
[4] m. g. mohanan, a. salgoankar, a survey of robotic motion planning in dynamic environments, robotics and autonomous systems 100 (2018), pp. 171-185. doi: 10.1016/j.robot.2017.10.011
[5] p. raja, s. pugazhenthi, optimal path planning of mobile robots: a review, int. j. of physical sciences 7 (9), february 2012, pp. 1314-1320. doi: 10.5897/ijps11.1745
[6] s. petti, t. fraichard, safe motion planning in dynamic environments, proc. of the ieee rsj int. conf. intell. robot. syst., edmonton, canada, 2-6 august 2005, pp. 2210-2215.
[7] t. fraichard, h. asama, inevitable collision states - a step towards safer robots, advanced robotics 18 (2004), pp. 1001-1024. doi: 10.1163/1568553042674662
[8] l. martinez-gomez, t. fraichard, collision avoidance in dynamic environments: an ics-based solution and its comparative evaluation, proc. of the ieee int. conf. on robotics and automation, kobe, japan, 12-17 may 2009, pp. 100-105. doi: 10.1109/robot.2009.5152536
[9] d. fox, w. burgard, s. thrun, the dynamic window approach to collision avoidance, ieee robot. autom. mag. 4 (1997), pp. 23-33. doi: 10.1109/100.580977

figure 18. distances between the robot and the obstacles using the different motion-planning algorithms.
figure 19. coppeliasim simulation environment [29].

[10] m. seder, i.
petrovic, dynamic window based approach to mobile robot motion control in the presence of moving obstacles, proc. of the int. conf. on robotics and automation, rome, italy, 10-14 april 2007, pp. 1986-1991. doi: 10.1109/robot.2007.363613
[11] m. mujahed, d. fischer, b. mertsching, admissible gap navigation: a new collision avoidance approach, robotics and autonomous systems 103 (2018), pp. 93-110. doi: 10.1016/j.robot.2018.02.008
[12] n. hacene, b. mendil, autonomous navigation and obstacle avoidance for a wheeled mobile robots: a hybrid approach, int. j. of computer applications 81 (2013), pp. 34-37. online [accessed 20 september 2021] https://research.ijcaonline.org/volume81/number7/pxc3892285.pdf
[13] p. fiorini, z. shiller, motion planning in dynamic environments using velocity obstacles, int. j. of robotics research 17 (1998), pp. 760-772. doi: 10.1177/027836499801700706
[14] j. van den berg, m. lin, d. manocha, reciprocal velocity obstacles for real-time multi-agent navigation, proc. of the ieee int. conf. on robotics and automation, pasadena, usa, 19-23 may 2008, pp. 1928-1935. doi: 10.1109/robot.2008.4543489
[15] j. alonso-mora, p. beardsley, r. siegwart, cooperative collision avoidance for nonholonomic robots, ieee transactions on robotics 34 (2018), pp. 404-420. doi: 10.1109/tro.2018.2793890
[16] b. gopalakrishnan, a. k. singh, m. kaushik, k. m. krishna, d. manocha, prvo: probabilistic reciprocal velocity obstacle for multi robot navigation under uncertainty, proc. of the ieee int. conf. on intelligent robots and systems, vancouver, canada, 24-28 september 2017, pp. 1089-1096. doi: 10.1109/iros.2017.8202279
[17] d. claes, d. hennes, k. tuyls, w. meeussen, collision avoidance under bounded localization uncertainty, proc. of the ieee int. conf. on intelligent robots and systems, vilamoura, portugal, 7-12 october 2012, pp. 1192-1198. doi: 10.1109/iros.2012.6386125
[18] d. hennes, d. claes, w. meeussen, k.
tuyls, multi-robot collision avoidance with localization uncertainty, proc. of the 11th int. conf. on autonomous agents and multiagent systems, aamas 2012: innovative applications track, valencia, spain, 4-8 june 2012, pp. 672-679. [19] e. masehian, y. katebi, robot motion planning in dynamic environments with moving obstacles and target, int. j. of mechanical systems science and engineering 1 (2007), pp. 107112. online [accessed 20 september 2021] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.193. 6430&rep=rep1&type=pdf [20] e. masehian, y. katebi, sensor-based motion planning of wheeled mobile robots in unknown dynamic environments, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 893914. doi: 10.1007/s10846-013-9837-3 [21] a. masoud, a harmonic potential approach for simultaneous planning and control of a generic uav platform, j. intell. robot. syst. 65 (2012), pp. 153-173. doi: 10.1007/s10846-011-9570-8 [22] n. malone, h. t. chiang, k. lesser, m. oishi, l. tapia, hybrid dynamic moving obstacle avoidance using a stochastic reachable set-based potential field, ieee transactions on robotics 33 (2017), pp. 1124-1138. doi: 10.1109/tro.2017.2705034s [23] h. chiang, n. malone, k. lesser, m. oishi, l. tapia, path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments, proc. of the ieee int. conf. on robotics and automation, seattle, usa, 26-30 may 2015, pp. 2347-2354. doi: 10.1109/icra.2015.7139511 [24] b. kovács, g. szayer, f. tajti, m. burdelis, p. korondi, a novel potential field method for path planning of mobile robots by adapting animal motion attributes, robotics and autonomous systems 82 (2016), pp. 24-34. doi: 10.1016/j.robot.2016.04.007 [25] z. gyenes, e. g. szadeckzy-kardoss, rule-based velocity selection for mobile robots under uncertainties, proc. of the 24th int. conf. on intelligent engineering systems, reykjavík, iceland, 8-10 july 2020, pp. 127-132. 
doi: 10.1109/ines49302.2020.9147191 [26] z. gyenes, e. g. szadeckzy-kardoss, motion planning for mobile robots using the safety velocity obstacles method, proc. of the 19th international carpathian control conference, szilvásvárad, hungary, 28-31 may 2018, pp. 389-394. doi: 10.1109/carpathiancc.2018.8473397 [27] akta imeko standard vo, online [accessed 17th september 2021] https://youtu.be/jp6m3ngwjpk [28] akta imeko uncertainties, online [accessed 17th september 2021] https://youtu.be/mxz1om3bzys [29] omnirobot moves between two static obstacles, online [accessed 17th september 2021] https://www.youtube.com/watch?v=hjcntdks6o&feature=youtu.be [30] m. g. park, m. c. lee, a new technique to escape local minimum in artificial potential field based path planning, ksme int. j. 17 (2003), pp. 1876-1885. doi: 10.1007/bf02982426 [31] g. guerra, d. efimov, g. zheng, w. perruquetti, avoiding local minima in the potential field method using input-to-state stability, control engineering practice 55 (2016), pp.174-184. doi: 10.1016/j.conengprac.2016.07.008 [32] z. gyenes, e. g. szadeckzy-kardoss, particle filter-based perception method for obstacles in dynamic environment of a mobile robot, proc. of the 25th ieee international conference on methods and models in automation and robotics, miedzyzdroje, poland, 23-26 august 2021, p. 6 (in press). 
The Contribution of Colour Measurements to the Archaeometric Study of Pottery Assemblages from the Archaeological Site of Adulis, Eritrea

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, pp. 1-8

Abraham Zerai Gebremariam1,2, Patrizia Davit3, Monica Gulmini3, Lara Maritan4, Alessandro Re1,2, Roberto Giustetto2,5, Serena Massa6, Chiara Mandelli6, Yohannes Gebreyesus7, Alessandro Lo Giudice1,2

1 Department of Physics, University of Turin, Via Pietro Giuria 1, 10125 Turin, Italy
2 National Institute of Nuclear Physics, Turin Section, Via Pietro Giuria 1, 10125 Turin, Italy
3 Department of Chemistry, University of Turin, Via Pietro Giuria 7, 10125 Turin, Italy
4 Department of Geosciences, University of Padua, Via Giovanni Gradenigo 6, 35131 Padua, Italy
5 Department of Earth Sciences, University of Turin, Via Valperga Caluso 35, 10125 Turin, Italy
6 Department of Archaeology, Catholic University of the Sacred Heart of Milan, Largo Gemelli 1, 20123 Milan, Italy
7 Northern Red Sea Regional Museum of Massawa, P.O. Box 33, Massawa, Eritrea

Section: Research Paper
Keywords: colorimetry; pottery; fabric; Adulis
Citation: Abraham Zerai Gebremariam, Patrizia Davit, Monica Gulmini, Lara Maritan, Alessandro Re, Roberto Giustetto, Serena Massa, Chiara Mandelli, Yohannes Gebreyesus, Alessandro Lo Giudice, The contribution of colour measurements to the archaeometric study of pottery assemblages from the archaeological site of Adulis, Eritrea, Acta IMEKO, vol. 11, no. 1, article 17, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-17
Section Editor: Fabio Santaniello, University of Trento, Italy
Received March 7, 2021; in final form March 15, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754511 (PhD Technologies Driven Sciences: Technologies for Cultural Heritage - T4C)
Corresponding author: Abraham Zerai Gebremariam, e-mail: abraham.zeraigebremariam@unito.it

Abstract: Colorimetric evaluation was applied to archaeological pottery from the ancient port city of Adulis, on the Red Sea coast of Eritrea. The pottery samples belong to the Ayla-Aksum, Late Roman Amphora 1 and dolia classes, which had never been analysed by means of this approach. The survey consisted of colorimetric measurements on different parts of the ceramic bodies, to understand how these data relate to the overall fabric classification. Differences in the colorimetric parameters provided helpful information on both the manufacturing technology and the fabric classification. Subtle variations in the colour coordinates were detected and interpreted so as to ascribe the related differences. The approach showed that the information provided by colour measurements can be partially correlated with observations from stereomicroscopy and optical microscopy, allowing a more in-depth description of the fabrics in the study of archaeological pottery.

1. Introduction

Colour is an important characteristic of archaeological materials, yet quantitative and reproducible measurements are needed for a meaningful analysis and classification of artefacts. A review of work on the colorimetric study of ancient ceramics points to the versatility of this method over the traditional, solely qualitative description of colour based on the Munsell colour charts [1]. The CIELAB (CIE L*, a*, b*) colour space of the International Commission on Illumination has been identified as a suitable system for standardizing and comparing colorimetric features, and its utility has been demonstrated in many studies of different ceramic objects. Colorimetric surveys on pottery, ranging from the evaluation of the colour of the exposed interior and exterior surfaces and of the core, to colour measurements on powdered samples and to the collection of CIELAB colour data from digital photographs, have been extensively reported in the literature [2], [3].
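As a concrete illustration of the CIELAB system referred to above, the sketch below converts CIE XYZ tristimulus values (which a spectrophotometer derives by weighting the measured reflectance spectrum with the illuminant power distribution and the observer colour-matching functions) into L*, a*, b* coordinates. This is the standard CIE definition, not code from the study; the default white point is the conventional one for illuminant D65 with the 10° standard observer.

```python
def xyz_to_lab(X, Y, Z, white=(94.811, 100.0, 107.304)):
    """CIE 1976 L*a*b* coordinates from tristimulus values.

    The default white point corresponds to illuminant D65 with the
    10-degree standard observer, the conditions used in this survey.
    """
    def f(t):
        d = 6 / 29
        # Cube root above the small-t threshold, linear segment below it
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

    fx, fy, fz = (f(v / n) for v, n in zip((X, Y, Z), white))
    L = 116 * fy - 16      # lightness
    a = 500 * (fx - fy)    # green-red component
    b = 200 * (fy - fz)    # blue-yellow component
    return L, a, b

# A perfect white diffuser reproduces the white point: L* = 100, a* = b* = 0
print(xyz_to_lab(94.811, 100.0, 107.304))
```

In this space, equal numerical distances correspond approximately to equal perceived colour differences, which is what makes quantitative comparison of sherds meaningful.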
In general, the archaeometric study of ancient pottery mainly aims at locating the production centres, identifying the specific technology involved in pottery production and understanding the distribution patterns. Provenance is often tackled by determining the chemical and/or mineralogical composition and by comparing it with the composition of reference groups. Mirti and Davit [3] demonstrated that colour measurements may help in assessing provenance and/or production technology. The general agreement is that sherds made from different clays, or even from the same clay but processed in different ways, show different colour curves, while ceramic materials obtained from a single clay and processed following a similar procedure should display a similar curve [4]. Colour measurements on archaeological pottery have been applied particularly for the evaluation of the original firing conditions [4]-[16]. On the other hand, the colour of ceramic bodies is affected not only by temperature, but also by firing duration and atmosphere, as well as by the final porosity, the mineralogical composition of the raw materials and the nature of the inclusions. Experimental analyses aimed at evaluating the colour change with temperature by re-firing allowed the detection of colour alterations in a specific clay body, thus enabling an evaluation of the possible behaviour of different sherds [3], [4]. It has also been noted that colorimetry, when coupled to other characterization techniques, can provide important results for examining the variability of raw materials and artefacts. The nucleation, growth and grain size of hematite during firing, the redox reactions inherent to the firing process and the influence of the mineralogical composition are all parameters that contribute to colour in ceramics, and their consequences can be inferred by using mineralogical, micro-structural and chemical approaches [7]-[10], [13]-[16].
The complexity of the mineralogical and physical-chemical factors influencing the colour of ceramics therefore requires that colorimetry be coupled to other archaeometric techniques. While the dynamics of firing and the microstructural changes affecting colour variations in ceramics can only be understood through detailed analytical approaches, colorimetry can be useful to determine the colour parameters of the paste, allowing a preliminary fabric determination. In this study, pottery assemblages from the archaeological site of Adulis, the primary port of the Aksumite empire in late antiquity on the Red Sea coast of Eritrea, were investigated by means of colorimetry. The site of Adulis was principally involved in the major developments in the history of the northern Horn of Africa from the first millennium CE [17]-[19]. Long-standing trade relations and cultural interaction with the Romans and the Mediterranean through the Red Sea are attested in the later periods of the first millennium CE. Comparative analysis of architectural and ceramic typological sequences attests that the site was continuously inhabited from the first-second up to the sixth and early seventh century CE, and intensely occupied in the 5th-6th century CE [18], [19]. The Ayla-Aksum, Late Roman Amphora 1 and dolia samples considered in this colorimetric survey represent imports from the Mediterranean world in the later phases of the occupation of the site. Studies on these pottery assemblages from the northern Horn of Africa are rare [20]-[22], and colorimetric studies had never been applied to them previously. In this work, we evaluate colorimetric parameters on different parts of the ceramic bodies and on powdered samples to assess the usefulness of colorimetry in pottery studies. The aim of this survey is to establish a preliminary fabric classification, possibly to be used for provenance determination.
The results and limits of the colorimetric evaluation are also reported here. The following sections describe the colorimetric assessment (considering different sampling procedures) and the use of such data to define the variability existing among the different classes.

2. Materials and methods

The colorimetric survey adopted for this study included different sampling procedures to evaluate the colorimetric parameters of archaeological pottery representing the Ayla-Aksum, Late Roman Amphora 1 and dolia classes. All the samples considered in this survey were collected during the 2019 fieldwork of the ongoing Italian-Eritrean excavations at Adulis [23]-[25]. On the one hand, a set of samples was selected from a collection of 49 small sherds under archaeometric investigation: some of them were not represented in this survey because they were too small or showed colour variations possibly due to post-depositional alterations. For the examined samples, the colorimetric evaluation was carried out on both the interior and exterior surfaces of the ceramic bodies. The analysed surface was about 8 mm in diameter; thus, the detected coordinates relate to an average area of about 50 mm².

Figure 1. General view and stereo-microscope images of some Ayla-Aksum samples: (a), (e) sample 1.3(2); (b), (f) sample 2.0; Late Roman Amphora 1: (c), (g) sample 3.9; and dolia: (d), (h) sample 4.9.

Table 1. All colorimetric values (L*; a*; b*) for exterior and interior surfaces as well as cross-sections and powders of the Ayla-Aksum and Late Roman 1 typologies.
(For each sample, the first L*; a*; b* triplet refers to the powder; the following triplets are the replicate measurements on the exterior and interior surfaces and, where present, on the cross-section, in this order. "Powder only" marks samples measured only as powders; "no section" and "no surfaces" mark cells without measurements.)

Ayla-Aksum:
1.1: 64.16; 10.84; 17.28 (powder only)
1.2: 70.06; 1.88; 14.99 | 62.20; 3.14; 18.12 | 61.11; 4.11; 19.23 | 45.80; 3.36; 11.95 | 47.64; 2.93; 11.11 | no section
1.3(1): 66.96; 6.21; 17.60 | 55.27; 7.77; 21.53 | 55.30; 7.82; 21.71 | 56.87; 6.03; 18.54 | 53.37; 6.15; 17.69 | no section
1.3(2): 65.21; 8.41; 17.25 | 54.62; 6.10; 17.61 | 54.11; 5.89; 16.94 | 53.10; 5.70; 17.61 | 52.77; 7.90; 18.43 | 52.85; 7.95; 19.14 | 54.44; 8.26; 19.51 | 62.05; 10.65; 19.22 | 60.25; 10.65; 18.94 | 59.39; 10.52; 18.78
1.4(1): 68.95; 5.47; 16.53 (powder only)
1.4(2): 75.10; 1.86; 15.09 | 63.34; 5.67; 17.02 | 61.71; 5.39; 17.98 | 63.21; 5.16; 18.82 | 66.23; 4.32; 19.75 | 66.45; 4.74; 20.23 | 71.91; 1.52; 15.70 | 67.71; 1.59; 17.04
1.5: 76.53; 2.74; 20.06 (powder only)
1.6: 61.39; 15.35; 20.43 (powder only)
1.7(1): 63.13; 12.57; 18.44 (powder only)
1.7(2): 68.47; 3.92; 15.35 (powder only)
1.8: 60.49; 14.40; 19.61 (powder only)
2.0: 69.92; 6.89; 17.29 | 66.61; 5.15; 19.63 | 69.84; 3.98; 19.98 | 68.91; 3.83; 19.42 | 70.78; 3.36; 19.16 | 68.65; 3.52; 19.06 | no section
3.3: 71.59; 1.04; 11.49 (powder only)
3.4: 67.73; 1.27; 14.16 (powder only)
3.5: 69.19; 9.18; 17.87 (powder only)
3.6: 68.51; 10.71; 18.45 | 66.54; 4.69; 19.69 | 61.90; 4.18; 18.99 | 63.35; 4.95; 22.04 | 61.92; 5.96; 20.94 | 60.44; 5.48; 20.37 | no section
3.7: 58.55; 19.19; 22.41 (powder only)
3.8: 57.24; 16.01; 19.36 (powder only)
C01: 65.68; 8.50; 18.00 (powder only)
C04: 67.38; 5.97; 17.24 (powder only)
C05: 67.81; 9.43; 18.40 (powder only)

Late Roman 1:
1.9: 60.50; 9.70; 16.72 | 44.77; 5.97; 17.34 | 43.13; 8.32; 17.84 | 47.34; 8.39; 18.62 | 51.60; 7.82; 20.66 | 49.25; 7.93; 16.37 | 49.69; 8.13; 17.91 | no section
2.1: 65.58; 6.11; 15.48 | 53.48; 7.91; 18.10 | 52.82; 8.30; 17.85 | 50.37; 8.96; 18.32 | 45.14; 7.22; 15.89 | 48.91; 7.64; 17.82 | no section
2.2: 67.28; 4.56; 13.36 (powder only)
2.3: 65.58; 6.11; 15.48 | 54.16; 10.87; 21.16 | 55.82; 11.88; 22.28 | 55.49; 12.91; 23.16 | 51.55; 13.11; 24.90 | 52.26; 13.83; 24.82 | 53.19; 13.54; 24.78 | 58.75; 9.16; 18.02 | 58.38; 9.58; 18.26 | 58.22; 9.08; 17.70
2.4: 63.69; 12.86; 20.75 (powder only)
3.0: 67.49; 9.55; 18.06 | 58.75; 5.69; 19.91 | 58.39; 6.38; 18.95 | 64.28; 6.87; 21.27 | 63.50; 6.67; 20.81 | no section
3.1: 67.97; 8.27; 17.38 | 64.58; 7.63; 21.78 | 61.31; 8.23; 21.86 | 63.10; 6.39; 21.61 | 61.15; 6.68; 21.58 | no section
3.2: 67.91; 10.44; 18.81 (powder only)
3.9: 64.34; 3.64; 12.06 (powder only)
4.1: 68.64; 6.54; 16.65 | 58.44; 7.15; 17.93 | 58.69; 7.08; 17.20 | 58.67; 7.19; 19.44 | 56.15; 7.33; 18.59 | 58.98; 7.68; 19.15 | 62.56; 8.25; 20.00 | 62.01; 8.44; 21.01 | 59.95; 8.66; 20.50
C02: 67.28; 9.89; 18.40 (powder only)
C03: 67.02; 5.86; 15.75 (powder only)
C06: 62.50; 6.36; 14.56 | no surfaces | section: 58.96; 8.96; 20.01 | 57.92; 9.61; 19.84 | 58.22; 9.58; 20.59
C07: 65.99; 7.09; 15.06 | no surfaces | section: 46.31; 9.32; 17.89 | 47.21; 10.74; 18.50 | 47.38; 9.91; 18.43
C08: 65.92; 6.97; 14.86 | no surfaces | section: 46.79; 9.10; 17.94 | 49.51; 8.56; 18.10 | 48.05; 9.62; 19.02
C09: 71.15; 4.76; 13.40 | no surfaces | section: 46.62; 9.37; 16.14 | 44.70; 9.26; 15.02 | 45.52; 9.55; 15.66

For some sherds the thickness was also adequate to evaluate the colour parameters on their cross-sections. The cleanest areas of the surfaces were selected, to estimate at best the true colour of the ceramic body. In all cases, depending on the dimensions of the fragment and on the suitability of the analysed surfaces, 2 or 3 measurements were carried out on different areas of the surfaces to estimate the spread of the data for every considered sample. Finally, colorimetric measurements were performed on the powders of all the samples representing the Ayla-Aksum, Late Roman Amphora 1 and dolia classes. A fragment was cut from each sample, polished to avoid contamination, and crushed using an agate mortar and pestle to obtain 100 mg of powder. The resulting powders thus represent a mix of the components of the exterior, interior and cross-section parts of the ceramic bodies, including temper grains, which can also affect the colour measurements due to compositional variability and differences in particle size [10]. The measurements were performed by placing these powders in cylindrical cells of optical fused silica with transmittance > 95 % and no spectral features in the whole visible range.
A Minolta CM-508i portable spectrophotometer was used, equipped with a pulsed xenon arc lamp and an integrating sphere to diffusely illuminate the specimen surface, which was viewed at an angle of 8° to the normal (d/8 geometry). The light reflected by the sample surface (specular component included) was detected by a silicon photodiode array, which provided the reflectance spectrum in the 400 nm - 700 nm range with a wavelength pitch of 20 nm. The spectrophotometer was set to provide the mean value of three consecutive measurements. Colour coordinates were expressed in the CIE L*a*b* system, using the illuminant D65 (average solar light) and a 10° viewing angle. In this system, the L* coordinate is related to colour lightness, while a* and b* together account for hue and saturation [4], [6]. In Figure 1, representative Ayla-Aksum, Late Roman Amphora 1 and dolia samples (scale in cm) and the stereo-microscopic images of their fresh cuts are shown. It is worth noting that many factors can contribute to inconsistencies in the data, both within the same sample and between different samples, when untreated archaeological materials are analysed. Phenomena such as imperfect geometries of the analysed surfaces, the presence of contaminants (not easily detectable to the naked eye), alterations due to post-depositional processes, and porosity can all contribute to this variability in the measurements (see Figure 1).

3. Results and discussion

3.1. Colorimetric measurements on exterior, interior and cross-sections

Table 1 and Table 2 report the colorimetric coordinates for all measured samples, while Table 3 indicates the largest ΔE*ab value (ΔE*ab,max) for each sample, computed according to the following equation:

ΔE*ab,max = √[(a1* − a2*)² + (b1* − b2*)²] ,  (1)

Table 2. All colorimetric values (L*; a*; b*) for exterior and interior surfaces as well as cross-sections and powders of the dolia typology.
(First triplet: powder; following triplets: exterior, interior and, where present, cross-section replicates; "no section" = cross-section not measured.)

Dolia:
1.0: 54.33; 11.66; 13.16 | 47.46; 12.55; 20.16 | 47.42; 12.70; 20.20 | 48.80; 13.01; 20.75 | 49.76; 12.11; 17.82 | 49.24; 12.29; 17.32 | 48.76; 12.61; 17.56 | 46.20; 15.06; 18.40 | 47.51; 15.54; 18.76 | 47.64; 14.99; 17.93
4.0: 59.74; 19.19; 22.41 | 57.39; 9.53; 22.41 | 54.48; 9.01; 20.38 | 55.54; 8.82; 20.30 | 49.81; 15.14; 19.52 | 50.40; 15.34; 20.34 | 50.70; 15.34; 21.16 | no section
4.8: 66.75; 4.66; 12.72 | 59.01; 6.75; 16.66 | 55.63; 7.02; 17.79 | 56.02; 7.76; 18.99 | 58.46; 6.53; 17.86 | 52.99; 6.68; 18.90 | 55.65; 5.74; 16.75 | 59.10; 6.11; 15.35 | 59.11; 5.79; 15.32 | 59.12; 6.26; 15.24
4.9: 61.15; 6.96; 14.28 | 43.06; 7.87; 18.66 | 48.06; 8.65; 19.25 | 47.06; 8.24; 16.31 | 50.67; 8.75; 17.63 | 47.06; 8.24; 17.63 | 52.44; 12.23; 19.43 | 55.23; 11.18; 20.83

Table 3. Association to the fabric groups identified by petrography (FG) and maximum values of ΔE*ab,max from colorimetry for each sample. The average per FG is also shown when there is more than one measurement.

Ayla-Aksum (FG | sample | exterior | interior | in-out):
A | 1.2 | 1.47 | 0.94 | 8.21
A | 1.3(1) | 0.19 | 0.86 | 3.34
A | 1.4(2) | 1.87 | 0.64 | 1.87
B | 1.3(2) | 0.70 | 1.14 | 4.35
B | 2.0 | 1.22 | 0.54 | 1.85
B | 3.6 | 3.15 | 0.75 | 3.15
A average | 1.2 | 0.8 | 4.5
B average | 1.7 | 0.8 | 3.1

Late Roman 1 (FG | sample | exterior | interior | in-out):
A | 1.9 | 2.74 | 4.29 | 4.29
B | 2.1 | 1.07 | 1.98 | 2.99
B | 2.3 | 2.86 | 0.72 | 4.71
B | 3.0 | 1.18 | 0.50 | 2.37
B | 3.1 | 0.61 | 0.29 | 1.86
C | 4.1 | 2.24 | 0.66 | 2.24
B average | 1.4 | 0.9 | 3.0

Dolia (FG | sample | exterior | interior | in-out):
A | 1.0 | 0.75 | 0.56 | 3.50
A | 4.0 | 2.23 | 1.65 | 6.58
A1 | 4.8 | 2.54 | 2.35 | 3.02
B | 4.9 | 0.98 | 1.42 | 2.97
A average | 1.49 | 1.11 | 5.04

where a1*, a2* and b1*, b2* are the colorimetric coordinates showing the maximum differences within the set of measurements for the exterior and interior surfaces of each sample. Lightness (L*) is reputed to be a less suitable parameter than hue and saturation (expressed by a* and b*) and was therefore not considered in this formulation [3].
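Equation (1) and the Table 3 quantities can be reproduced directly from the tabulated coordinates. The sketch below is illustrative code, not part of the original study: it computes the largest ΔE*ab within one set of (a*, b*) measurement pairs, or between two sets (the "in-out" case); the values used are the exterior and interior readings of Ayla-Aksum sample 1.2 from Table 1.

```python
from itertools import combinations, product
from math import hypot

def delta_e_ab_max(set1, set2=None):
    """Largest Eq. (1) colour difference (L* deliberately excluded).

    set1/set2 are lists of (a*, b*) pairs. With one set, all pairs
    within it are compared; with two sets, all cross pairs are.
    """
    pairs = combinations(set1, 2) if set2 is None else product(set1, set2)
    return max(hypot(a1 - a2, b1 - b2) for (a1, b1), (a2, b2) in pairs)

# Ayla-Aksum sample 1.2 (Table 1): (a*, b*) of the replicate readings
ext = [(3.14, 18.12), (4.11, 19.23)]   # exterior surface
inn = [(3.36, 11.95), (2.93, 11.11)]   # interior surface

print(round(delta_e_ab_max(ext), 2))       # 1.47 (Table 3, exterior)
print(round(delta_e_ab_max(inn), 2))       # 0.94 (Table 3, interior)
print(round(delta_e_ab_max(ext, inn), 2))  # 8.21 (Table 3, in-out)
```

The three results match the exterior, interior and in-out entries reported for sample 1.2 in Table 3, which confirms the reading of the equation.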
The reported values of ΔE*ab,max are quite low in each dataset, indicating that the colours of both the interior and exterior surfaces of the pottery samples are quite homogeneous. Moreover, the data from the interior surfaces appear less spread than those from the exterior ones: for example, the average values of ΔE*ab,max for the Ayla-Aksum samples are 1.2 to 1.7 for the exterior surfaces (depending on the fabric) and 0.8 for the interior ones. To highlight this behaviour, the bivariate plots of the L*, a* and b* parameters for all measurements are shown in Figure 2 for a selected number of samples (one for each fabric). It is quite evident that differences can arise in the colour coordinates depending on which side of the ceramic body is measured (Table 3). For example, in the Ayla-Aksum samples ΔE*ab,max is lower than 3.15 for the exterior surface and lower than 1.14 for the interior one. A similar behaviour is observed in the Late Roman Amphora 1 samples (with the sole exception of an anomalous interior value for sample 1.9) and in the dolia samples. The measurements on the cross-sections (not reported in Table 3 but shown in Figure 2) are particularly homogeneous for all classes. This is probably due to favourable experimental conditions, related to the uncontaminated and flat surfaces of the sections. When the values obtained for the exterior and interior surfaces, the cross-sections and the powdered samples are compared, the colorimetric values for the cross-sections seem in many instances to be closer to those obtained for the powders. However, some discrepancies in the colorimetric measurements of the cross-sections could also be related to irregular geometries.

Figure 2. Colorimetric values for selected samples (left); all colorimetric values for exterior and interior surfaces as well as cross-sections and powders (centre and right).
Moreover, in almost all cases the colours of the exterior and interior surfaces of the ceramic bodies are darker, i.e. the values of L* are lower, than those obtained for the corresponding powders. The survey confirms that discrepancies in the measurements on the exterior and interior surfaces, and in some cases on the cross-sections, are quite common when untreated archaeological samples are concerned. This sampling problem can be linked to different phenomena. Further treatment of the samples, either by re-firing them at different temperatures to evaluate colour changes or by extracting powders from their fragments, can however sometimes compensate for measurement inconsistencies.

3.2. Colorimetric measurements on powdered samples

For the colorimetric measurements on powders, 21 Ayla-Aksum, 15 Late Roman Amphora 1 and 4 dolia samples were considered. In particular, the dispersion of the colorimetric values within a specific class of pottery was examined. Such a dispersion for the Ayla-Aksum samples (Figure 3) clearly indicates that colorimetric information can be useful for discriminating different fabrics within a given class. The data dispersion in these samples indicates a colour variation from creamy (lower a* values) to a reddish hue (higher a* values). The former is typical of the samples belonging to fabric A of the Ayla-Aksum amphorae; the latter pertains to fabric B. Such a differentiation might be related to different production technologies (morphological and/or microstructural changes) and perhaps to compositional differences too. The distinction between these samples enabled by the colorimetric parameters further complements the observation of the different fabrics identified for the Ayla-Aksum amphorae by petrography.
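The creamy-versus-reddish separation just described can be checked numerically from the published data. The snippet below is an illustrative computation (not one performed in the paper): it groups the powder a* values of Table 1 by the petrographic fabric labels of Table 3 for the Ayla-Aksum class and compares the group means.

```python
from statistics import mean

# Powder a* values (Table 1) for the Ayla-Aksum samples whose
# petrographic fabric group is listed in Table 3
powder_a_star = {
    "A": {"1.2": 1.88, "1.3(1)": 6.21, "1.4(2)": 1.86},
    "B": {"1.3(2)": 8.41, "2.0": 6.89, "3.6": 10.71},
}

for fabric, samples in powder_a_star.items():
    print(f"fabric {fabric}: mean a* = {mean(samples.values()):.2f}")
# Fabric A (creamy hue) shows a markedly lower mean a* (3.32)
# than fabric B (reddish hue, 8.67), consistent with Figure 3.
```

A single coordinate computed this way is of course only a coarse summary; the overlap discussed below is why colorimetry alone cannot replace petrography.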
Colorimetry can thus be useful for a preliminary fabric classification when coupled to typological classification, petrography and stereomicroscopy. It should be noted, however, that the colour parameters defined for a homogenised paste (powders) only allow a preliminary fabric determination for each sample, rather than providing information about its composition or provenance. In some cases, moreover, the fabric classification extrapolated from petrography does not match the distinction made by the colorimetric evaluation. This was observed in the trends discerned from the colorimetric evaluation of the powdered samples of the Late Roman Amphora 1 and dolia classes (Figure 4). Such evidence might indicate that, although the original clay used for manufacturing should be similar, the addition of one or more tempers might account for the fabric variability defined by the petrographic analyses. In this respect, the mix design adopted in the production of the ceramics can contribute to colour changes [10]. Therefore, in order to draw reliable conclusions it is strictly necessary to link the colorimetric observations to textural, microstructural and chemical studies. The results of this study prove that while colorimetry can assist in defining fabrics and typological classifications, it needs to be complemented with other approaches for more detailed information to be achieved. Furthermore, colorimetry can be significantly useful for assessing the inter-fabric variability of samples belonging to specific classes. The mere collection of colorimetric parameters for each sample considered in this study could not, if considered per se, establish sharp distinctions when comparing the fabrics defined for the Ayla-Aksum, Late Roman Amphora 1 and dolia classes.
This limitation of colorimetric evaluations can be seen in Figure 4, where the overlap of the colorimetric data for the different samples belonging to the three classes of pottery considered in this study prevents any firm conclusion. Nevertheless, the survey remains relevant, as it allows, in many cases, a preliminary distinction of the fabric variability within a specific typological class of pottery.

4. Conclusions

Colorimetric observations were carried out on Ayla-Aksum, Late Roman Amphora 1 and dolia pottery samples collected from the excavations at Adulis, in order to check potential groupings with respect to the fabric description obtained by means of stereomicroscopy and petrography.

Figure 3. Colorimetric values for powdered samples. Colours refer to the fabric classification reported in Table 3.

Figure 4. Colorimetric values for powdered samples (Ayla-Aksum, Late Roman and dolia classes).

In this respect, this study showed that it is possible to correlate the colorimetric values to specific fabric attributions achieved by means of traditional microscopy approaches. Different sampling procedures were adopted to collect the colorimetric data, in order to thoroughly understand the limitations and strengths of the approach. The exterior and interior surfaces of the ceramic bodies, as well as their cross-sections, were considered for the untreated samples, while powders were extracted from many samples in order to obtain homogeneous specimens for the survey. Subtle differences in the colour measurements due to these different sampling conditions were interpreted as potential discriminating parameters for establishing fabrics.
The variability of the data collected on the exterior and interior surfaces of the ceramic bodies, as well as on their cross-sections and powders, indicates that various phenomena should be considered when interpreting these data, particularly for untreated samples. The computation of ΔE*ab,max allowed us to assess subtle variations and/or inconsistencies in the colorimetric measurements on the exterior and interior surfaces, as well as on the cross-sections. Marked differences in these values can support extrapolations, while feeble differences between samples belonging to the same fabric can hardly be used to postulate a differentiation between them. Still, the limited number of analysed samples, coupled with the detected measurement discrepancies (due to several factors, such as imperfect geometries of the surfaces, post-depositional alterations and perhaps also porosity), makes it necessary to increase the statistical reliability of the colorimetric approach applied to the classes considered in this study. Nevertheless, in many cases the attribution of samples to distinct groups was possible based on the colorimetric evaluation, and it proved consistent with the classification previously determined by petrography. This observation indicates that colorimetric measurements can usefully complement petrographic studies for an in-depth fabric description. The correspondence of the colorimetric groupings with the petrographic observations is a further indication that the information from colorimetry can support provenance and/or technological studies of archaeological pottery. On the other hand, when the colorimetric information cannot parallel the petrographic information (as seen in a few cases in this study), a detailed textural and chemical study, as well as a micro-structural understanding of the ceramic body, becomes necessary.
in conclusion, different parts of the ceramic body offer a variety of sampling decisions for colorimetric evaluation with non-invasive procedures, and thus the objective comparison of colour through a quantitative analysis can overcome issues related to typological classification. moreover, this survey showed that colorimetric measurements could be useful, at least in some cases, to ascribe preliminary fabric determination when coupled to complementary techniques – such as optical microscopy. such an information could be used to understand – at least partly – the technological processes. as well as to help in tracing the provenance of a given artefact (assuming that objectively attained colour measurements could be correlated to information obtained from these complementary techniques). the inherent limitations of this approach have also been highlighted, particularly to deduce colour variations due to mineralogical, chemical, and micro-structural differences a subject that needs to be dealt with, in the future direction of this research. acknowledgments the authors wish to warmly thank the commission of culture and sports of the state of eritrea, northern red sea museum of massawa, and centro ricerche sul deserto orientale (ce.r.d.o.) for supporting this research. we acknowledge here also the funding from the european union’s horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 754511 (phd technologies driven sciences: technologies for cultural heritage – t4c. references [1] m. giardino, r. miller, r. kuzio, d. muirhead, analysis of ceramic colour by spectral reflectance, american antiquity, 63(3), (1998), pp. 477-483. [2] j. r. mcgrath, m. beck, m. e. hill, jr. replicating red: analysis of ceramic slip colour with cielab colour data, journal of archaeological science: reports14(2017), pp.432-438. doi: 10.1016/j.jasrep.2017.06.020 [3] p. mirti, p. 
davit, new developments in the study of ancient pottery by colour measurement, journal of archaeological science 31 (2004), pp. 741-751. doi: 10.1016/j.jas.2003.11.006 [4] p. mirti, on the use of colour coordinates to evaluate firing temperatures of ancient pottery, archaeometry 40 (1998), pp. 45-57. doi: 10.1111/j.1475-4754.1998.tb00823.x [5] m. daszkiewicz, l. maritan, experimental firing and re-firing, in: the oxford handbook of archaeological ceramic analysis, a. m. w. hunt (ed.), oxford handbooks in archaeology, 2016, isbn: 9780199681532. [6] m. bayazit, i. işık, a. issi, e. genç, spectroscopic and thermal techniques for the characterization of the first millennium ad potteries from kuriki, turkey, ceramics international 40(9) (2014), pp. 14769-14779. doi: 10.1016/j.ceramint.2014.06.068 [7] m. j. feliu, m. c. edreira, j. martin, application of physical-chemical analytical techniques in the study of ancient ceramics, analytica chimica acta 502 (2004), pp. 241-250. doi: 10.1016/j.aca.2003.10.023 [8] l. nodari, e. marcuz, l. maritan, c. mazzoli, u. russo, hematite nucleation and growth in the firing of carbonate-rich clay for pottery production, journal of the european ceramic society 27 (2007), pp. 4665-4673. doi: 10.1016/j.jeurceramsoc.2007.03.031 [9] c. germinario, g. cultrone, a. de bonis, f. izzo, a. langella, m. mercurio, v. morra, a. santoriello, s. siano, c. grifa, the combined use of spectroscopic techniques for the characterization of late roman common wares from benevento (italy), measurement 114 (2018), pp. 515-525. doi: 10.1016/j.measurement.2016.08.005 [10] a. de bonis, g. cultrone, c. grifa, a. langella, a. leone, m. mercurio, v. morra, different shades of red: the complexity of mineralogical and physico-chemical factors influencing the colour of ceramics, ceramics international 43 (2017), pp. 8065-8074. doi: 10.1016/j.ceramint.2017.03.127 [11] y. yang, m. feng, x. ling, z. mao, c. wang, x. sun, m.
guo, microstructural analysis of the colour-generating mechanism in ru ware, modern copies, and its differentiation with jun ware, journal of archaeological science 32 (2005), pp. 301-310. doi: 10.1016/j.jas.2004.09.007 [12] j. molera, t. pradell, m. vendrell-saz, the colours of ca-rich ceramic pastes: origin and characterization, applied clay science 13 (1998), pp. 187-202. doi: 10.1016/s0169-1317(98)00024-6 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 8 [13] l. nodari, l. maritan, c. mazzoli, u. russo, sandwich structures in the etruscan-padan type pottery, applied clay science 27 (2004), pp. 119-128. doi: 10.1016/j.clay.2004.03.003 [14] v. valanciene, r. siauciunas, j. baltusnikaite, the influence of mineralogical composition on the colour of clay body, journal of the european ceramic society 30 (2010), pp. 1609-1617. doi: 10.1016/j.jeurceramsoc.2010.01.017 [15] r. mentesana, v. kilikoglou, s. todaro, p. m. day, reconstructing change in firing technology during the final neolithic-early bronze age transition in phaistos, crete. just the tip of the iceberg?, archaeological and anthropological sciences 11 (2019), pp. 871-894. doi: 10.1007/s12520-017-0572-8 [16] y. maniatis, the emergence of ceramic technology and its evolution as revealed with the use of scientific techniques, in: from mine to microscope: advances in the study of ancient technology, a. shortland, i. freestone, t. rehren (eds.), oxbow books, 2009, eisbn: 978-1-78297-279-2, pp. 11-28. [17] c. zazzaro, e. cocca, a.
manzo, towards a chronology of the eritrean red sea port of adulis (1st - early 7th century ad), journal of african archaeology 12(1) (2014), pp. 43-73. doi: 10.3213/2191-5784-10253 [18] d. peacock, l. blue (eds.), the ancient red sea port of adulis, eritro-british expedition, 2004-5, oxbow books, oxford, 2007, isbn: 9781842173084. [19] c. zazzaro, the ancient red sea port of adulis and the eritrean coastal region, bar international series, vol. 2569, oxford, 2013, isbn: 978-1-4073-1190-6. [20] r. k. pedersen, the byzantine-aksumite period shipwreck at black assarca island, eritrea, azania xliii (2008), pp. 77-94. doi: 10.1080/00672700809480460 [21] m. m. raith, r. hoffbauer, h. euler, p. a. yule, k. damgaard, the view from zafar: an archaeometric study of the aqaba pottery complex and its distribution in the 1st millennium ce, zora 6 (2013), pp. 320-350. [22] s. massa, a. de bonis, v. morra, v. guarino, in: s. massa (ed.), adulis project 2015 report (2015), pp. 85-90 (unpublished). [23] s. massa (ed.), adulis project 2018 report (unpublished). [24] s. massa (ed.), adulis project 2019 report (unpublished). [25] s. massa (ed.), adulis project 2020 report (unpublished). a virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation acta imeko issn: 2221-870x december 2021, volume 10, number 4, 103 - 110 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 103 mattia alessandro ragolia1, filippo attivissimo1, attilio di nisio1, anna m. l.
lanzolla1, marco scarpetta1 1 department of electrical and information engineering, polytechnic of bari, via e. orabona 4, 70125 bari, italy section: research paper keywords: electromagnetic tracking systems; image guided surgery; surgical navigation; real-time virtual platform; system design citation: mattia alessandro ragolia, filippo attivissimo, attilio di nisio, anna maria lucia lanzolla, marco scarpetta, a virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation, acta imeko, vol. 10, no. 4, article 18, december 2021, identifier: imekoacta-10 (2021)-04-18 section editors: umberto cesaro and pasquale arpaia, university of naples federico ii, italy received october 27, 2021; in final form december 5, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: mattia alessandro ragolia, e-mail: mattiaalessandro.ragolia@poliba.it 1. introduction tracking systems are widely used in many applications, providing information about the localization of a target object in a defined area. the medical field is among those benefiting most from research on tracking systems. different kinds of technologies are used to develop such systems, depending on accuracy requirements, tracking environment, and tracking distance. applications range from optical and inertial systems for motion tracking and rehabilitation [1], [2] up to more complex systems for surgical navigation. the latter is a procedure that relies on tracking systems to guide the surgeon during interventions; it reduces the invasiveness of the intervention, thus improving its accuracy and safety and reducing the risk of complications and the hospitalization time [3]-[6].
surgical navigation mainly relies on optical and electromagnetic (em) technologies [7]. optical tracking systems are very accurate and reliable, and they are employed in many medical applications [8]-[11], but they constantly require a direct line of sight, which prevents their use in the presence of obstacles during intracorporal tracking. electromagnetic tracking systems (emtss) overcome this limitation [12]: a very small magnetic sensor, which measures the magnetic field produced by a known field generator (fg), is inserted into the surgical instrument (e.g., a flexible instrument such as an endoscope or a needle [6]), and the position of the sensor is estimated by means of a suitable algorithm. the intraoperative localization of the instruments is shown on a screen in front of the surgeon, and the anatomical area is reconstructed by merging information obtained from different medical imaging techniques, such as computed tomography, ultrasound, and nuclear magnetic resonance [13], [14], acquired during the pre-operative phase. abstract electromagnetic tracking systems (emtss) are widely used in surgical navigation, helping to improve the outcome of diagnosis and surgical interventions by providing the surgeon with the real-time position of surgical instruments during medical procedures. however, particular effort has been dedicated to the development of efficient and robust algorithms to obtain an accurate estimation of the instrument position for distances from the magnetic field generator beyond 0.5 m. indeed, the main goal is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team. studies are currently being conducted to optimize the magnetic field generator configuration (both the geometrical arrangement and the electrical properties), since it affects tracking accuracy.
in this paper, we propose a virtual platform for assessing the performance of emtss for surgical navigation, providing real-time results and statistics and allowing instruments to be tracked in both real and simulated environments. simulations and experimental tests are performed to validate the proposed virtual platform by employing it to assess the performance of a real emts. the platform offers a real-time tool to analyse emts components and field generator configurations for a deeper understanding of emts technology, thus supporting engineers during system design and characterization. the main limitation of em technology is the short tracking distance, which is generally no longer than 0.5 m from the fg in current commercial systems, due to the reduced amplitude of the magnetic field with distance and the high sensitivity to em interference and magnetic field distortions, which limit tracking accuracy far from the fg [15]. tracking distance and accuracy are crucial, and they should be taken into account during the development of a surgical navigation system. many aspects affect system performance, and engineers and manufacturers should consider all of them, since even a small increase in accuracy or distance is a valuable achievement in this field. in this paper, we propose a virtual platform for assessing the performance of emtss for surgical navigation, showing in real time how the various sources of error affect the accuracy of the position estimation. this platform provides a useful tool for supporting engineers during the design and prototyping of emtss.
the paper is structured as follows: the main sources of error in emtss, and the importance of knowing them during system development, are discussed in section 2; the virtual platform, developed to provide a tool to analyse system performance during the prototyping phase, is illustrated in section 3; in section 4, the emts prototype architecture is described and the developed virtual platform is evaluated by simulated and experimental tests performed on the emts prototype; conclusions are drawn in section 5. 2. sources of error modelling the magnetic field and the various sources of error is crucial in many applications, allowing ad-hoc algorithms for automated error compensation to be implemented [16], [17]. emts tracking accuracy can be affected by several sources of error, which can be divided into static errors and dynamic errors [7], [12], [15]. static errors occur when the sensor is placed in a given position, maintaining a fixed orientation. they are in turn classified as follows. • systematic errors: these are due to distortions of the magnetic field generated by i) the presence of metal objects in the surrounding environment, in which the variable field can induce eddy currents (mainly in ac systems) that generate secondary magnetic fields adding to the main magnetic field; ii) ferromagnetic materials which, immersed in the main field, orient their domains, causing a magnetization that modifies the field lines; and iii) the power supply currents of the emts itself or of other electronic medical devices present in the operating room, which can also distort the magnetic field. these errors can be reduced by appropriate calibration techniques [18]. • random errors (also referred to as jitter [12], [19]): these errors, mainly due to noise, reduce the repeatability of the system.
dynamic errors change over time, and they are mainly caused by variations in external em fields due to the movement of external objects made of conductive, ferromagnetic, or electrical materials, which cause field distortions that are extremely difficult to compensate. the movement of the sensor itself is also a source of dynamic error, depending on its speed. it must also be considered that tracking accuracy depends on the design of the fg and on the choice of the position reconstruction algorithm. moreover, the non-ideality of the electronic components of the tracking system itself affects the performance of the system. in fact, the generated field is never perfectly stable, due to the intrinsic limits of the fg, and the measurement and acquisition process is subject to noise, which cannot be totally eliminated. for the reduction of random and dynamic errors, suitable filtering techniques and synchronization of the sampling frequency are particularly useful [20]. the implementation of a kalman filter can also significantly reduce random errors [21]. 3. virtual platform the virtual platform is developed in labview® (by national instruments corp.), software widely used to control and monitor industrial equipment and processes and to create test and measurement systems [22], [23]. it offers real-time feedback on tracking accuracy (figure 1) and provides an intuitive and user-friendly interface. it is designed to be used in combination with a robot that moves the sensor and provides accurate position references. the platform is composed of six main sections, which are described in the following subsections. the functioning of the platform is illustrated in figure 2. the model of the emts is defined in an external file and imported into the platform, and the user defines the trajectory for the sensor movement.
two different modes are available: i) in the experimental mode, the platform connects to the daq device, and the signal induced in the magnetic sensor is acquired as the sensor is moved by the robot along the defined trajectory; ii) in the simulation mode, the signal of the magnetic sensor is simulated by employing a model of the magnetic field. in both cases, noise can be added to the signal. finally, the position of the sensor is estimated by means of a suitable reconstruction algorithm, providing a real-time 3d representation and error statistics. 3.1. 3d view and real-time tracking statistics tracking systems provide the surgeon with a real-time estimate of the sensor position, which is shown on a screen in front of the surgeon, where the patient's anatomical area is also displayed. 2d and 3d views are commonly used; in particular, the latter is more difficult to interpret but seems to guarantee greater precision [24]. hence, the platform provides a 3d view of tracking, where the actual and estimated positions are displayed. moreover, real-time feedback on system performance becomes particularly useful when analysing how a system responds to different inputs. many design errors can be quickly avoided thanks to real-time feedback. hence, real-time plots and statistics of the position tracking errors are provided during the experiments. in particular, the position error along each cartesian axis, computed as the difference between the estimated position and the one provided by the robot, is shown on a graph, and its mean value and standard deviation are displayed. for example, the peak error in figure 1 suggests a deeper exploration of the corresponding region of space. all tracking results and statistics can be easily exported for further elaboration in matlab or other software. 3.2. emts model import • the number and arrangement of the transmitting coils, as well as their electrical properties, highly affect system performance [25].
often the transmitting coils must be placed inside a well-defined space due to practical needs, such as the configuration of the clinical environment, weight limitations, or application requirements. moreover, the tracking volume is usually proportional to the dimension of the fg (i.e., to the magnetic field intensity) [12]. hence, the platform allows importing a mat-file (binary matlab® file) containing the geometrical arrangement and the electrical properties of the transmitting coils of the fg, in order to test different fg layouts, as well as the parameters of the sensor coil. in section 4.2, two different fg configurations will be compared to illustrate this functionality. • the assessment of tracking accuracy is a mandatory step in emts development, and different types of protocols have been defined [12], most commonly employing phantoms such as board and cube phantoms, as well as moving phantoms to assess dynamic performance. in addition, robots are also used to move the sensor, providing accurate position references and allowing automatic and repeatable tests; on the other hand, robotic components can cause interference in the tracking volume, and they are quite expensive. in [26], the authors used a carbon fibre rod, held by the robot gripper, with the magnetic sensor positioned at the tip, in order to distance the sensor from the metallic components of the robot. the kinematics of the simulated robot (shown in figure 1) is based on a real robot, model rv-2fb-d from mitsubishi, which was employed in this research to move the sensor; however, the platform allows importing a file containing the model (i.e., the geometry of joints and links) of any robot. both the fg and the robot are displayed in the 3d scene of the platform by means of the labview robotics toolkit. the fg shown in figure 1 represents the emts described in section 4.3. 3.3.
reconstruction algorithm and magnetic field model different techniques can be used to reconstruct the position of the sensor in an emts based on frequency-division multiplexing. figure 1. virtual platform developed in labview, during the execution of a simulation. on the left: the model of the melfa robot is shown during the movement; the green point is the position estimate provided by the algorithm, and the fg reference system is shown in red. on the top: the noise settings section and the trajectory definition are shown. at the bottom: real-time statistics of the position error are provided. figure 2. scheme of the functioning of the virtual platform. in [27], a suitable interpolation algorithm was used to reconstruct the position of the sensor in a small space. the sensor is placed in 𝑀 different calibration positions, and the voltages from the sensor are measured; then, position estimation is based on interpolation between the calibration points using delaunay triangulation and linear interpolation. this technique requires measurements of the magnetic field on a dense grid to reach adequate accuracy, it does not allow extrapolation, and it is time-consuming; thus, it can be applied only to small regions of the tracking volume. other algorithms are based on i) a model of the magnetic field obtained by approximating the coils as magnetic dipoles, or ii) a model obtained by considering the mutual inductance between the transmitting coil and the sensor coil, both treated as circular filaments [28]. both models require knowledge of the geometrical parameters of the coils and of the electrical quantities (i.e., current and voltage) of the transmitting coils, but they do not need as many measurements as the interpolation method.
moreover, they can be used to compute the magnetic field (and therefore the induced voltage) in the whole tracking volume, allowing experiments to be performed in a simulated environment. hence, the platform makes it possible to choose an arbitrary reconstruction algorithm (developed in matlab), or to track the sensor simultaneously with two or more reconstruction techniques, to compare their performance in different scenarios. in this paper, to model the magnetic field produced by the fg and to reconstruct the sensor position, we employ the dipole model explained in [21], [26], [28]. it is obtained by considering the magnetic moment generated by the i-th transmitting coil, expressed as

$\boldsymbol{m}_{tx,i} = m_{tx,i}\,\hat{\boldsymbol{n}}_{tx,i}, \quad m_{tx,i} = N_{tx,i}\,S_{tx,i}\,I_i, \quad S_{tx,i} = \pi r_{tx,i}^2, \quad (1)$

where $\hat{\boldsymbol{n}}_{tx,i}$ is the versor orthogonal to the surface $S_{tx,i}$ of the i-th transmitting coil, and $r_{tx,i}$, $N_{tx,i}$, and $I_i$ are the coil radius, the number of turns, and the rms value of the excitation current, respectively. the subscript i takes into account the differences of the real parameters among the transmitting coils [26]. the rms magnetic field generated by the i-th transmitting coil at a generic point $\boldsymbol{p}_s = [x, y, z]^{\mathrm{T}}$ is

$\boldsymbol{B}_i(\boldsymbol{p}_s, I_i) = B_i^x\,\hat{\boldsymbol{x}} + B_i^y\,\hat{\boldsymbol{y}} + B_i^z\,\hat{\boldsymbol{z}} = \frac{\mu_0}{4\pi}\,\frac{m_{tx,i}}{d_i^3}\left[3\left(\hat{\boldsymbol{n}}_{tx,i} \cdot \hat{\boldsymbol{n}}_{d,i}\right)\hat{\boldsymbol{n}}_{d,i} - \hat{\boldsymbol{n}}_{tx,i}\right], \quad (2)$

where $d_i = |\boldsymbol{d}_i|$, with $\boldsymbol{d}_i = \boldsymbol{p}_s - \boldsymbol{p}_{tx,i}$ the vector distance between $\boldsymbol{p}_s$ and the centre $\boldsymbol{p}_{tx,i}$ of the i-th transmitting coil, and $\hat{\boldsymbol{n}}_{d,i}$ is its associated versor. if the magnetic flux is considered homogeneous over the surface $S_s$ of the sensing coil, the induced voltage related to the i-th coil can be expressed as

$\tilde{v}_i = 2 \pi f_i\,N_s\,S_s\,\boldsymbol{B}_i \cdot \hat{\boldsymbol{n}}_s, \quad (3)$

where $N_s$ is the number of sensor coil turns and $\hat{\boldsymbol{n}}_s = [\cos(\alpha_s)\cos(\beta_s),\ \sin(\alpha_s)\cos(\beta_s),\ \sin(\beta_s)]^{\mathrm{T}}$ is the versor orthogonal to the sensor surface, with $\alpha_s$ and $\beta_s$ defining the orientation of the sensor coil.
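the forward model of eqs. (1)-(3) is compact enough to sketch in code. the fragment below is an illustrative python re-implementation (the platform itself uses labview and matlab; all coil parameters and poses here are placeholders):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (t·m/a)

def coil_moment(n_turns, radius, i_rms):
    # eq. (1): rms magnetic moment m_tx = n_tx * s_tx * i, with s_tx = pi * r^2
    return n_turns * np.pi * radius ** 2 * i_rms

def dipole_field(p_s, p_tx, n_tx, m_tx):
    # eq. (2): rms field at p_s of a coil centred at p_tx with unit axis n_tx,
    # modelled as a magnetic dipole of moment m_tx
    d = p_s - p_tx
    dist = np.linalg.norm(d)
    d_hat = d / dist
    return MU0 / (4 * np.pi) * m_tx / dist ** 3 * (3 * np.dot(n_tx, d_hat) * d_hat - n_tx)

def sensor_versor(alpha, beta):
    # unit vector orthogonal to the sensor coil surface
    return np.array([np.cos(alpha) * np.cos(beta),
                     np.sin(alpha) * np.cos(beta),
                     np.sin(beta)])

def induced_voltages(p_s, alpha, beta, coils, freqs, n_s, s_s):
    # eq. (3): rms voltage induced in the sensor coil by each transmitting coil;
    # coils is a list of (centre, unit axis, moment) tuples, freqs the excitation frequencies
    n_hat = sensor_versor(alpha, beta)
    return np.array([2 * np.pi * f * n_s * s_s
                     * np.dot(dipole_field(p_s, p_tx, n_tx, m_tx), n_hat)
                     for (p_tx, n_tx, m_tx), f in zip(coils, freqs)])
```

with five transmitting coils this returns the five rms voltage components that the reconstruction algorithm then inverts.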
the position is estimated by minimizing the following cost function [29]:

$F(\boldsymbol{\theta}, \boldsymbol{I}_{tx}) = \|\boldsymbol{v} - \tilde{\boldsymbol{v}}(\boldsymbol{\theta}, \boldsymbol{I}_{tx})\|_2^2, \quad (4)$

which represents the squared error between the induced voltage $\boldsymbol{v}$ measured from the sensor coil and the voltage $\tilde{\boldsymbol{v}}$ obtained by applying (3); the latter depends on $\boldsymbol{\theta} = [\boldsymbol{p}_s^{\mathrm{T}}, \alpha_s, \beta_s]^{\mathrm{T}}$ and on the vector of the currents $\boldsymbol{I}_{tx}$. the minimum of (4), i.e. $\hat{\boldsymbol{\theta}} = \arg\min F(\boldsymbol{\theta}, \boldsymbol{I}_{tx})$, is obtained by using the levenberg-marquardt algorithm; for the details, see [29]. 3.4. experimental or simulation mode the platform allows both simulation and experimental tests to be controlled and performed. simulation mode – the aforementioned models allow experiments to be carried out in a simulated environment, resulting in a valuable tool for emts design. it is possible to define the fg and the sensor coil (section 3.2), the position reconstruction algorithm (section 3.3), and custom trajectories along which to move the sensor. experimental mode – it is possible to define a trajectory, move the robot, and acquire data from the data acquisition (daq) device. the 3d scene shows the real-time movement of the robot, along with the tracked position (the green dot in figure 1). hybrid mode – it is possible to import experimental data acquired during past experiments and to run a simulation test showing the tracking results, also simulating the actual acquisition time of the related experiment. 3.5. noise section the voltage noise in the sensor signal highly affects tracking accuracy. several error sources contribute to the sensor voltage noise, and two main contributions can be considered (all noise components are intended as standard deviations of rms quantities): i) the measurement and acquisition noise $\sigma_{acq}$, and ii) the fg noise $\sigma_B(\boldsymbol{p}_s, \sigma_I)$; the latter depends on the position $\boldsymbol{p}_s$ of the sensor relative to the fg and is due to the excitation current noise $\sigma_I$. the voltage noise $\sigma_v$ can be expressed as

$\sigma_v = \sqrt{\sigma_B^2 + \sigma_{acq}^2}, \quad (5)$

where it has been assumed that $\sigma_B$ and $\sigma_{acq}$ contribute independently.
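the levenberg-marquardt minimisation of eq. (4) can be illustrated with a generic damped gauss-newton loop. the sketch below is illustrative python, not the platform's implementation; in the emts the residual would be $\boldsymbol{v} - \tilde{\boldsymbol{v}}(\boldsymbol{\theta}, \boldsymbol{I}_{tx})$, while here a toy exponential model stands in for it:

```python
import numpy as np

def lm_fit(residual, theta0, iters=100, lam=1e-3, eps=1e-7):
    # levenberg-marquardt-style minimisation of ||residual(theta)||^2:
    # damped gauss-newton steps with marquardt scaling of the regulariser
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = residual(theta)
        jac = np.empty((r.size, theta.size))
        for j in range(theta.size):  # forward-difference jacobian
            t = theta.copy()
            t[j] += eps
            jac[:, j] = (residual(t) - r) / eps
        jtj = jac.T @ jac
        step = np.linalg.solve(jtj + lam * np.diag(np.diag(jtj)), -jac.T @ r)
        theta = theta + step
        if np.linalg.norm(step) < 1e-12:  # converged
            break
    return theta

# toy demonstration: recover (a, b) of y = a * exp(b * x) from synthetic data
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.5 * x)
theta_hat = lm_fit(lambda th: th[0] * np.exp(th[1] * x) - y, [1.0, -1.0])
```

in the platform the parameter vector is $\boldsymbol{\theta} = [\boldsymbol{p}_s^{\mathrm{T}}, \alpha_s, \beta_s]^{\mathrm{T}}$, and gaussian noise with the standard deviation $\sigma_v$ of eq. (5) can be added to the measured voltages to reproduce the behaviour of the noise section.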
note that • $\sigma_{acq}$ is approximately constant in the whole working volume, since it depends on the measurement devices and on the johnson noise of the sensor. experimentally, $\sigma_{acq} \cong 20$ nv has been measured for each frequency component. • $\sigma_B$ depends on the sensor pose and on the excitation currents, hence it is related to $\sigma_I$, and its contribution is higher when the sensor is closer to the fg. moreover, $\sigma_I$ can differ between transmitting coils. experimentally, $\sigma_I \cong 0.07$ ma has been measured as an average value among the transmitting coils. in [29], a technique to compensate the effect of $\sigma_I$ on the position error has been proposed. the effect of these noise components must be considered during simulations. hence, the platform includes a section to set the noise components ($\sigma_{acq}$ is a scalar, $\sigma_I$ is an n×1 vector), to be added in simulations and also during real-time experiments, to investigate how a certain source of error affects tracking accuracy. for instance, a discussion about the selection of the daq device depending on the noise is carried out in section 4.1. 4. validation the proposed platform is suitable for the assessment of virtual emtss during simulation, as well as of developed emts prototypes. in sections 4.1 and 4.2 we employ the platform in a practical case, showing its usefulness in assisting engineers during emts design and characterization, and in section 4.3 we illustrate some tests performed on the real emts. 4.1. daq device selection the platform can support engineers during daq device selection by using the noise section described in section 3.5. when setting up the measurement chain to acquire the signal from the sensor coil, it is important to select the daq device according to the accuracy requirements of the system. the sampling frequency and the noise are two main parameters to be considered [20], as they affect system accuracy.
in particular, the noise floor indicated in the datasheet of the daq device is added to the induced voltage, and thus it directly affects position repeatability and accuracy [20]. in this way, the choice of a low-noise daq device can be evaluated for the purpose of improving performance. in this section, we performed some simulations: the rms induced voltage was simulated by applying (3) and then adding a voltage noise component to simulate the noise floor of the daq device. figure 3 shows the results, where the mean euclidean position error over a linear trajectory (101 points at 600 mm from the fg) is plotted versus a range of selected $\sigma_{acq}$ values, considering a fixed current noise of $\sigma_I = 0.07$ ma. as expected, the error increases with $\sigma_{acq}$; moreover, the behaviour is quite linear. a mean euclidean error below 2 mm is obtained when using a daq device with $\sigma_{acq}$ lower than 40 nv. this information can be particularly useful when choosing components, considering the trade-off between increased cost and required accuracy. 4.2. fg configuration optimization as said in section 3.2, the platform allows testing different fg configurations, to evaluate the influence of the number, arrangement, and electrical properties of the transmitting coils on system performance. in this section we compare the performance of two fg configurations: one representing the emts prototype (figure 1), and one flat fg composed of six coplanar transmitting coils (figure 4). the coils of the two configurations are identical in their geometrical and electrical parameters, except for their position and orientation in space. figure 5 shows the position error along each axis, obtained by keeping the sensor with a fixed orientation along the z-axis and moving it along the x-axis on a linear trajectory of 101 points, with a step of 1 mm, from point (x, y, z) = (-50, 0, 600) to (x, y, z) = (50, 0, 600), considering the reference system of the fgs.
the rms induced voltage was simulated by applying (3), assuming an acquisition frequency of 20 hz. current and voltage noise of $\sigma_I = 0.07$ ma and $\sigma_{acq} = 20$ nv were added to each channel. higher accuracy can be noted in the 5-coil fg configuration, whereas the 6-coil fg exhibits a higher position error, in particular along the x- and y-axes. this suggests that further investigation should be performed to understand the cause of the error in that configuration, so as to avoid it during the realization of the fg. figure 3. mean euclidean position error vs. $\sigma_{acq}$, assuming $\sigma_I = 0.07$ ma. figure 4. flat fg configuration, obtained by modifying the number of transmitting coils and their position and orientation. the fg reference system is shown in red. figure 5. comparison of the position error of the two fg configurations. 4.3. developed emts and experimental test the results obtained from the simulations performed with the platform must be comparable with the ones obtained with an actual fg, in order to validate its effectiveness in assisting system designers. in collaboration with the company masmec biomed (modugno, bari, italy), an emts prototype was developed to obtain an accurate estimation of the sensor pose beyond 0.5 m from the fg, thus improving the state of the art of commercial systems [20], [26], [27], [29]. it consists of three main components (figure 6): i. a magnetic field generator to generate em signals (the same shown in figure 1); ii. a small em sensor coil from the aurora system; iii. a control unit for data acquisition and signal processing. the fg is composed of five transmitting coils, whose arrangement minimizes the mutual inductances. each coil is powered with a sinusoidal current at a different frequency (approximately from 1 to 5 khz), thus generating an ac magnetic field whose amplitude does not exceed 0.2 mt, the threshold value set in the ieee standard c95.1-2005.
the whole magnetic flux generates an induced voltage in the em sensor (to be inserted into the surgical instrument), which is acquired, digitized, and filtered by means of five band-pass filters, obtaining five rms voltage components related to the different excitation frequencies. these components are used to estimate the sensor position by means of a suitable reconstruction algorithm. moreover, the current in each transmitting coil is measured using five hall-effect sensors (la 55-p, lem), for the purpose of i) ensuring the stability of the magnetic field by means of a current control loop, and ii) reducing the error due to variations in the magnetic field, as shown in [29]. the control software was developed in labview®, and the sensor was moved by means of an industrial robot (by mitsubishi), which provided an accurate position reference. an experimental test was performed to show the potential of the proposed platform when tracking a real sensor coil. the em sensor coil was moved by the robot along the trajectory defined in section 4.2, and the rms induced voltage was measured at a rate of 20 hz, as suitable for real-time surgical applications (the sampling frequency of the daq device is set to 50 khz, with 2500 samples per block, thus computing the rms value every 50 ms, i.e., at 20 hz [20]). the same trajectory was used for both simulated and experimental data. for the simulation, $\sigma_I = 0.07$ ma and $\sigma_{acq} = 20$ nv were considered for each channel, as quantified from the experimental data. figure 7 shows the obtained results. the position errors obtained in the simulated and experimental cases are comparable, with mean euclidean position errors of about 1 mm and 2 mm, respectively, which is suitable for many surgical procedures [12]. the difference is due to the approximation of the coils with magnetic dipoles and to the uncertainty in the parameters.
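the rms extraction described above (50 khz sampling, blocks of 2500 samples, one rms value per excitation frequency every 50 ms, i.e. at 20 hz) can be sketched with a single-bin dft per tone, standing in for the prototype's five band-pass filters. this is illustrative python, not the labview implementation, and it assumes each excitation frequency falls on a multiple of the 20 hz block resolution:

```python
import numpy as np

FS = 50_000    # daq sampling frequency (hz)
BLOCK = 2_500  # samples per block -> one rms update every 50 ms -> 20 hz

def tone_rms(block, freqs, fs=FS):
    # rms of each excitation-frequency component of a multi-tone block, via
    # projection onto a quadrature pair at each frequency (single-bin dft);
    # exact when each frequency spans an integer number of cycles in the block
    n = len(block)
    t = np.arange(n) / fs
    rms = []
    for f in freqs:
        c = 2.0 / n * np.sum(block * np.exp(-2j * np.pi * f * t))  # complex amplitude
        rms.append(np.abs(c) / np.sqrt(2.0))                       # amplitude -> rms
    return np.array(rms)
```

applied to each 2500-sample block, this yields the per-frequency rms voltage components that feed the reconstruction algorithm.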
this result validates the performance of the platform in simulating real tracking, providing a valuable tool during system design and prototyping.

5. conclusions

several sources of error affect emtss, and the high accuracy required by surgical applications is strongly influenced by the design and arrangement of the transmitting coils of the fg. many design errors can be quickly identified and avoided by real-time feedback. in this paper we illustrated the main features of a virtual platform, which makes it possible to analyse system performance by adding noise components and simulating error sources; hence, the robustness and accuracy of the system, as well as its weaknesses, can be studied. moreover, it can be particularly useful for system prototyping, by investigating the effects of system parameters (geometrical and electrical ones). the usefulness of the platform was demonstrated by performing simulations related to some practical cases. finally, it was validated by performing tests on a real emts, obtaining a mean euclidean position error of about 2 mm at a distance of 600 mm from the fg, comparable with the position error of 1 mm obtained by simulations, which is suitable for many surgical procedures. further developments will concern an improved graphical user interface, the inclusion of other sources of error (magnetic field distortion, em interferences), as well as a dynamic system model, in order to evaluate the position error in fast-varying conditions; a kalman filter will also be implemented to obtain smooth trajectories. moreover, in this first version the algorithm is developed in matlab, but other programming languages, e.g., python, will be considered in further versions.

figure 6. experimental setup for system characterization.

figure 7. position error from simulated (blue line) and experimental data (red line) obtained by moving the melfa robot along a linear trajectory at a distance of 600 mm from the fg. the sensor is aligned along the z-axis.
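the conclusions above mention a kalman filter as future work for obtaining smooth trajectories. a minimal one-dimensional constant-velocity sketch of that idea follows; it is not the authors' implementation, and all tuning values are assumptions:

```python
import numpy as np

def kalman_smooth_1d(z, dt=0.05, q=1e-3, r=1.0):
    """minimal constant-velocity kalman filter for one position coordinate.
    z: noisy position measurements (e.g. one axis of the tracked sensor,
    sampled at 20 hz -> dt = 0.05 s); q: process noise, r: measurement noise."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])           # initial state
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = zk - (H @ x)[0, 0]
        S = (H @ P @ H.T + R)[0, 0]
        K = (P @ H.T) / S
        x = x + K * y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

applied per axis to the 20 hz position estimates, such a filter would trade a small lag for a visibly smoother trajectory.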
references

[1] l. romeo, r. marani, m. malosio, a. g. perri, t. d'orazio, performance analysis of body tracking with the microsoft azure kinect, 2021 29th mediterranean conference on control and automation (med), puglia, italy, 22-25 june 2021, pp. 572-577. doi: 10.1109/med51440.2021.9480177
[2] m. parvis, s. corbellini, l. lombardo, l. iannucci, s. grassini, e. angelini, inertial measurement system for swimming rehabilitation, 2017 ieee international symposium on medical measurements and applications (memea), rochester, mn, usa, 7-10 may 2017, pp. 361-366. doi: 10.1109/memea.2017.7985903
[3] t. peters, k. cleary, image-guided interventions: technology and application, springer, boston, ma, 2008, isbn 978-1-4899-9733-3. doi: 10.1007/978-0-387-73858-1
[4] e. grimson, m. leventon, l. lorigo, t. kapur, r. kikinis, image guided surgery, scientific american 280(6) (1999), pp. 62-69. doi: 10.1038/scientificamerican0699-62
[5] n. giaquinto, m. scarpetta, m. spadavecchia, g. andria, deep learning-based computer vision for real-time intravenous drip infusion monitoring, ieee sensors journal 21(13) (2021), pp. 14148-14154. doi: 10.1109/jsen.2020.3039009
[6] v. portosi, a. m. loconsole, m. valori, v. marrocco, i. fassi, f. bonelli, g. pascazio, v. lampignano, a. fasano, f. prudenzano, low-cost mini-invasive microwave needle applicator for cancer thermal ablation: feasibility investigation, ieee sensors journal 21(13) (2021), pp. 14027-14034. doi: 10.1109/jsen.2021.3060499
[7] a. sorriento, m. b. porfido, s. mazzoleni, g. calvosa, m. tenucci, g. ciuti, p. dario, optical and electromagnetic tracking systems for biomedical applications: a critical review on potentialities and limitations, ieee reviews in biomedical engineering 13 (2020), pp. 212-232. doi: 10.1109/rbme.2019.2939091
[8] x. chen, n. bao, j. li, y.
kang, a review of surgery navigation system based on ultrasound guidance, proceedings of the ieee international conference on information and automation, shenyang, china, june 2012. doi: 10.1109/icinfa.2012.6246906
[9] l. m. galantucci, g. percoco, f. lavecchia, e. di gioia, non-invasive computerized scanning method for the correlation between the facial soft and hard tissues for an integrated three-dimensional anthropometry and cephalometry, journal of craniofacial surgery 24(3) (2013), pp. 797-804. doi: 10.1097/scs.0b013e31828dcc81
[10] j. sun, m. smith, l. smith, l.-p. nolte, simulation of an optical-sensing technique for tracking surgical tools employed in computer-assisted interventions, ieee sensors journal 5(5) (2005), pp. 1127-1131. doi: 10.1109/jsen.2005.844339
[11] f. ezedine, j.-m. linares, w. m. wan muhamad, j.-m. sprauel, identification of most influential factors in a virtual reality tracking system using hybrid method, acta imeko 2(2) (2013), pp. 20-27. doi: 10.21014/acta_imeko.v2i2.136
[12] a. m. franz, t. haidegger, w. birkfellner, k. cleary, t. m. peters, l. maier-hein, electromagnetic tracking in medicine – a review of technology, validation and applications, ieee transactions on medical imaging 33(8) (2014), pp. 1702-1725. doi: 10.1109/tmi.2014.2321777
[13] f. prada, m. del bene, l. mattei, l. lodigiani, s. debeni, v. kolev, i. vetrano, l. solbiati, g. sakas, f. dimeco, preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery, ultraschall med 36(2) (2015), pp. 174-186. doi: 10.1055/s-0034-1385347
[14] g. andria, f. attivissimo, g. cavone, a. m. l. lanzolla, acquisition times in magnetic resonance imaging: optimization in clinical use, ieee transactions on instrumentation and measurement 58(9) (2009), pp. 3140-3148. doi: 10.1109/tim.2009.2016888
[15] t. koivukangas, j. p. katisko, j. p.
koivukangas, technical accuracy of optical and electromagnetic tracking systems, springerplus 2(90) (2013). doi: 10.1186/2193-1801-2-90
[16] s. goll, a. borisov, interactive model of magnetic field reconstruction stand for mobile robot navigation algorithms debugging which use magnetometer data, acta imeko 8(4) (2019), pp. 47-53. doi: 10.21014/acta_imeko.v8i4.688
[17] e. petritoli, f. leccese, l. ciani, g. s. spagnolo, probe position error compensation in near-field to far-field pattern measurements, 2019 ieee international workshop on metrology for aerospace (metroaerospace), turin, italy, 19-21 june 2019, pp. 214-217. doi: 10.1109/metroaerospace.2019.8869674
[18] v. v. kindratenko, a survey of electromagnetic position tracker calibration techniques, virtual reality 5 (2000), pp. 169-182. doi: 10.1007/bf01409422
[19] y. qi, h. sadjadi, c. t. yeo, k. hashtrudi-zaad, g. fichtinger, electromagnetic tracking performance analysis and optimization, 2014 36th annual international conference of the ieee engineering in medicine and biology society, chicago, il, usa, 26-30 august 2014, pp. 6534-6538. doi: 10.1109/embc.2014.6945125
[20] g. andria, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. a. ragolia, assessment of position repeatability error in an electromagnetic tracking system for surgical navigation, sensors 20 (2020), art. no. 961. doi: 10.3390/s20040961
[21] f. santoni, a. de angelis, i. skog, a. moschitta, p. carbone, calibration and characterization of a magnetic positioning system using a robotic arm, ieee transactions on instrumentation and measurement 68(5) (2019), pp. 1494-1502. doi: 10.1109/tim.2018.2885590
[22] h. shekhar, j. s. jeba kumar, v. ashok, a. vimala juliet, applied medical informatics using labview, international journal on computer science and engineering 2(2) (2010), pp. 198-203.
[23] f. attivissimo, c. guarnieri calò carducci, a. m. l. lanzolla, m.
spadavecchia, an extensive unified thermo-electric module characterization method, sensors 16(12) (2016), pp. 1-20. doi: 10.3390/s16122114
[24] p. catala-lehnen, j. v. nüchtern, d. briem, t. klink, j. m. rueger, w. lehmann, comparison of 2d and 3d navigation techniques for percutaneous screw insertion into the scaphoid: results of an experimental cadaver study, computer aided surgery 16(6) (2011), pp. 280-287. doi: 10.3109/10929088.2011.621092
[25] m. li, c. hansen, g. rose, a simulator for advanced analysis of 5-dof em tracking systems in use for image-guided surgery, int j cars 12 (2017), pp. 2217-2229. doi: 10.1007/s11548-017-1662-x
[26] f. attivissimo, a. di nisio, a. m. l. lanzolla, m. a. ragolia, analysis of position estimation techniques in a surgical em tracking system, ieee sensors journal 21(13) (2021), pp. 14389-14396. doi: 10.1109/jsen.2020.3042647
[27] g. andria, f. attivissimo, a. di nisio, a. m. l. lanzolla, p. larizza, s. selicato, development and performance evaluation of an electromagnetic tracking system for surgery navigation, measurement 148 (2019), art. no. 106916. doi: 10.1016/j.measurement.2019.106916
[28] g. de angelis, a. de angelis, a. moschitta, p. carbone, comparison of measurement models for 3d magnetic localization and tracking, sensors 17(11) (2017), art. no. 2527.
doi: 10.3390/s17112527
[29] m. a. ragolia, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. scarpetta, reducing effect of magnetic field noise on sensor position estimation in surgical em tracking, 2021 ieee international symposium on medical measurements and applications (memea), 23-25 june 2021, pp. 1-6.
doi: 10.1109/memea52024.2021.9478723

metrology in the early days of social sciences

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-6
acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1

clara monteiro vieira1, elisabeth costa monteiro1
1 pontifical catholic university of rio de janeiro, marquês de são vicente, 225, gávea, rio de janeiro, brazil

section: research paper
keywords: metrology; social science and humanities; émile durkheim; max weber
citation: clara monteiro vieira, elisabeth costa monteiro, metrology in the early days of social sciences, acta imeko, vol. 12, no. 2, article 16, june 2023, identifier: imeko-acta-12 (2023)-02-16
section editor: eric benoit, université savoie mont blanc, france
received july 10, 2022; in final form march 31, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this study was supported in part by the coordenação de aperfeiçoamento de pessoal de nível superior – brasil (capes) – finance code 001.
corresponding author: elisabeth costa monteiro, e-mail: beth@puc-rio.br

1. introduction

providing metrological traceability of measurement results to the international system of units (si) is essential to ensure reliable and comparable quantity values in applications associated with all fields of knowledge. this aspect, however, has been a historical struggle since the early days, when efforts were directed to the elaboration of a metrological framework traditionally focused on promoting advances in the evolution of standards for measuring physical quantities.
after the signing of the 'convention du mètre' (1875), the 1st 'conférence générale des poids et mesures' (cgpm), which took place in 1889, established international prototypes for the physical quantities of length and mass, respectively the metre and the kilogram, also incorporating the second as the unit of time, according to the astronomers' definition [1]. the high complexity of chemical and biological measurements, which also involve quantities belonging to the field of natural sciences, only much more recently received adequate attention and contributions to meet their metrological infrastructure demands [2]-[5]. the metrological authorities' first initiatives toward meeting the demands of chemical measurements took place with the adoption, in 1971, of the unit mole (symbol mol), for the quantity amount of substance, at the 14th cgpm, and the creation of the 'comité consultatif pour la quantité de matière' (ccqm), in 1993 [1]. in turn, measurements of biological quantities, which are associated with even more challenging metrological demands, were addressed only at the 20th cgpm (1999) [2]-[4]. however, unlike what happened in the case of chemical quantities, the metrological demands associated with biomeasurements did not receive specific support through the creation of a dedicated consultative committee for the area. the responsibility for advancing the reliability of biomeasurements was absorbed by the ccqm, whose name was changed in 2014 to 'consultative committee for amount of substance: metrology in chemistry and biology' [3]. equally required, and even more challenging, is a global metrological framework to provide trustworthiness and comparability for measurements in the humanities and social sciences.

abstract
recent studies have been endeavoring to overcome challenges to ensure reliable measurement results in social sciences and humanities facing the complex characteristics of this scientific field. however, the literature indicates that the founding designers of sociology as an academic discipline expressed concerns regarding social measurements more than a century ago. based on a literature review, the present work investigates possible metrological aspects already addressed in the early days of social science, focusing on the methodological conceptions of two of sociology's early canons – notably max weber and émile durkheim. the present study reveals that the approaches contemporaneously developed by the two social sciences co-founders present diverse but fundamentally complementary configurations, allowing a wide range of social phenomena to be analyzable. although employing their own terminologies, both social scientists incorporated fundamental metrological concepts in their procedures' parameters, seeking to establish a single reference, using statistical analysis or determining measurement standards that resemble what is known today as reference material. the concern with applying metrological concepts since the early days of creating sociology as a science reinforces the need to invest extensive efforts to provide uniformity of measurements in this remarkably relevant field of application of measurement science.

nevertheless, this issue has not yet been addressed in cgpm resolutions. the sophistication of measurands associated with more complex areas involving chemical, biological, human and social measurements requires dealing with the development of certified reference materials, the creation of arbitrary units, and other alternative strategies to step forward to a metrological structure capable of harmonizing "nonphysical" measurements in all aspects of daily life demands.
particularly regarding the human and social sciences, the influence of the subjective perceptions of researchers and research participants on the research process [6] and difficulties in defining concepts [7]-[9] are some of the elements of the complexity of the study of social phenomena. such intricacies hinder, but do not prevent, initiatives to ensure the reliability and comparability of measurement results in the social sciences and humanities. recent studies have been endeavoring to meet the challenges associated with the complex characteristics of this scientific field [7]-[37]. among the current academic initiatives, it is worth mentioning the successful incorporation of measurements in social sciences among the investigations addressed by the international measurement confederation (imeko) [36]-[40], evidencing a massive effort by this scientific community to promote metrology in this field, including efforts to bring both physical and "nonphysical" measurement into a single, consistent concept system [34]-[37]. despite the apparent novelty of the actions that are currently emerging to incorporate the concepts of metrology in the social sciences, aiming at contributing to robust and comparable measurement results in this field of application, the literature indicates that the founding architects of this science as a formal discipline, notably max weber and émile durkheim, already expressed concerns about the adequacy of the approaches employed for measuring social phenomena [41], [42]. this paper explores the fundamental aspects of the measurement methods proposed more than a century ago by those two founding authors of sociology as a scientific field. moreover, the present article seeks to identify the possible connections between these preliminary sociological approaches and current metrological conceptions.

2.
analytical framework of social science founders

playing relevant roles in the foundation of sociology as a scholarly discipline, both émile durkheim and max weber devoted a portion of their work to the development of a methodology for the study of social phenomena and were especially interested in reliable strategies and comparable results for social measures. they presented, however, quite different approaches.

2.1. émile durkheim

émile durkheim (1858-1917, france) was the first to establish sociology as a formal academic discipline (university of bordeaux, 1895) [43]. influenced by the positivist current of thought, durkheim turned to the natural sciences – especially bioscience – when performing social science investigations [42], [44]. he thought of society as an organism, whose parts (or "organs") need to function well together to ensure the whole's healthy functioning [42], [44]. durkheim defined 'social facts' as his main object of study. 'social facts' would be ways of feeling, acting, and thinking identifiable by three main traits: generality, being applied to all members of a given society; exteriority from each individual, since they were not created by any particular person's consciousness, but learned by people, generation after generation, and lasting much longer than the human lifespan; and coercivity, by which individuals are constrained into specific actions, not necessarily in conformity with each person's intention [42]. with a focus on analyzing social facts and their role in society, durkheim addressed social phenomena from the macro-level. just as it is impossible to capture what is going on in someone's mind by looking at each cell of their nervous system, durkheim states that one would not be able to explain a social fact simply by looking at its manifestations at the individual level [42]. he emphasized, after all, that a whole is not just the sum of its parts, but a specific reality formed by their association [42].
therefore, in durkheim's approach, social facts ought to be explained through other social facts [42]. with a marked tendency toward an empirical approach, durkheim used statistical strategies extensively. by increasing the number of cases whenever possible, the variable-oriented model of the comparative analysis performed by durkheim aims to establish generalized connections between variables [45]. the pursuit of general patterns guided durkheim's statistical approach to dealing with the time dimension from a transhistorical perspective [45]. collective behaviors are, then, identified as an average effect of a variable by searching for statistical regularities of social facts [45]. estimating the average effects of independent variables would allow investigating the 'effects-of-causes'. therefore, with the emphasis on generalizations over details, durkheim establishes causality relationships, associating a phenomenon (a social fact) with its cause or its effects (another social fact) [42]. for instance, in his famous study "le suicide: étude de sociologie" [46], performed with three religious communities (protestants, catholics, and jews), durkheim demonstrated that a social fact, the suicide rate, presented a statistical correlation with a macro-level variable constituted by the degrees of social integration, as illustrated in figure 1. the statistical analysis allowed durkheim, for example, to associate suicide rates with aspects of the social context, whereas, contrary to what one might expect, there was no correlation with rates of psychopathology.

figure 1. diagram of the correlating connections between macro-level variables used to analyze causality associations with suicide rates in diverse contexts in durkheim's study.

2.2. max weber

max weber (1864-1920, germany) introduced, in 1919, a sociology department at the ludwig maximilians university of munich, germany [47].
in contrast to durkheim's objectivity, weber's approach prioritizes subjective interpretations of social events. these subjective aspects are considered to provide the underlying sense of, and thereby explain, individuals' objective behaviors. therefore, weber addressed social phenomena from the micro-level, considering the subjectivity and meanings attributed to social actions [41], [45], [48]-[50]. the approach included the development of the so-called ideal type. this theoretical construct consists of an abstract model with an internal logic serving as a measuring standard for evaluating complex cases [45], [48], [51]. the strategy allows for understanding particular historical processes and individual motivations, considering as many variables as possible, and analyzing the kind of relationship among them through the concept of elective affinities, which refers to their mutual contributions [52]. therefore, an in-depth understanding of a complex unity is reached by a case-oriented comparison concentrating on a small number of cases, with a large number of attributes interacting within long-lasting processes [45]. as a result, the roots of a specific event must be rebuilt when performing qualitative investigations involving historical comparisons by weber's case-oriented strategy.

3. metrology and the analytical framework of social sciences' founders

current proposals for making psychosocial and physical properties measurable, ensuring the quality of measurements associated with both physical and psychosocial properties, consider object-relatedness (objectivity) and subject-independence (intersubjectivity) as essential attributes to be satisfied [34], [36]. objectivity refers to the connection between the information obtained and the measured property.
this characteristic requires an appropriate theory of the property, making the definitional uncertainty insignificant, and demands a reduced influence from other phenomena, which renders the instrumental uncertainty negligible [35], [36]. for a uniform interpretation by different measurers, the measurement must be intersubjective, which depends on the metrological traceability of results to the same reference scale, if available. as described in [36], this quality dimension can be structured by developing item banks aiming at building reference scales associated with each of the properties, in combination with rasch model fitting [33]-[36]. the rasch model is an approach widely employed to measure latent traits in a variety of disciplines within the humanities, social sciences and health [21]-[33], [36], [53], [54]. preceded by the studies developed throughout the 19th century by the german karl marx (1818-1883), also known as one of the founding creators of the social sciences, émile durkheim and max weber were the first to establish this field of research as a formal discipline. the scientific contributions of these two contemporary researchers emerged at the end of the 19th century, after the memorable signing of the intergovernmental treaty of the metre convention, which took place in paris in 1875 and established the bureau international des poids et mesures (bipm), an international organization in which the member states coordinate the harmonization of, and advances in, measurement science and measurement standards. in the case of the french sociologist émile durkheim, this historical space may have paved the way for the interest in the quality of measurement evidenced in his work. weber's contributions to social science measurements, in turn, emerged after the creation in berlin, in 1887, of the first national metrology institute, the physikalisch-technische reichsanstalt (ptr), later renamed the physikalisch-technische bundesanstalt (ptb).
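the rasch model mentioned above can be made concrete: for a dichotomous item, the probability of endorsement depends only on the difference between the person's ability and the item's difficulty, both expressed on the same reference scale. a minimal sketch follows (the guttman-limit helper and its scale factor are assumptions added for illustration, not part of any cited formulation):

```python
import math

def rasch_p(theta, b):
    """dichotomous rasch model: probability of endorsing an item depends
    only on the difference between person ability theta and item
    difficulty b, both on the same reference scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def guttman_limit(theta, b, scale=50.0):
    """deterministic 'perfect scale' limit of the rasch curve: endorsement
    is (almost) certain for theta > b and (almost) impossible for
    theta < b. the scale factor is an arbitrary large number chosen
    for illustration."""
    return 1.0 / (1.0 + math.exp(-scale * (theta - b)))
```

because only the difference theta - b matters, persons and items calibrated with this model share one common scale, which is what makes results traceable to the same reference.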
as berlin was the historical space experienced by max weber, this metrological context may also have influenced the methodological approach this social science co-founder developed. the efforts invested in developing methodologies that sought to achieve comparable results from the measurements of social phenomena were a distinctive feature of durkheim's and weber's scientific production. their proposals, however, were characterized by quite different approaches. their methods were not aimed at the same objects of study, which can commonly lead to a false idea of divergence. instead, their methodological approaches were complementary, dealing with analyses carried out in two dimensions: macro-sociological by durkheim and micro-sociological by weber. driven by the positivist influence, durkheim built analogies between the natural sciences and the social sciences. it is worth mentioning that both scientific fields share metrological challenges that still linger to the present time. with highly complex measurements, the measurement requirements framework in such fields of study is either not yet adequately addressed or not addressed at all. interestingly, in his book from 1894, "les règles de la méthode sociologique" [42], durkheim already acknowledges such challenges, which sociology has in common with biology, but to a greater extent. as he states [42]: "tous ces problèmes qui, déjà en biologie, sont loin d'être clairement résolus, restent encore, pour le sociologue, enveloppés de mystère" (p. 39) (in english: all these problems, which are far from being clearly resolved even in biology, still remain shrouded in mystery for the sociologist). dealing with general analyses involving a large number of social cases but limited to few variables, durkheim employed a quantitative approach with statistical techniques, including correlation procedures to define the strength of the association between different social facts, and regression analysis to explore the impact of a change in one social variable relative to another, as well as to predict values of the random social variable based on the values of the fixed social variable.
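durkheim's correlation-and-regression procedure described above can be illustrated in a few lines. the numbers below are invented for illustration and are not durkheim's figures:

```python
import numpy as np

# hypothetical macro-level data: an index of social integration and a
# suicide rate per 100 000 inhabitants for five communities.
integration = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
suicide_rate = np.array([30.0, 24.0, 21.0, 15.0, 11.0])

# correlation: strength of the association between the two "social facts"
r = np.corrcoef(integration, suicide_rate)[0, 1]

# regression: average change in the rate per unit change of integration,
# usable to predict the random variable from the fixed one
slope, intercept = np.polyfit(integration, suicide_rate, 1)

print(f"r = {r:.2f}, slope = {slope:.2f} per unit of integration")
```

the fitted line is the "average mathematical relationship between variables" that served as durkheim's measurement reference: a strongly negative r and slope would correspond to his finding that higher social integration accompanies lower suicide rates.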
the measurement reference consisted of the average mathematical relationship between variables [42]. durkheim pursued stable objects as a necessary condition for objectivity. the more detached the "social facts" are from the "individual facts" through which they manifest themselves, the more objectively they can be represented as a constant, thus eliminating subjective interference, as he states [42]: "on peut poser en principe que les faits sociaux sont d'autant plus susceptibles d'être objectivement représentés qu'ils sont plus complètement dégagés des faits individuels qui les manifestent. en effet, une sensation est d'autant plus objective que l'objet auquel elle se rapporte a plus de fixité; car la condition de toute objectivité, c'est l'existence d'un point de repère, constant et identique, auquel la représentation peut être rapportée et qui permet d'éliminer tout ce qu'elle a de variable, partant de subjectif" [42] (in english: one may posit as a principle that social facts are all the more susceptible of being objectively represented the more completely they are detached from the individual facts through which they manifest themselves; indeed, a sensation is all the more objective the more fixity the object to which it relates possesses, for the condition of all objectivity is the existence of a constant and identical point of reference to which the representation can be related and which makes it possible to eliminate everything in it that is variable, hence subjective). durkheim's quest for objectivity can be considered analogous to a pursuit of minimizing the definitional and instrumental uncertainties of social measurements. as for max weber's methodology, the social properties under analysis were conceived in a micro-social dimension, concentrating on a few cases but encompassing a large number of variables, thus leading to a significant increase in the complexity of the measurand compared to durkheim's simplified general model. the reference in weber's approach is built by an abstract model, consisting of a synthetic "ideal construct" that encompasses multiple essential attributes. this strategy resembles the production of reference materials for chemical or biological measurements, areas for which the realization of si units is still unavailable.
in these fields, it is possible to provide metrological traceability by developing reference materials with sufficient homogeneity and stability regarding specified properties, established to be fit for their intended use in the measurement or examination of nominal properties [55]. weber's approach considers cases as a whole, constituted of variables that cannot be disassociated. like the procedure using certified reference materials as a "primary reference standard", weber's conception claims that the produced ideal types should be made available as a reference for further investigations of other cases, which would enable uniformity of interpretation through the intersubjectivity of measurement results. both durkheim's and weber's strategies are concerned with establishing a well-defined reference to provide the measurement standard necessary to enable the comparability of results, considering the specific characteristics associated with their main objects of study. these features denote the concern with ensuring the intersubjectivity of measurement results, which, in turn, will be provided only by establishing global metrological traceability of the measurement results to reference properties [34]-[37]. furthermore, these preliminary approaches, developed in the foundations of the social sciences, have been applied up to the present. durkheim's suicide assessment has recently been implemented and validated in gerontological practice [16], [17]. recent studies regarding metrology for the social sciences have also addressed ideas from durkheim's immediate predecessor, gabriel tarde (1843-1904) [18], [19]. considering quantification in the psychosocial sciences more demanding than in the natural sciences, tarde qualified this measurement challenge as a new level of intellectual achievement [18]. as for max weber, his concept of the "ideal type" came to serve as the basis for later-developed measurement models addressing psychosocial properties [20].
that was the case of the guttman scale, with its proposal of "perfect scale" requirements to yield invariant measurement. weber's concept of the "ideal type" is also linked to principles underlying the rasch measurement approach, a "probabilistic realization" of the guttman scale, as described in the literature [20]. according to duncan (1984), "social measurement should be brought within the scope of historical metrology" [56]. duncan's suggestion may become a reality as soon as the cgpm resolutions start addressing the demands for the development of a global metrological infrastructure aimed at ensuring the reliability and comparability of measurement results in the humanities and social sciences, consequently promoting the integration of this complex scientific field into the international system of units [3].

4. conclusion

despite never having been formally addressed by international metrological organizations, the social sciences were established as a discipline shortly after the creation of the intergovernmental metrological structure in 1875 with the signature of the metre convention. the present study explored the concepts potentially associated with the reliable framework provided by metrology among the preliminary measuring strategies developed by two founding designers of the social sciences. sharing with chemical and biological properties the challenging aspects of measurement complexity and the unavailability of si-unit traceability of measurement results, the founding authors of the social sciences embodied, since its early years, ideas close to metrological concepts to ensure comparability as much as possible. the two major methods developed when the social sciences discipline was born conceived different levels of measurement dimensions but equally looked to define a robust measurement standard to be employed for comparative analysis.
emile durkheim's objective and quantitative approach was directed to generalizations, using statistics to study numerous cases, focusing on a few variables, and defining the reference by a mathematical-statistical average type, as well as pathological cases according to their corresponding deviations. such a strategy points toward the measurement quality attribute of objectivity, minimizing definitional and instrumental measurement uncertainty components. in turn, max weber's subjective and qualitative method examined the social phenomenon from a micro-dimension perspective, dealing with few cases and multiple variables, by means of which it addresses the highly complex character of social events. 'ideal type' constructs, defined as standard references, would then embody the essential social variables for the appropriate description of a social phenomenon. in this sense, weber's ideal type can be interpreted as a reference material designed to allow the comparability of the results obtained by several researchers evaluating a specific construct, which indicates a tendency towards the intersubjectivity attribute of measurement quality. the efforts implemented by the founding architects of the social sciences, from the earliest moments when the field was still being established as a science, reinforce the present calls for scientific advances to meet the multiple demands for metrological traceability stemming from all areas of knowledge. complying with these requests constitutes an essential endeavor for establishing the worldwide uniformity of measurements. acknowledgement the authors acknowledge the support provided by the brazilian agency capes (coordenação de aperfeiçoamento de pessoal de nível superior) – brazil – finance code 001. references [1] bipm, the international system of units, 9th ed., sèvres: international bureau of weights and measures (2019).
online [accessed 24 april 2023] https://www.bipm.org/en/publications/si-brochure [2] e. costa monteiro, l. f. leon, metrological reliability of medical devices, j. phys.: conf. ser. 588 (2015), art. no. 012032. doi: 10.1088/1742-6596/588/1/012032 [3] e. costa monteiro, bridging the boundaries between sciences to overcome measurement challenges, measurement: interdisciplinary research and perspectives 15(1) (2017), pp. 34-36. doi: 10.1080/15366367.2017.1358974 [4] e. costa monteiro, magnetic quantities: healthcare sector measuring demands and international infrastructure for providing metrological traceability, tmq – techniques, methodologies and quality 1 (2019), pp. 42-50. online [accessed 24 april 2023] https://publicacoes.riqual.org/wp-content/uploads/2023/01/edesp1_19_42_50.pdf [5] a. bristow, assignment of quantities to biological medicines: an old problem re-discovered, philos. trans. royal society a: mathematical, physical and engineering sciences 369 (2011), pp. 4004-4013. doi: 10.1098/rsta.2011.0175 [6] r. damatta, relativizando: uma introdução a antropologia social. rocco, rio de janeiro, 2010, isbn 8532501540. [in portuguese] [7] t. l. kelley, interpretation of educational measurements. macmillan, new york, 1927. [8] t. salzberger, s. cano, l. abetz-webb, e. afolalu, c. chrea, r. weitkunat, k. fagerström, addressing traceability in social measurement: establishing a common metric for dependence, journal of physics: conf. ser. 1379(1) (2019), art. no. 012024. doi: 10.1088/1742-6596/1379/1/012024 [9] p. h. pollock iii, b. c. edwards, the definition and measurement of concepts, in: the essentials of political analysis. cq press, 2019, isbn 9781506379593, pp. 1-33. [10] l.
mari, e. ugazio, preliminary analysis of validation of measurement in soft systems, j. physics: conf. ser. 238(1) (2010), art. no. 012026. doi: 10.1088/1742-6596/238/1/012026 [11] w. p. fisher jr, a. j. stenner, metrology for the social, behavioral, and economic sciences, national science foundation social, behavioral, and economic sciences white paper (2011). online [accessed 24 april 2023] http://www.truevaluemetrics.org/dbpdfs/metrics/william-p-fisher/fisherjr_william_metrology-for-the-social-behavioral-and-economic-sciences.pdf [12] l. mari, p. carbone, d. petri, fundamentals of hard and soft measurement, in: modern measurements: fundamentals and applications. a. ferrero, d. petri, p. carbone, m. catelani (editors). wiley-ieee press, 2015, isbn 978-1-118-17131-8, pp. 203-262. [13] m. djuric, j. filipovic, s. komazec, reshaping the future of social metrology: utilizing quality indicators to develop complexity-based scientific human and social capital measurement model, social indicators research 148(2) (2020), pp. 535-567. doi: 10.1007/s11205-019-02217-6 [14] t. salzberger, s. cano, l. abetz-webb, e. afolalu, c. chrea, r. weitkunat, j. rose, addressing traceability of self-reported dependence measurement through the use of crosswalks, measurement 181 (2021), art. no. 109593. doi: 10.1016/j.measurement.2021.109593 [15] m. delmastro, on the measurement of social phenomena: a methodological approach. springer international publishing, 2021, isbn 978-3030775353. [16] s. m. marson, r. m. powell, suicide among elders: a durkheimian proposal, international journal of aging and later life 6(1) (2011), pp. 59-79. doi: 10.3384/ijal.1652-8670.116159 [17] s. m. marson, m. hong, j. bullard, the measurement of suicide assessment and the development of a treatment strategy for elders: a durkheimian approach, journal of sociology and social work 5(1) (2017), pp. 99-114. doi: 10.15640/jssw.v5n1a10 [18] w. p.
fisher jr, almost the tarde model?, rasch measurement transactions 28(1) (2014), pp. 1459-1461. online [accessed 24 april 2023] https://www.rasch.org/rmt/rmt281.pdf [19] w. p. fisher jr, the central theoretical problem of the social sciences, rasch measurement transactions 28(2) (2014), pp. 1464-1466. online [accessed 24 april 2023] http://www.rasch.org/rmt/rmt282.pdf [20] g. engelhard jr, invariant measurement: using rasch models in the social, behavioral, and health sciences. routledge, 2013, isbn 978-0415871259. [21] n. kærgård, georg rasch and modern econometrics, presented at the seventh scandinavian history of economic thought meeting, molde university college, molde, norway, 2003. [22] w. p. fisher jr, invariance and traceability for measures of human, social, and natural capital: theory and application, measurement 42(9) (2009), pp. 1278-1287. doi: 10.1016/j.measurement.2009.03.014 [23] h. zhong, j. xu, a. piquero, internal migration, social exclusion, and victimization: an analysis of chinese rural-to-urban migrants, j. res. crime & delinquency 54(4) (2017), pp. 479-514. doi: 10.1177/0022427816676861 [24] j. melin, s. j. cano, a. flöel, l. göschel, l. r. pendrill, construct specification equations: 'recipes' for certified reference materials in cognitive measurement, measurement: sensors 18 (2021), art. no. 100290. doi: 10.1016/j.measen.2021.100290 [25] l. pendrill, n. petersson, metrology of human-based and other qualitative measurements, measurement science and technology 27(9) (2016), art. no. 094003. doi: 10.1088/0957-0233/27/9/094003 [26] t. g. bond, c. fox, applying the rasch model: fundamental measurement in the human sciences. psychology press, 2013, isbn 9780429030499. [27] n. s. da rocha, e. chachamovich, m. p. de almeida fleck, a. tennant, an introduction to rasch analysis for psychiatric practice and research, journal of psychiatric research 47(2) (2013), pp. 141-148. doi: 10.1016/j.jpsychires.2012.09.014 [28] j.
uher, measurement in metrology, psychology and social sciences: data generation traceability and numerical traceability as basic methodological principles applicable across sciences, quality & quantity 54(3) (2020), pp. 975-1004. doi: 10.1007/s11135-020-00970-2 [29] b. d. wright, a history of social science measurement, educational measurement: issues and practice 16(4) (1997), pp. 33-45. doi: 10.1111/j.1745-3992.1997.tb00606.x [30] w. p. fisher jr, a. j. stenner, theory-based metrological traceability in education: a reading measurement network, measurement 92 (2016), pp. 489-496. doi: 10.1016/j.measurement.2016.06.036 [31] j. a. baird, d. andrich, t. n. hopfenbeck, g. stobart, metrology of education, assessment in education: principles, policy & practice 24(3) (2017), pp. 463-470. doi: 10.1080/0969594x.2017.1337628 [32] g. rasch, probabilistic models for some intelligence and attainment tests. university of chicago press, chicago, 1980, isbn 978-0226705538. [33] g. rasch, on general laws and meaning of measurement in psychology, proc. of the fourth berkeley symposium on mathematical statistics and probability, berkeley, united states, 1961, pp. 321-334. [34] l. pendrill, quality assured measurement: unification across social and physical sciences. springer, 2020, isbn 9783030286972. [35] a. maul, l. mari, m. wilson, intersubjectivity of measurement across the sciences, measurement 131 (2019), pp. 764-770. doi: 10.1016/j.measurement.2018.08.068 [36] l. mari, m. wilson, a. maul, measurement across the sciences. springer ser. meas. science and technology, 2021, isbn 9783030655587. [37] l. mari, is our understanding of measurement evolving?, acta imeko 10(4) (2021), pp. 209-213.
doi: 10.21014/acta_imeko.v10i4.1169 [38] m. wilson, w. fisher, preface, journal of physics: conference series 772 (2016), art. no. 011001. doi: 10.1088/1742-6596/772/1/011001 [39] e. costa monteiro, measurement science challenges in natural and social sciences, iop conf. series: journal of physics: conf. series 1044 (2018), art. no. 011001. doi: 10.1088/1742-6596/1044/1/011001 [40] m. wilson, w.
fisher, preface of the special issue, psychometric metrology, measurement 145 (2019), p. 190. doi: 10.1016/j.measurement.2019.05.077 [41] m. weber, methodology of social sciences (1903-1917), routledge, 2017, isbn 978-1138528048. [42] e. durkheim, les règles de la méthode sociologique (1894). ultraletters, 2013, isbn 978-2930718408. [in french] [43] h. alpert, emile durkheim and his sociology. columbia university press, 1939, isbn 9780231909983. [44] e. durkheim, de la division du travail social (1893), presses universitaires de france, 2007, isbn 978-2130563297. [in french] [45] d. della porta, m. keating, approaches and methodologies in the social sciences: a pluralist perspective. cambridge university press, 2008, isbn 978-0521709668. [46] e. durkheim, le suicide: étude de sociologie (1897), hachette livre bnf, 2013, isbn 978-2012895508. [in french] [47] a. anter, s. breuer, max webers staatssoziologie: positionen und perspektiven. nomos verlagsgesellschaft, 2007, isbn 9783832927738. [in german] [48] m. llanque, max weber, wirtschaft und gesellschaft. grundriss der verstehenden soziologie, tübingen 1922, in: schlüsselwerke der politikwissenschaft. s. kailitz (editor). vs verlag für sozialwissenschaften, 2007, isbn 978-3-531-90400-9, pp. 489-493. [in german] [49] r. holton, max weber and the interpretative tradition, in: handbook of historical sociology. g. delanty, e. f. isin (editors). sage, london, 2003, isbn 978-0761971733, pp. 27-38. [50] m. weber, the protestant ethic and the spirit of capitalism (1905), merchant books, 2013, isbn 9781603866040. [51] l. a. coser, masters of sociological thought: ideas in historical and social context, 2nd ed. harcourt brace jovanovich, new york, 1977, isbn 9780155551305. [52] e. klüger, análise de correspondências múltiplas: fundamentos, elaboração e interpretação. bib rev bras inf bibl em ciências sociais (86) (2018), pp. 68-97.
online [accessed 24 april 2023] [in portuguese] https://bibanpocs.emnuvens.com.br/revista/article/view/452 [53] s. f. suglia, l. ryan, r. wright, creation of a community violence exposure scale: accounting for what, who, where, and how often, journal of traumatic stress 21(5) (2008), pp. 479-486. doi: 10.1002/jts.20362 [54] s. l. belvedere, n. a. de morton, application of rasch analysis in health care is increasing and is applied for variable reasons in mobility instruments, journal of clinical epidemiology 63(12) (2010), pp. 1287-1297. doi: 10.1016/j.jclinepi.2010.02.012 [55] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms, 3rd ed., paris: joint committee for guides in metrology, 2012. online [accessed 24 april 2023] https://www.bipm.org/en/committees/jc/jcgm/publications [56] o. d. duncan, notes on social measurement: historical and critical, russell sage foundation, new york, 1984, pp. 38-39.

beamforming in cognitive radio networks using partial update adaptive learning algorithm acta imeko issn: 2221-870x march 2022, volume 11, number 1, pp. 1-8 md zia ur rahman1, p. v. s. aswitha1, d. sriprathyusha1, s. k. sameera farheen1 1 dept. of electronics and communication engineering, koneru lakshmaiah education foundation, vaddeswaram, guntur-522502, andhra pradesh, india section: research paper keywords: adaptive learning, bandwidth, cognitive radio, frequency, power transmission citation: md zia ur rahman, p. v. s. aswitha, d.
sriprathyusha, s. k. sameera farheen, beamforming in cognitive radio networks using partial update adaptive learning algorithm, acta imeko, vol. 11, no. 1, article 30, march 2022, identifier: imeko-acta-11 (2022)-01-30 section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india received december 4, 2021; in final form february 18, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: s. k. sameera farheen, e-mail: sksameera667@gmail.com 1. introduction wireless communication techniques are now widely used in medical telemetry, generally to monitor a patient's condition via pulse, respiration, etc. the wireless medical telemetry service was first established by the federal communications commission, which allocated some frequency bands exclusively for this wireless purpose so that no conflicts arise in frequency-band allocation while monitoring a patient's condition [1]. alternatively, the existing spectrum can be used for a medical telemetry application by means of cognitive radio. cognitive radio relies on spectrum sensing, and for this sensing we use the beamforming technique. efficient utilization of the spectrum is the most important concern in cognitive radio: it offers spectrum allocation for unlicensed users simultaneously with licensed users, so the secondary users need to detect spectrum availability through spectrum sensing, by sensing the primary user on the same frequency [2], [3]. the main purpose of introducing the beamforming technique is to remove unwanted noise in the systems used in this process, such as military sonar and radar systems.
beamforming separates sources with overlapping frequency content that originate at different spatial locations [4]-[6]. the technique is mainly used for sensing: in this phase, the licensed-user transmitter is estimated by the secondary-user transmitter, and spectrum allocation is carried out according to that estimate [5]-[8]. the technique also allows the exchange of information within a cell and with the neighbouring cells in a secure manner. interference between signals arises mainly from imperfect spectrum sensing, because of which secondary users freely access the primary users' channels [9], [10]. at the same time, the channel should be returned to the primary user before secondary-user access expires. beamforming is therefore very effective for sensing the spectrum without interference. a cognitive radio network is one of the important systems in broadband communication. the beamforming technique [11], [12] proposed in this paper is used to connect the information inside the cells and in the outer cells that are in use.

abstract: cognitive radio technology is a promising way to improve bandwidth efficiency. frequencies that are not otherwise used are exploited by some of the most powerful resources in cognitive radio. one of the main advantages of a cognitive radio system is that it can detect the different channels in the spectrum and can adapt the frequencies that are used frequently. it allows unlicensed users to access the licensed bandwidth under the condition of protecting the licensed users from harmful interference, i.e., from secondary users. in this paper, we implement cognitive radio using the beamforming technique, with power allocation as the strategy for the unlicensed transmitter, based purely on the result of sensing.
the power allocation depends on the state of the primary user in the cognitive radio network, where the unlicensed transmitter has a single antenna and modifies its transmission power. for the cognitive radio setup, we have used normalized adaptive learning algorithms. this application would be very useful in medical telemetry: nowadays wireless communication plays a vital role in healthcare applications, which would otherwise require separate infrastructure, and the proposed approach reduces the effort of building separate infrastructure for medical telemetry applications.

it has been shown analytically that beamforming is a smooth technique for spectrum sensing: in [13], statistical tests comparing the maximum-to-minimum eigenvalue and maximum-to-minimum beam energy algorithms led to the conclusion that beamforming is a smooth sensing technique. the sensors at the secondary users are used to estimate their respective primary-user (pu) directions of arrival (doas). the secondary users access the primary users' channels freely, and, simultaneously, the licensed users intend to reclaim the channel before the unlicensed users' access expires. finally, a fusion centre combines the doas into a primary-user localization estimate. the result of our implementation shows that it is possible to build a localization system with low complexity and good primary-user localization capability in this cognitive network; the pu localization capability is used to manage pu interference [14]. generally, wireless communication services place receivers at every junction point to handle upcoming technologies that profit from the abstraction property, gained from the latest antenna arrays and from adaptive beamforming algorithms [15].
the process we use is based on the least mean square (lms) algorithm, applied to the signal model used for the beamforming technique. to obtain a convergence rate higher than that of the plain lms used in the smart antenna system, we use the bbnlms (block-based normalized least mean square) algorithm, which markedly improves the convergence rate. this algorithm is evaluated in the presence of different effects and many users, examined through matlab simulations with different white signals. among adaptive beamforming algorithms, the lms formula is popular in smart antenna systems [16]; the instantaneous square of the error vector is used in place of its expectation [17]. many methods have been proposed for estimating the doa (direction of arrival) of signals. the adaptive array was first introduced by van atta in 1959 as a self-phased array [18]: it reflects all incident signals back in the arrival direction using a clever phasing scheme. beamforming algorithms are divided into fixed-weight beamforming and adaptive beamforming algorithms. an adaptive algorithm updates the array weights continuously, based on optimization in a changing signal environment; the literature discusses lms, recursive least squares, conjugate gradient, and quasi-newton algorithms [19]. data transmission in a communication channel with a low probability of bit error is possible up to a certain bit rate for a given snr (signal-to-noise ratio). therefore, in addition to the adaptive algorithm, to increase the noise immunity of the wireless system, let us consider the use of channel coding schemes.
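the lms weight update described above can be sketched numerically. the following is a minimal illustrative sketch, not the authors' implementation: it assumes a hypothetical 4-element half-wavelength uniform linear array, a unit-power reference tone arriving from 60 degrees (the angle used later in the simulations), and a strong interferer from an assumed -20 degrees; all names and parameter values are our own choices.

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """complex LMS beamformer: y = w^H x, e = d - y, w <- w + mu * conj(e) * x."""
    n_snap, n_el = X.shape
    w = np.zeros(n_el, dtype=complex)
    mse = np.empty(n_snap)
    for n in range(n_snap):
        x = X[n]
        e = d[n] - np.vdot(w, x)         # a-priori error against the reference
        w = w + mu * np.conj(e) * x      # LMS weight update
        mse[n] = abs(e) ** 2
    return w, mse

rng = np.random.default_rng(0)
n_el, n_snap = 4, 3000
steer = lambda th: np.exp(1j * np.pi * np.arange(n_el) * np.sin(th))  # ULA, d = lambda/2
s = np.exp(1j * 2 * np.pi * 0.05 * np.arange(n_snap))                 # desired tone (60 deg)
i_sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)  # interferer (-20 deg)
noise = 0.05 * (rng.standard_normal((n_snap, n_el))
                + 1j * rng.standard_normal((n_snap, n_el)))
X = (np.outer(s, steer(np.deg2rad(60)))
     + np.outer(i_sig, steer(np.deg2rad(-20))) + noise)
w, mse = lms_beamformer(X, s, mu=0.01)
gain_sig = abs(np.vdot(w, steer(np.deg2rad(60))))   # beam gain toward the desired signal
gain_int = abs(np.vdot(w, steer(np.deg2rad(-20))))  # beam gain toward the interferer
```

with a small step size the weights approach the wiener solution, passing the 60-degree signal with near-unit gain while placing a null toward the interferer.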
such coding is a series of transformations of the original bit sequence, after which the transmitted information flow becomes more resistant to the degradation caused by noise, interference, and signal fading. channel coding allows a compromise between bandwidth and the probability of bit error [20]. this paper considers the direction of arrival of signals under different error rates, and the simulation results provided at the end support the proposed technique. 2. methodology the networks used in this paper, the cognitive radio network (crn) and non-orthogonal multiple access (noma), are widely used in 5g broadband communication systems. one advantage for users of a cognitive radio network is the protection of information exchanged among devices such as multiple-input multiple-output (mimo)-noma systems. in this paper, we implement a substitute for the already available beamforming technique to protect the data exchange inside the cells and in the outer cells among different users; the system model is shown in figure 1. interference is caused by faulty signals of the unlicensed users, who intend to exploit the availability left by the licensed users; the unlicensed user must, however, return the channel before licensed-user access resumes. adaptive antenna systems combine multiple antenna elements with signal-processing capability to improve their radiation or reception pattern automatically in response to the signal environment. the method proposed in this paper, based on a partial update least mean square (pu-lms) algorithm, is used to control the computational overload and reduce power consumption in the implementation of an adaptive filter.
thus, the adaptive filtering algorithm improves the filter coefficients, as shown in figure 2, and our proposed technique provides a better result. the lms algorithm is popular in adaptive beamformers employing antenna arrays. it is also used for channel equalization to combat inter-symbol interference. further applications of lms include interference and echo cancellation, space-time modulation and coding, and related signal-processing tasks. existing algorithms with a faster convergence rate, such as recursive least squares, and the least mean square itself, are popular because of their easy implementation and low computational cost. partial updating is an effective method for reducing power consumption and computational load in adaptive filter implementations, which is appealing in mobile communications. figure 1. cognitive system model. figure 2. overview of project implementation. the block diagram and overview of the proposed algorithm are shown in figure 3 and figure 4, respectively. many mobile communication applications, such as channel equalization and echo cancellation, require the adaptive filter to have a large number of coefficients. updating the entire coefficient vector is costly in terms of ram, capacity, and computation, and sometimes does not fit mobile units. finally, in this paper we present an analysis of the convergence of the partial update adaptive learning (pu-al) algorithm under different suppositions; when divergence appears, it can be prevented by updating different coefficient subsets in turn, which is known as sequential partial update adaptive learning (spu-al). 2.1. cyclostationary input signals a cyclostationary process is one whose statistical properties vary cyclically with time.
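as an illustration of such an input, the following minimal sketch (our own illustrative construction, not taken from the paper) generates white noise whose variance cycles through a set of levels with period T, in the spirit of the two-level periodic variance model of eq. (3), using one large and one tiny value as the text suggests:

```python
import numpy as np

def cyclostationary_noise(n_samples, levels, rng):
    """white noise whose variance cycles through `levels` with period T = len(levels)."""
    T = len(levels)
    var = np.array([levels[n % T] for n in range(n_samples)])
    return np.sqrt(var) * rng.standard_normal(n_samples), var

rng = np.random.default_rng(1)
levels = [1.0, 0.001, 0.001, 1.0]     # one large and one tiny value, period T = 4
y, var = cyclostationary_noise(40000, levels, rng)
# each phase of the cycle is itself stationary: interleaved stationary processes
phase_var = [y[k::4].var() for k in range(4)]
```

sampling every T-th value yields a stationary subsequence, which is exactly the "interleaved stationary processes" view of cyclostationarity.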
it can be viewed as several interleaved stationary processes. detection methods based on estimating the spectral autocorrelation function can be used to analyse the spectrum: whether the band is active, and whether the signal is being used by licensed users, can be sensed easily, because the technique is robust in this sense. the sequential pu-al algorithm shows how the coefficients are updated in simulation, together with the associated samples of the signals used in each update, $y(n)$ being the value of the regression signal at the present instant; its flow chart is shown in figure 4. an upper bound on the step size of the pu-al algorithm can be simulated when the input signal is a periodic reference consisting of many harmonics. a process $h(l)$ is cyclostationary with period $T$ when

$E\{h(l+T)\} = E\{h(l)\}, \quad E\{h(l+T)\,h(l+T+n)\} = E\{h(l)\,h(l+n)\}$. (1)

with the regressor $y(l) = [y(l), y(l-1), y(l-2), \ldots, y(l-N+1)]^{T}$, the input covariance is

$B_y(l) = E\{y(l)\,y^{T}(l)\} = \mathrm{diag}[\sigma_y^2(l), \ldots, \sigma_y^2(l-N+1)]$. (2)

a two-level periodic variance is

$\sigma_y^2(l) = M_1$ for $iT < l \le iT + \alpha T$, and $\sigma_y^2(l) = M_2$ for $iT + \alpha T < l \le (i+1)T$, (3)

for $0 < \alpha < 1$ and $i = \ldots, -2, -1, 0, 1, 2, \ldots$, and a sinusoidal power variation in time is

$\sigma_y^2(l) = \beta\,(1 + \sin(\omega_o l))$. (4)

here $\beta$ is larger than zero, and $\omega_o$ falls between 0 and $\pi$, with $\omega_o/(2\pi)$ a rational number. when the power varies in time as above, $y(l)$ is not a stationary input signal. 2.2. sequential partial update adaptive learning algorithm the desired signal is denoted by $d(l)$, the optimal weight vector by $\omega_o$, and the measurement noise by $v(l)$:

$d(l) = y^{T}(l)\,\omega_o + v(l)$ (5)

$e(l) = d(l) - y^{T}(l)\,u(l)$ (6)

$u(l+1) = u(l) + \mu\,e(l)\,I_K(l)\,y(l)$. (7)

here $u(l)$ is the weight vector (adjustable filter coefficient vector), $e(l)$ is the error of the method (whatever algorithm is used), and the coefficient-selection matrix is

$I_K(l) = \mathrm{diag}[i_1(l), i_2(l), \ldots, i_N(l)]$ (8)

with $A = N/K$.
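a minimal numerical sketch of the update (5)-(8) is given below, assuming a simple system-identification setting; the 8-tap plant, step size, and subset size are illustrative choices, not values from the paper. only K of the N coefficients are updated per iteration, with the selection matrix cycling through A = N/K disjoint index subsets:

```python
import numpy as np

def sequential_pu_lms(x_in, d, N, K, mu):
    """sequential partial-update LMS: per iteration only K of the N coefficients
    are updated, cycling through the A = N/K disjoint subsets E_1 ... E_A."""
    A = N // K
    subsets = [np.arange(i * K, (i + 1) * K) for i in range(A)]  # partition of {0..N-1}
    w = np.zeros(N)
    for n in range(N - 1, len(x_in)):
        x = x_in[n - N + 1:n + 1][::-1]     # regressor [x(n), x(n-1), ..., x(n-N+1)]
        e = d[n] - x @ w                    # a-priori error, eq. (6)
        idx = subsets[n % A]                # selection matrix I_K(n), eq. (8)
        w[idx] += mu * e * x[idx]           # partial update, eq. (7)
    return w

rng = np.random.default_rng(2)
w_true = rng.standard_normal(8)             # unknown 8-tap plant (illustrative)
x_in = rng.standard_normal(20000)           # stationary white input
d = np.convolve(x_in, w_true)[:len(x_in)]   # noiseless desired signal
w_hat = sequential_pu_lms(x_in, d, N=8, K=2, mu=0.05)
```

with a stationary white input the weights still converge to the plant; the partial update only slows convergence by roughly the factor A, at a quarter of the per-iteration update cost here.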
here $A$ is taken as an integer, and $\%(l, A)$ denotes the modulo operation returning the remainder of the division of $l$ by $A$. moreover, $\sum_{j=1}^{N} i_j(l) = K$ with $i_j \in \{0, 1\}$. the coefficient subsets $E_i$ are distinct and satisfy the following requirements: 1. $\bigcup_{i=1}^{A} E_i = Z$, where $Z = \{1, 2, \ldots, N\}$; 2. $E_i \cap E_j = \emptyset$ for all $i, j \in \{1, 2, 3, \ldots, A\}$ with $i \ne j$. 2.3. performance analysis for the input signal, the weight-error update is

$\tilde{u}(l+1) = \tilde{u}(l) - \mu\,e(l)\,I_K(l)\,y(l)$, (9)

where

$\tilde{u}(l) = \omega_o - u(l)$. (10)

using (5) and (6),

$e(l) = y^{T}(l)\,\tilde{u}(l) + v(l)$ (11)

is obtained. putting (11) into (9), then

$\tilde{u}(l+1) = \left(I - \mu\,I_K(l)\,y(l)\,y^{T}(l)\right)\tilde{u}(l) - \mu\,v(l)\,I_K(l)\,y(l)$. (12)

taking the expectation on both sides of (12), we get

$E\{\tilde{u}(l+1)\} = \left(I - \mu\,E\{I_K(l)\,y(l)\,y^{T}(l)\}\right)E\{\tilde{u}(l)\}$. (13)

the time-varying variance model is

$\sigma_y^2(l) = M_1$ if $\mathrm{mod}(l, T) = 1$, ..., $M_{T-1}$ if $\mathrm{mod}(l, T) = T-1$, and $M_T$ if $\mathrm{mod}(l, T) = 0$. (14)

we take the set $M_1, M_2, \ldots, M_T$ to contain one large value (like 1) and one tiny value (like 0.001), so that both (3) and (4) are consistent with (14). between the sequential partial-update parameter $A$ and the input signal period $T$, the following six cases cover all potential scenarios. 2.3.1. case study 1 $T \le A$ and $\%(A, T) = 0$. in this case (13) can be reworked as

$E\{\tilde{u}_j(l+1)\} = \left(1 - \mu\,i_j(l)\,\sigma_y^2(l-j+1)\right)E\{\tilde{u}_j(l)\}$. (15)

figure 3. overview of partial update algorithm.

here $\tilde{u}_j(l)$ is the $j$-th entry of $\tilde{u}(l)$. by (7), the $j$-th entry in (15) is updated once every $A$ iterations; combining the $A$ iterations of (15),

$E\{\tilde{u}_j(l+A)\} = \left(1 - \mu\,\sigma_y^2(l_j - j + 1)\right)E\{\tilde{u}_j(l)\}$, (16)

where $l_j$ is the integer satisfying $l \le l_j < l + A$ at which the $j$-th entry is updated. let us declare the parameter $d_j = l_j - \lfloor l_j / A \rfloor A$ to indicate when the $j$-th entry of $E\{\tilde{u}(l)\}$ is upgraded; the function $\lfloor y \rfloor$ converts $y$ to the largest integer $\le y$. for a given sequential pu parameter $A$, $d_j$ depends only on the value of $j$.
since $\%(A, T) = 0$, we have $\sigma_y^2(l_j - j + 1) = M_{t_j}$, where $t_j = c(\tilde{t}_j)$, $\tilde{t}_j = d_j + N - j + 1 - \lfloor (d_j + N - j + 1)/T \rfloor T$, and $c(y) = y - T\,|\mathrm{sign}(y)| + T$. putting $\sigma_y^2(l_j - j + 1) = M_{t_j}$ into (16), we get

$E\{\tilde{u}_j(l+A)\} = \left(1 - \mu\,M_{t_j}\right)E\{\tilde{u}_j(l)\}$, (17)

where $t_j$ is a fixed integer since $\%(A, T) = 0$. iterating (17), we get

$E\{\tilde{u}_j(l + (b+1)A)\} = \left(1 - \mu\,M_{t_j}\right)E\{\tilde{u}_j(l + bA)\}$, (18)

where $b$ is a positive integer. according to (18), $E\{\tilde{u}_j(l + (b+1)A)\}$ depends on a single $M_{t_j}$. if the $M_{t_j}$ of the cyclostationary input signal is very small, as explained in [11], the $j$-th entry is always updated at low-power samples, which makes $\tilde{u}_j(l)$ converge very slowly. hence, the sequential pu-al will certainly meet these tough conditions in this case. 2.3.2. case study 2 $T \le A$, $\%(A, T) \ne 0$, and the greatest common divisor $\gcd(A, T)$ is equal to 1, where $\gcd(A, T)$ represents the gcd of $A$ and $T$. now $\sigma_y^2(l_j - j + 1) = M_{T_j(l)}$, where $T_j(l) = c(\tilde{t}_j(l))$ and $\tilde{t}_j(l) = l_j - \lfloor l_j / T \rfloor T$; thus $T_j(l)$ depends on the values of both $j$ and $l$. hence, (17) becomes

$E\{\tilde{u}_j(l+A)\} = \left(1 - \mu\,M_{T_j(l)}\right)E\{\tilde{u}_j(l)\}$. (19)

iterating (19), we get

$E\{\tilde{u}_j(l+2A)\} = \left(1 - \mu\,M_{f(T_j(l))}\right)E\{\tilde{u}_j(l+A)\}$, (20)

where

$f(T_j(l)) = T_j(l) + \%(A, T)$ if $T_j(l) + \%(A, T) \le T$, and $f(T_j(l)) = T_j(l) + \%(A, T) - T$ otherwise. (21)

since $T \le A$, $\%(A, T) \ne 0$ and $\gcd(A, T) = 1$, iterating (20) for $T$ times, we get

$E\{\tilde{u}_j(l+TA)\} = \left(1 - \mu\,M_{f \cdots f(T_j(l))}\right)E\{\tilde{u}_j(l + (T-1)A)\} = (1 - \mu M_1)\cdots(1 - \mu M_T)\,E\{\tilde{u}_j(l)\}$, (22)

where $f \cdots f(T_j(l))$ represents the composition of $f(\cdot)$ applied $T$ times. in (22) we can observe that the update of the input signal involves all $T$ variances $\{M_1, M_2, \ldots, M_T\}$; as a consequence, the sequential pu-al strategy does not face the tough conditions in this scenario. the pu-lms is stable if the step size meets the requirement
0 < μ ≤ 2/max(M_1, M_2, …, M_T) .  (23)

2.3.3. Case study 3: T ≤ A, %(A, T) ≠ 0 and gcd(A, T) > 1

The least common multiple of A and T is denoted lcm(A, T); clearly lcm(A, T) < A T. Here the variance becomes σ_y²(l_j − j + 1) = M_{ξ_j(l)}, where ξ_j(l) = c(ξ̃_j(l)) and ξ̃_j(l) = l_j − ⌊l_j/T⌋ T. Then (17) becomes

Figure 4. Flow chart of a partial update adaptive learning algorithm.

E{ũ_j(l + A)} = (1 − μ M_{ξ_j(l)}) E{ũ_j(l)} .  (24)

Iterating equation (24), we get

E{ũ_j(l + 2A)} = (1 − μ M_{f(ξ_j(l))}) E{ũ_j(l + A)} .  (25)

Since T ≤ A, %(A, T) ≠ 0 and gcd(A, T) > 1, iterating (25) lcm(A, T)/A times gives

E{ũ_j(l + lcm(A, T))} = (1 − μ M_{f…f(ξ_j(l))}) × ⋯ × (1 − μ M_{ξ_j(l)}) E{ũ_j(l)} .  (26)

Because lcm(A, T)/A is less than T, one or more of the variances in the set {M_1, M_2, …, M_T} never appears in the product in (26). If all the variances that do appear for a given entry E{ũ_j(l)} are tiny, the update of that entry will be very slow, and the algorithm suffers slow convergence.

2.3.4. Case study 4: T > A and %(T, A) = 0

In this case the variance becomes σ_y²(l_j − j + 1) = M_{ξ_j(l)}, where ξ_j(l) is a positive integer taking, with equal probability, values in {d̃_j, d̃_j + A, …, d̃_j + T − A}; here 0 < d̃_j < A is related to d_j through d̃_j − d_j = z, where z = 0, 1, 2, …. Then we get

E{ũ_j(l + A)} = (1 − μ M_{ξ_j(l)}) E{ũ_j(l)} .  (27)

Iterating equation (27), we get

E{ũ_j(l + 2A)} = (1 − μ M_{q(ξ_j(l))}) E{ũ_j(l + A)} ,  (28)

where

q(ξ_j(l)) = { ξ_j(l) + A, if ξ_j(l) + A ≤ T;  ξ_j(l) + A − T, otherwise } .
Since T is greater than A and %(T, A) = 0, iterating equation (28) T/A times gives

E{ũ_j(l + T)} = (1 − μ M_{q…q(ξ_j(l))}) E{ũ_j(l + T − A)} = (1 − μ M_{q…q(ξ_j(l))}) ⋯ (1 − μ M_{ξ_j(l)}) E{ũ_j(l)} .  (29)

If all the variances in the set {M_{ξ_j(l)}, …, M_{q…q(ξ_j(l))}} have very tiny values, the update of the input-signal error E{ũ_j(l)} may be very slow, and the sequential PU-LMS may exhibit very slow convergence.

2.3.5. Case study 5: T > A, %(T, A) ≠ 0 and gcd(A, T) = 1

As in Case 2, the sequential PU-LMS is stable if the step size obeys (23).

2.3.6. Case study 6: T > A, %(T, A) ≠ 0 and gcd(A, T) > 1

As in Case 3, the sequential PU-LMS may face very slow convergence.

Note: from this analysis we learn that, for input signals with periodically time-varying variance (cf. (2), (3), (4) and (14)), the sequential partial-update LMS method avoids the slow-convergence condition only in Case 2 and Case 5, the two situations in which A and T are coprime integers, i.e. gcd(A, T) = 1. When A and T are not coprime, the sequential partial-update LMS algorithm may demonstrate very sluggish convergence, depending on how the repeating power levels are distributed.

3. Simulation results

In this simulation, a signal with one DOA arrives at the base station at an angle of 60 degrees. Simulations are conducted at threshold values of 0.1, 0.5 and 1. For each threshold point, the convergence is considered in terms of the number of samples used to reach the steady state. From the simulation results we can clearly see that the received signal converges at a mean square error of 0.0007. For α = 0.5 and 1, the steady state is reached faster than with the conventional LMS algorithm, and a delay period can be clearly observed for these threshold points.
This delay corresponds to the samples observed before the adaptive antenna is ready to adapt. The improvement in convergence rate depends purely on the number of taps adapted.

3.1. One white signal with three DOAs

In this module, the effects of multipath in antenna systems are studied under various threshold conditions. Three multipath components with directions of arrival of 60, 30 and -20 degrees are transmitted to the base station with different sampling periods. Hence, three signals arrive with time differences of t, t − 1 and t − 2, with amplitudes of 0.6, 0.75 and 1.0, respectively. Using the PU-AL algorithm, three different weight-update equations are used to process each multipath signal. The simulations are conducted at threshold values of 0.1, 0.2 and 1. Convergence tables for each multipath signal are given in terms of the number of samples needed to reach the steady state. The mean square error of the three multipath signals lies at approximately 0.006, 0.0014 and 0.00036, respectively; the received error signal is shown in Figure 6, and the beam patterns of the proposed PU-AL algorithm are shown in Figure 5. From these we can clearly see that the proposed algorithm has a better ability to steer the beams in different directions, with nulls placed at the locations of the interferences. The gain of each beam corresponds to the gain introduced by the respective multipath signal.

Figure 5. Beam pattern of one white signal with three DOAs using PU-AL for α = 0.1.

Figure 6. Received error signal of one white signal with three DOAs using PU-AL for α = 0.1.

3.2. Two white signals with one DOA each

Transmitting one signal with two multipath components has an effect similar to transmitting two white signals with one DOA each. In both cases, the two signals are uncorrelated with each other and separated by one sample period.
First, a signal with an amplitude of 0.5 and a second signal with an amplitude of 1.0 are considered. The convergence of each signal in terms of the number of samples is tabulated; from this we can see that smaller gain amplitudes lead to longer adaptation times for the taps and for the estimated signals. For α = 0.1, the algorithm converges faster than for 0.2 and 1; for those values a delay is created, so the response takes longer before the taps adapt. These threshold points adapt only a limited number of taps, which affects system performance; the simulation output is shown in Figure 7. The beam pattern is shown in Figure 8 with two beams corresponding to the DOAs at 60 and -25 degrees. This demonstrates a smart antenna system with both desired and interfering signals.

Beamforming as a sensing technique in cognitive radio networks

In this work, we apply the beamforming approach in cognitive radio as a spectrum-sensing technique. Beamforming may be used in two ways, centralized or distributed; a distributed strategy is used here. With beamforming, a cognitive radio can direct a specific beam onto a specific receiver while reducing radiation in surrounding directions, improving network performance. In the distributed method, each user has a separate antenna, and several users broadcast the signal together by manipulating the carriers at the transmitter. The interference experienced by the authorized users is lessened when this method is used. Employing this beamforming approach, cognitive radio can also expand the communication range, since the signal beam is steered vigorously in the appropriate direction. Further benefits of this beamforming technology include reduced delay spread, reduced multipath fading and reduced co-channel interference from other radio transmissions on the same frequency channel.
The diagram depicted in the illustration is a geometrical representation of a cognitive radio network at the intended receiver location, which includes licensed users. There are K cognitive users uniformly scattered over a disc with a radius of R and a centre of P. The position of cognitive radio user k is given in polar coordinates as (s_k, φ_k); similarly, the receiver is represented in the specified spherical coordinates. We assume that the cognitive radio nodes are uniformly dispersed across the disc, and a single antenna is provided for each node. Because the channel between the users and the receiver is line of sight, there is no shadowing. The principal users, also known as licensed users, are modelled such that the transmitters are in the far zone of the beam pattern, while the receivers are in the near zone. The simulation depicts the statistical distribution of the radiation pattern in the main lobe, which includes the direction in which radiation strength is greatest, and in the sidelobes, which are generally rays aimed in unwanted directions. These power levels are examined by running 10,000 trials to create a beam pattern. The radius of the disc, regularised by wavelength, is R/λ = 2; the azimuth angle is 0 degrees and the elevation angle is π/2. The cognitive users are drawn from a uniform distribution, with numbers such as 4, 7, 16, 100 and 256 being used. The loop signal-to-noise ratio (SNR) of the phase-locked-loop output variance is set to 2 dB, 3 dB and 10 dB, as shown in Figure 9. Two licensed (primary) users are presumed to be present at angles of 20 degrees and 30 degrees.

Figure 7. Received error signal of two white signals with one DOA using PU-AL for α = 0.1.

Figure 8. Beam pattern of two white signals with one DOA using PU-AL for α = 0.1.

Figure 9. Average power vs. direction of antenna.
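A minimal Monte Carlo sketch of the random-array beampattern described above: K single-antenna nodes are placed uniformly at random on a disc of normalised radius R/λ = 2 and steered towards azimuth 0, with a Gaussian phase error per node standing in for the imperfect phase-locked loop. The phase-error level and the angle grid are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def random_beampattern(K, R_over_lambda=2.0, phase_std=0.0, n_angles=361, rng=None):
    """Normalised far-field array factor of K randomly placed nodes,
    steered towards azimuth 0, with Gaussian phase error per node."""
    if rng is None:
        rng = np.random.default_rng()
    # Uniform placement over the disc (sqrt gives uniform area density).
    r = R_over_lambda * np.sqrt(rng.uniform(size=K))
    psi = rng.uniform(0.0, 2.0 * np.pi, size=K)
    x, y = r * np.cos(psi), r * np.sin(psi)
    phase_err = rng.normal(0.0, phase_std, size=K)
    angles = np.linspace(-np.pi, np.pi, n_angles)
    pattern = np.empty(n_angles)
    for i, th in enumerate(angles):
        # Residual geometric phase towards azimuth th after steering to 0.
        geom = 2.0 * np.pi * (x * (np.cos(th) - 1.0) + y * np.sin(th))
        pattern[i] = np.abs(np.sum(np.exp(1j * (geom + phase_err)))) / K
    return angles, pattern

angles, pattern = random_beampattern(K=100, phase_std=0.3,
                                     rng=np.random.default_rng(1))
```

Averaging `pattern` over many such trials approximates the average beampattern of Figure 9, and the empirical distribution of the instantaneous sidelobe power over trials yields the CCDF evaluated later.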
In the figure above, the beampattern of the phase-only distributed beamforming (PODB) approach does not have a perfect phase; in this situation, the number of users is 100. The average gain for a loop SNR of 10 dB is 0 dB at an angle of 0 degrees, whereas the gain is marginally lower for loop SNRs of 2 dB and 3 dB. Because of the incomplete phase, the main lobe is at its apex. The users are at 20-degree and 30-degree angles, respectively, and the sidelobe power is therefore approximately equal to -20 dB. The complementary cumulative distribution function (CCDF) of PODB with phase offset is shown for K values of 3, 8 and 15; the proportion of the beampattern equal to or exceeding a certain power level is shown in Figure 10. For the CCDF computation, we ran 10,000 simulations. For a loop SNR of 10 dB, the angle is set to 0 degrees; for 4, 7 and 16 users, the main lobe is at its highest. Finally, we employed the sequential partial update in this case, and the least-squares technique is used to repair errors caused by altered antenna-element placements; the error between the actual and expected responses is reduced with this approach.

4. Conclusion

This paper has analysed the spectrum-sensing problem using the beamforming technique, applying the sequential PU-AL algorithm to the input signal. The input signal is taken as a cyclostationary signal with white Gaussian noise, and its behaviour under the sequential PU-AL is analysed in several case studies, which also show how the sequential PU-AL tolerates convergence-related issues. Such techniques help to utilise the available spectrum efficiently, both now and in the future.
In future work, stricter bounds on the rate of convergence can be achieved by employing this approach, a mean update equation of PU-AL can be developed for stationary signals, and techniques for analysing the performance of the max PU-AL algorithm can also be investigated.

References
[1] S. Surekha, Md Zia Ur Rahman, Navarun Gupta, A low complex spectrum sensing technique for medical telemetry system, Journal of Scientific & Industrial Research, vol. 80, May 2021, pp. 449-456.
[2] J. Divya Lakshmi, Rangaiah L., Cognitive radio principles and spectrum sensing, Blue Eyes Intelligence Engineering & Sciences Publication, vol. 8, August 2019, pp. 2249-8958.
[3] Omi Sunuvar, A study of distributed beamforming in cognitive radio networks, 2013.
[4] Numa Joy, Anu Abraham Mathew, Beamforming for sensing based spectrum sharing in cognitive radio network, International Journal of Engineering Research & Technology (IJERT), 2015.
[5] Hyils Sharon Magdalene Antony, Thulasimani Lakshmanan, Secure beamforming in 5G-based cognitive radio network, Symmetry, 2019. DOI: 10.3390/sym11101260
[6] Kais Bouallegue, Matthieu Crussiere, Iyad Dayoub, On the impact of the covariance matrix size for spectrum sensing methods: beamforming versus eigenvalues, IEEE, 2019. DOI: 10.1109/ISCC47284.2019.8969741
[7] Aki Hakkarainen, Janis Werner, Nikhil Gulati, Damiano Patron, Doug Pfeil, Henna Paaso, Aarne Mammela, Kapil Dandekar, Mikko Valkama, Reconfigurable antenna based DoA estimation and localization in cognitive radios: low complexity algorithms and practical measurements, 2014.
[8] Yu Sing Xiao, Danny H. K. Tsang, Interference alignment beamforming and power allocation for cognitive MIMO-NOMA downlink networks, IEEE, 2019. DOI: 10.1109/WCNC.2019.8885714
[9] Md. Zia Ur Rahman, V. Ajay Kumar, G. V. S. Karthik, A low complex adaptive algorithm for antenna beam steering, IEEE, 2011.
DOI: 10.1109/ICSCCN.2011.6024567
[10] Janis Werner, Jun Wang, Aki Hakkarainen, Danijela Cabric, Mikko Valkama, Performance and Cramer-Rao bounds for DoA/RSS estimation and transmitter localization using sectorized antennas, IEEE Transactions on Vehicular Technology, vol. 65, May 2016. DOI: 10.1109/TVT.2015.2445317
[11] N. J. Bershad, E. Eweda, J. C. M. Bermudez, Stochastic analysis of the LMS and NLMS algorithms for cyclostationary white Gaussian inputs, IEEE Trans. Signal Process., vol. 62, no. 9, May 2014, pp. 2238-2249.
[12] M. Z. U. Rahman, S. Surekha, K. P. Satamraju, S. S. Mirza, A. Lay-Ekuakille, A collateral sensor data sharing framework for decentralized healthcare systems, IEEE Sensors Journal, November 2021. DOI: 10.1109/JSEN.2021.3125529
[13] S. Surekha, Md Zia Ur Rahman, Spectrum sensing for wireless medical telemetry systems using a bias compensated normalized adaptive algorithm, International Journal of Microwave and Optical Technology, vol. 16, no. 2, 2021, pp. 1-10.
[14] S. Surekha, A. Lay-Ekuakille, A. Pietrosanto, M. A. Ugwiri, Energy detection for spectrum sensing in medical telemetry networks using modified NLMS algorithm, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2020, pp. 1-5. DOI: 10.1109/I2MTC43012.2020.9129107
[15] Shafi Shahsavar Mirza, Nagesh Mantravadi, Sala Surekha, Md Zia Ur Rahman, Adaptive learning based beamforming for spectrum sensing in wireless communications, International Journal of Microwave and Optical Technology, vol. 16, no. 5, 2021.
[16] Armando Coccia, Federica Amitrano, Leandro Donisi, Giuseppe Cesarelli, Gaetano Pagano, Mario Cesarelli, Giovanni D'Addio, Design and validation of an e-textile-based wearable system for remote health monitoring, Acta IMEKO, vol. 10, no. 2, 2021, pp. 1-10. DOI: 10.21014/acta_imeko.v10i2.912
[17] Ayesha Tarannum, Zia Ur Rahman, L. Koteswara Rao, T.
Srinivasulu, Aimé Lay-Ekuakille, An efficient multi-modal biometric sensing and authentication framework for distributed applications, IEEE Sensors Journal, vol. 20, no. 24, 2020, pp. 15014-15025. DOI: 10.1109/JSEN.2020.3012536
[18] S. Y. Fathima, K. Murali Krishna, Shakira Bhanu, S. S. Mirza, Side lobe suppression in NC-OFDM systems using variable cancellation basis function, IEEE Access, vol. 5, no. 1, 2017, pp. 9415-9421. DOI: 10.1109/ACCESS.2017.2705351
[19] Imran Ahmed, Eulalia Balestrieri, Francesco Lamonaca, IoMT-based biomedical measurement systems for healthcare monitoring: a review, Acta IMEKO, vol. 10, no. 2, 2021, pp. 174-184. DOI: 10.21014/acta_imeko.v10i2.1080
[20] K. Murali Krishna, K. Krishna Reddy, M. Vasim Babu, S. S. Mirza, S. Y. Fathima, Ultra-wide band band-pass filters using plasmonic MIM waveguide based ring resonators, IEEE Photonics Technology Letters, vol. 30, no. 9, 2018, pp. 1715-1718. DOI: 10.48084/etasr.4194

Figure 10. CCDF vs. instantaneous power.

Vision-based reinforcement learning for lane-tracking control

Acta IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, pp. 7-14

András Kalapos 1, Csaba Gór 2, Róbert Moni 3, István Harmati 1

1 BME, Dept.
of Control Engineering and Information Technology, Budapest, Hungary
2 Continental ADAS AI, Budapest, Hungary
3 BME, Dept. of Telecommunications and Media Informatics, Budapest, Hungary

Section: Research paper

Keywords: artificial intelligence; machine learning; mobile robot; reinforcement learning; simulation-to-reality; transfer learning

Citation: András Kalapos, Csaba Gór, Róbert Moni, István Harmati, Vision-based reinforcement learning for lane-tracking control, Acta IMEKO, vol. 10, no. 3, article 4, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-04

Section Editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary

Received January 17, 2021; in final form September 22, 2021; published September 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: András Kalapos, e-mail: andras.kalapos.research@gmail.com

1. Introduction

Reinforcement learning has been used to solve many control and robotics tasks. However, only a handful of papers have applied this technique to end-to-end driving [1]-[7], and even fewer studies have focused on reinforcement-learning-based driving trained only in simulation and then applied to real-world problems. Generally, bridging the gap between simulation and the real world is an important transfer-learning problem related to reinforcement learning, and it is an unresolved task for researchers. Mnih et al. [1] proposed a method to train vehicle controller policies that predict discrete control actions based on a single image of a forward-facing camera. Jaritz et al. [2] used WRC6, a realistic racing simulator, to train a vision-based road-following policy.
They assessed the policy's generalisation capability by testing it on previously unseen tracks and on real driving videos in an open-loop configuration, but their work did not extend to an evaluation of real vehicles in closed-loop control. Kendall et al. [3] demonstrated real-world driving by training a lane-following policy exclusively on a real vehicle under the supervision of a safety driver. Shi et al. [4] presented research that involved training reinforcement learning agents in Duckietown, in a similar way to that presented here; however, the focus was mainly on presenting a method that explained the reasoning behind the trained agents rather than on the training methods. Also similar to the present study, Balaji et al. [5] presented a method for training a road-following policy in a simulator using reinforcement learning and tested the trained agent in the real world, yet their primary contribution is the DeepRacer platform rather than an in-depth analysis of the road-following policy. Almási et al. [7] also used reinforcement learning to solve lane following in the Duckietown environment, but their work differs from the present study in the use of an off-policy reinforcement learning algorithm (deep Q-networks (DQNs) [8]); in this study an on-policy algorithm (Proximal Policy Optimization [9]) is used, which achieves significantly better sample efficiency and shorter training times. Another important difference is that Almási et al. applied hand-crafted colour-threshold-based segmentation to the input images, whereas the method presented here takes the 'raw' images as inputs, which allows for a more robust real performance.

Abstract

The present study focused on vision-based end-to-end reinforcement learning in relation to vehicle control problems such as lane following and collision avoidance.
The controller policy presented in this paper is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, although its training has only been carried out in a simulation. This model, realised by a simple convolutional network, relies on images of a forward-facing monocular camera and generates continuous actions that directly control the vehicle. To train this policy, Proximal Policy Optimization was used, and to achieve the generalisation capability required for real performance, domain randomisation was used. A thorough analysis of the trained policy was conducted by measuring multiple performance metrics and comparing these to baselines that rely on other methods. To assess the quality of the simulation-to-reality transfer learning process and the performance of the controller in the real world, simple metrics were measured on a real track and compared with results from a matching simulation. Further analysis was carried out by visualising salient object maps.

This paper is an extended version of the authors' original contribution [10]. It includes the results of the 5th AI Driving Olympics [11] and aims to improve the description of the methods. In both works, vision-based end-to-end reinforcement learning relating to vehicle control problems is studied, and a solution is proposed that performs lane following in the real world, using continuous actions, without any real data provided by an expert (as in [3]). Also, validation of the trained policies in both the real and simulated domains is conducted. The training and evaluation code for this paper is available on GitHub 1.

2. Methods

In this study, a neural-network-based controller was trained that takes images from a forward-looking monocular camera and produces control signals to drive a vehicle in the right-hand lane of a two-way road.
The vehicle to be controlled was a small differential-wheeled mobile robot, a Duckiebot, which is part of the Duckietown ecosystem [11], a simple and accessible platform for research and education on mobile robotics and autonomous vehicles. The primary objective was to travel as far as possible within a given time without leaving the road. Lane departure was allowed but not preferred. Although the latest version of the Duckiebot is equipped with wheel encoders, for this method the vehicle was solely reliant on data from the robot's forward-facing monocular camera.

2.1. Reinforcement learning algorithm

In reinforcement learning, an agent interacts with the environment by taking action a_t, then the environment returns observation s_{t+1} and reward r_{t+1}. The agent computes the next action a_{t+1} based on s_{t+1}, and so on. The policy is the parametric controller of the agent, and it is tuned during the reinforcement learning training. Sequences of actions, observations and rewards (trajectories τ) are used to train the parameters of the policy to maximise the expected reward over a finite number of steps (agent-environment interactions). For vehicle control problems, the actions are the signals that control the vehicle, such as the steering and throttle, and the observations are the sensor data relating to the environment of the vehicle, such as the camera or lidar data, or higher-level environment models. In this research, the observations were images from the robot's forward-facing camera, and the actions were the velocity signals for the two wheels of the robot. Policy optimisation algorithms are on-policy reinforcement learning methods that optimise the parameters of the policy π_θ(a_t|s_t) based on the actions a_t and the rewards r_t received for them; θ denotes the trainable parameters of the policy. On-policy reinforcement learning algorithms optimise π_θ(a_t|s_t) based on trajectories in which the actions have been computed by π_θ(a_t|s_t).
In contrast, off-policy algorithms (such as DQNs [8]) compute actions based on an estimate of the action-value function of the environment, which they learn using data from a large number of (earlier) trajectories, making these algorithms less stable in some environments. In policy optimisation algorithms, the policy π_θ(a_t|s_t) is stochastic, and in the case of deep reinforcement learning, it is implemented by a neural network, which is updated using a gradient method. The policy is stochastic because, instead of computing the actions directly, the policy network predicts the parameters of a probability distribution (see μ and σ in Figure 1) that is sampled to acquire the predicted actions ã_t (here, 'predicted' refers to this action being predicted by the policy). In the present study, the Proximal Policy Optimization algorithm [9] was used to train the policy because of its stability, sample complexity and ability to take advantage of multiple parallel workers. Proximal Policy Optimization performs the weight updates using a special loss function to keep the new policy close to the old, thereby improving the stability of the training. Two loss functions were proposed by Schulman et al. [9]:

L^CLIP(θ) = Ê[min(ρ_t(θ) Â_t, clip(ρ_t(θ), 1 − ε, 1 + ε) Â_t)] ,  (1)

L^KLPEN(θ) = Ê[ρ_t(θ) Â_t − β KL[π_θold(·|s_t), π_θ(·|s_t)]] ,  (2)

where clip(·) and KL[·] refer to the clipping function and the Kullback-Leibler (KL) divergence, respectively, while Â is calculated as the generalised advantage estimate [12]. In these loss functions, ε is usually a constant in the [0.1, 0.3] range, while β is an adaptive parameter, and

ρ_t(θ) = π_θ(a_t|s_t) / π_θold(a_t|s_t) .  (3)

An open-source implementation of Proximal Policy Optimization from RLlib [13] was used, which performs the gradient updates based on the weighted sum of these loss functions.

1 https://github.com/kaland313/duckietown-rl (accessed 23 September 2021)
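The clipped surrogate objective (1) can be sketched in a few lines of NumPy (written as a loss, i.e. negated for minimisation). This is a minimal illustration only: the probability ratios and advantage estimates are passed in directly, and the value-function and KL-penalty terms that a full RLlib-style update combines with it are omitted.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Negated clipped surrogate objective of PPO, cf. eq. (1).

    ratio:     pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t) per sampled step
    advantage: generalised advantage estimates A_hat_t
    eps:       clipping parameter, typically in [0.1, 0.3]
    """
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # min(...) removes the incentive to move the ratio far outside
    # [1 - eps, 1 + eps]; the mean approximates the expectation E_hat.
    return -np.mean(np.minimum(unclipped, clipped))
```

For example, with a ratio of 2 and a positive advantage of 1, the objective is clipped at 1.2, so further increasing the ratio yields no additional gain.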
The pseudo code and additional details for the algorithm are provided in the appendix.

2.2. Policy architecture

The controller policy was realised by a shallow (4-layer) convolutional neural network. Both the policy and the value network used the architecture presented by Mnih et al. [1], with the only difference being the use of linear activation in the output of the policy network. No weights were shared between the policy and the value network. This policy is considered to be end-to-end because the only learning component is the neural network, which directly computes actions based on observations from the environment. Some pre- and post-processing was applied to the observations and actions, but these only performed very simple transformations (explained in the next paragraph and Section 2.3). The aim of these pre- and post-processing steps was to transform the observations s_t and actions a_t into representations that enabled faster convergence without losing any important features in the observations or restricting necessary actions.

Figure 1. Illustration of the policy architecture with the notations used. The agent is represented jointly by the 'policy network' and 'sampling action distribution' blocks; s_t: 'raw' observation, s̃_t: pre-processed observation, ã_t: predicted action, a_t: post-processed action.

The input of the policy network consisted of the last three observations (images) scaled, cropped and stacked (along the depth axis). The observations returned by the environment (s_t in Figure 1) were 640 × 480 (width, height) RGB images, the top third of which mainly showed the sky and was therefore cropped. The cropped images were then scaled down to 84 × 84 resolution (note the uneven scaling) and stacked along the depth axis, resulting in 84 × 84 × 9 input tensors (s̃_t in Figure 1).
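The cropping, scaling and stacking steps described above can be sketched as follows. Nearest-neighbour scaling is used here for simplicity; the paper does not specify the interpolation method, so that detail is an assumption.

```python
import numpy as np

def preprocess(frames):
    """Crop the sky, downscale to 84x84 and stack the last three
    RGB frames along the depth axis, yielding an 84x84x9 tensor."""
    processed = []
    for img in frames[-3:]:
        h, w, _ = img.shape                     # e.g. 480 x 640 x 3
        img = img[h // 3:]                      # drop the top third (sky)
        # Uneven nearest-neighbour scaling to 84x84.
        rows = np.linspace(0, img.shape[0] - 1, 84).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, 84).astype(int)
        processed.append(img[rows][:, cols])
    return np.concatenate(processed, axis=-1)   # stack along depth

obs_history = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
stacked = preprocess(obs_history)
```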
The last three images were stacked to provide the policy with information about the robot's speed and acceleration. Multiple action representations were experimented with (see Section 2.3). Based on these representations, the policy output ã_t was either an action vector of two elements or a scalar value that controlled the vehicle. The policy was stochastic, and the output of the neural network therefore produced the μ and log σ parameters of a multivariate diagonal normal distribution. During training, this distribution was sampled to acquire the actions ã_t, which improved the exploration of the action space. During evaluations, the sampling step was skipped by using the predicted mean μ as the policy output ã_t.

2.3. Action representations

The action-mapping step transformed the predicted actions ã_t, which could be implemented using many representations, to wheel velocities a_t = [ω_l, ω_r] (see Figure 1). The vehicle to be controlled was a differential-wheeled robot; the most basic action representation was therefore to directly compute the angular velocities of the two wheels as continuous values in the range ω_{l,r} ∈ [-1, 1] (where 1 and -1 corresponded to forward and backward rotation at full speed). However, this action space allowed actions that were not necessary for the manoeuvres examined in this paper. Moreover, as the reinforcement learning algorithm had to rule out unnecessary actions, exploration of the action space was potentially made more difficult, and the number of steps required to train an agent was therefore increased. Several methods can be used to constrain and simplify the action space, such as discretisation, clipping some actions or mapping to a lower-dimensional space. Most previous studies [1], [2], [5], [7] have used discrete action spaces, in which the neural network selects one from a set of hand-crafted actions (steering-throttle combinations), while Kendall et al.
[3] utilised continuous actions, as has been done in this study. In order to test the reinforcement learning algorithm's ability to address general tasks, multiple action mappings and simplifications of the action space were experimented with. These are described in the following paragraphs.

Wheel velocity: wheel velocities were a direct output of the policy; a_t = [ω_l, ω_r] = ã_t, therefore ω_{l,r} ∈ [-1, 1].

Wheel velocity, positive only: only positive wheel velocities were allowed, because only these were required to move forward. Values predicted outside the ω_{l,r} ∈ [0, 1] interval were clipped: a_t = [ω_l, ω_r] = clip(ã_t, 0, 1).

Wheel velocity, braking: wheel velocities were still restricted to the ω_{l,r} ∈ [0, 1] interval, but the predicted values were interpreted as the amount of braking from the maximum speed. The main differentiating factor from the 'positive only' option was the bias towards moving forward at full speed: a_t = [ω_l, ω_r] = clip(1 − ã_t, 0, 1).

Steering: a scalar value was predicted and continuously mapped to combinations of wheel velocities. The scalar value 0.0 corresponds to moving straight (at full speed), while -1.0 and 1.0 refer to turning left or right with one wheel completely stopped and the other going at full speed. Intermediate values are computed using linear interpolation between these values. The speed of the robot is always maximal for a particular steering value. The formula that implements this action mapping is a_t = [ω_l, ω_r] = clip([1 + ã_t, 1 − ã_t], 0, 1).

2.4. Reward shaping

The reward function is a fundamental element of every reinforcement learning problem, as it serves the important role of converting a task from a textual description to a mathematical optimisation problem. The primary objective for the agent is to travel as far as possible within a given time in the right-hand lane; therefore, two rewards that promote this behaviour were proposed.
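The action representations of Section 2.3 can be sketched as a single mapping function. The mode names used here are my own labels, not identifiers from the paper's code.

```python
import numpy as np

def map_action(a, mode="steering"):
    """Map the raw policy output to wheel velocities [w_l, w_r]."""
    a = np.asarray(a, dtype=float)
    if mode == "wheel_velocity":
        return np.clip(a, -1.0, 1.0)          # direct wheel velocities
    if mode == "positive_only":
        return np.clip(a, 0.0, 1.0)           # forward motion only
    if mode == "braking":
        return np.clip(1.0 - a, 0.0, 1.0)     # output = amount of braking
    if mode == "steering":
        s = float(a)                          # scalar: 0 straight, +/-1 turns
        return np.clip([1.0 + s, 1.0 - s], 0.0, 1.0)
    raise ValueError(f"unknown mode: {mode}")
```

For example, a steering value of 1.0 stops the right wheel while the left wheel runs at full speed, turning the robot right.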
distance travelled: the agent's reward was directly proportional to the distance it moved along the right-hand lane at each step. only longitudinal motion was counted, and only if the robot stayed in the right-hand lane.
orientation: the agent was rewarded if it was facing and moving in the desired orientation, which was determined based on its lateral position. in simple terms, it received the largest reward if it faced towards the centre of the right-hand lane (some example configurations are shown in figure 2 d). a term proportional to the angular velocity of the faster-moving wheel was also added to encourage fast motion. this reward was calculated as

r = λ_Ψ r_Ψ(Ψ, d) + λ_v r_v(ω_l, ω_r),

where r_Ψ(·) and r_v(·) are the orientation- and velocity-based components (explained below), while the λ_Ψ and λ_v constants scale these to [−1, 1]. Ψ and d are the orientation and lateral error from the desired trajectory, which is the centreline of the right-hand lane (see figure 2 a). the orientation-based term was calculated as

r_Ψ(Ψ, d) = λ(Ψ_err) = λ(Ψ − Ψ_des(d)),

where Ψ_des(d) is the desired orientation calculated using the lateral distance from the desired trajectory (see figure 2 b for an illustration of Ψ_des(d)). the λ function promotes errors with |Ψ_err| < φ, while an error larger than φ leads to a small negative reward (a plot of λ(x) is shown in figure 2 c):

λ(x) = { 1/2 + 1/2 cos(π x/φ)   if −1 ≤ x/φ ≤ 1
         ε (1 − |x/φ|)          otherwise ,        (4)

where the ε ∈ [10⁻², 10⁻¹] and φ = 50° hyperparameters were selected arbitrarily. the velocity-based component was calculated as r_v(ω_l, ω_r) = max(ω_l, ω_r) to reward equally high-speed motion in both straight and curved sections.

figure 2. explanation of the proposed orientation reward: (a) explains Ψ and d, (b) shows how the desired orientation depends on the lateral error, (c) shows the λ(x) function and (d) provides some examples of desired configurations.

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 10
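the λ(x) term of equation (4), the combined orientation reward and the collision term of equation (5) used later in section 2.6 can be sketched as follows. this is a minimal sketch: the exact ε value and the λ_Ψ, λ_v, λ_coll scale factors are assumptions, as the text does not fix them.

```python
import numpy as np

PHI = np.deg2rad(50.0)   # phi = 50 degrees
EPS = 0.05               # epsilon, assumed value within the stated range

def lam(x):
    """lambda(x) of eq. (4): raised cosine for |x/phi| <= 1,
    small negative-going linear term outside."""
    if abs(x / PHI) <= 1.0:
        return 0.5 + 0.5 * np.cos(np.pi * x / PHI)
    return EPS * (1.0 - abs(x / PHI))

def orientation_reward(psi, psi_des, omega_l, omega_r,
                       lam_psi=1.0, lam_v=1.0):
    """r = lam_psi * r_psi(psi, d) + lam_v * r_v(omega_l, omega_r);
    the scale factors here are assumed, not the paper's values."""
    return lam_psi * lam(psi - psi_des) + lam_v * max(omega_l, omega_r)

def collision_term(delta_p_coll, lam_coll=1.0):
    """eq. (5): reward a decrease in the safety-circle penalty,
    zero otherwise; lam_coll is an assumed scale."""
    return -lam_coll * delta_p_coll if delta_p_coll < 0 else 0.0
```

note how lam(0) = 1 (facing the desired orientation exactly), lam(±φ) = 0, and errors beyond φ produce a small negative value, as the text describes.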
in the curved sections, only the outer wheel was able to rotate at maximal speed, while on a straight road, both wheels were able to do so.
2.5. simulation-to-reality transfer
to train the agents, an open-source simulation of the duckietown environment was used [14]. this simulation models certain physical properties of the real environment accurately (dimensions of the robot, camera parameters, dynamic properties, etc.), but several other effects (textures, objects at the side of the road) and the light simulation are less realistic (e.g. compared with modern computer games). these inaccuracies create a gap between simulation and reality that makes it challenging for any reinforcement learning agent to be trained only in simulation yet operate in reality. to bridge the simulation-to-reality gap and to achieve the generalisation capability required for real performance, domain randomisation was used. this involves training the policy in many different variants of the simulated environment by varying the lighting conditions, object textures, camera and vehicle dynamics parameters and road structures (see figure 3 for examples of domain-randomised observations). in addition to the 'built-in' randomisation options of gym-duckietown, this study used a diverse set of training maps to further improve the agent's generalisation capability.
2.6. collision avoidance
collision avoidance with other vehicles greatly increases the complexity of the lane-following task. these problems can be solved in different ways, for example, by overtaking or by following at a safe distance. however, the sensing capability of the vehicle and the complexity of the policy determine the solution it can learn. images from the forward-facing camera of a duckiebot only have a 160° horizontal field of view; therefore, the policy controlling the vehicle has no information about objects moving next to or behind the robot.
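the per-episode randomisation described in section 2.5 can be sketched as follows. the parameter names and ranges here are purely illustrative assumptions, not gym-duckietown's actual randomisation options.

```python
import random

def sample_randomised_config(rng=None):
    """draw one randomised simulator configuration per training episode,
    varying lighting, textures, camera, dynamics and the map.
    all names and ranges below are illustrative only."""
    rng = rng or random.Random()
    return {
        "light_intensity": rng.uniform(0.5, 1.5),
        "road_texture": rng.choice(["asphalt_a", "asphalt_b", "painted"]),
        "camera_height_m": rng.uniform(0.09, 0.13),
        "wheel_gain": rng.uniform(0.9, 1.1),   # vehicle dynamics variation
        "map_name": rng.choice(["loop", "zigzag", "town"]),
    }
```

a new configuration would be sampled at every environment reset, so the policy never overfits to one fixed appearance of the road.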
for simplicity, in this study the same convolutional network was used for collision avoidance as for lane following, which does not feature a long short-term memory cell or any other sequence-modelling component (in contrast to [2]). for this reason, it is unable to plan long manoeuvres, such as overtaking, which also requires side vision to check when it is safe to return to the right-hand lane. the policy was therefore trained in situations where there was a slow vehicle ahead, and the agent had to learn to perform lane following at full speed until it had caught up with the vehicle in front, at which point it had to reduce its speed and maintain a safe distance to avoid collision. in these experiments, the wheel velocity braking action representation was used as the policy's output because this allowed the agent to slow down or even stop the vehicle if necessary (unlike the steering action). both the orientation and the distance travelled reward functions were used to train agents for collision avoidance. the former was supplemented with a term that promoted collision avoidance, while the latter was used unchanged. the simulation provided a p_coll penalty if the safety circles around the two vehicles overlapped. the r_coll reward component that promoted collision avoidance was calculated using this penalty: if the penalty decreased because the robot was able to increase its distance from an obstacle, the reward term was proportional to the change in penalty; otherwise, it was 0:

r_coll = { −λ_coll · Δp_coll   if Δp_coll < 0
           0                   otherwise .        (5)

this term was added to the orientation reward, and it aimed to encourage the policy to increase the distance from the vehicle ahead if it got too close. collisions themselves were only penalised by terminating the episode without giving any negative reward.
2.7.
evaluation
to assess the performance of the reinforcement learning-based controller, multiple performance metrics were measured in the simulation and compared against two baselines, one using a classical control theory approach and the other being human driving.
survival time (t_survive) in s: the time until the robot left the road, or the duration of an evaluation episode.
distance travelled in ego-lane (s_ego) in m: the distance travelled along the right-hand lane within a fixed time period. only longitudinal motion was counted; movement tangential to the lane therefore contributed the most to this metric.
distance travelled both lanes (s_both) in m: both the distance travelled along the right-hand lane within a fixed time period and sections where the agent moved into the oncoming lane counted towards this metric.
lateral deviation (d_d) in m·s: the lateral deviation from the lane's centreline integrated over the time of an episode.
orientation deviation (d_Ψ) in rad·s: the deviation of the robot's orientation from the tangent of the lane centreline integrated over the time of an episode.
time outside ego-lane (t_out) in s: the time spent outside the ego-lane.

figure 3. examples of domain randomised observations.
figure 4. a) test track used for simulated reinforcement learning and baseline evaluations; b) and c) real and simulated test track used for the evaluation of the simulation-to-reality transfer.

even though duckietown is intended to be a standardised platform, it is still under development, and the official evaluation methods and baselines have not been widely adopted in the research community. the ai driving olympics provided a great opportunity to benchmark the solution presented here against others; however, the methods behind those solutions have not yet been published in the scientific literature.
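the integrated metrics defined above can be accumulated from per-step samples as in the following sketch; the tuple layout and names are our assumptions, not the paper's evaluation code.

```python
def episode_metrics(trajectory, dt):
    """accumulate episode-level metrics from per-step samples.
    `trajectory` is a list of (longitudinal_step_m, in_ego_lane,
    lateral_error_m, orientation_error_rad) tuples; dt is the step
    time in seconds. names are illustrative."""
    s_ego = sum(ds for ds, in_lane, _, _ in trajectory if in_lane)
    s_both = sum(ds for ds, _, _, _ in trajectory)
    t_out = sum(dt for _, in_lane, _, _ in trajectory if not in_lane)
    d_d = sum(abs(d) for _, _, d, _ in trajectory) * dt      # m*s
    d_psi = sum(abs(p) for _, _, _, p in trajectory) * dt    # rad*s
    return {"s_ego": s_ego, "s_both": s_both, "t_out": t_out,
            "d_d": d_d, "d_psi": d_psi}
```

the deviation metrics are simple rectangle-rule integrals of the per-step errors over the episode, which matches their m·s and rad·s units.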
for this reason, this method was analysed primarily by comparing it with baselines that could be evaluated under the same conditions. the classical control theory baseline relies on information about the robot's location and orientation relative to the centreline of the lane, which is available in the simulator. this baseline works by controlling the robot to orient itself towards a point ahead on its desired path and calculating the wheel velocities with a proportional-derivative (pd) controller based on the orientation error of the robot. the parameters of this controller were hand-tuned to achieve a sufficiently good performance, but more advanced control schemes could offer better results. in many reinforcement learning problems (e.g. the atari 2600 games [15]), the agents are compared against human baselines. motivated by this benchmark, a method to measure how well humans are able to control duckiebots was proposed and then used as a baseline. the values shown in table 1 were recorded by controlling the simulated robot using the arrow keys on a keyboard (i.e. via discrete actions), while the observations seen by the human driver were very similar to those of the reinforcement learning agent.
2.8. methods to improve results at the ai driving olympics
the agents in this study were trained to solve autonomous driving problems in the duckietown environment, not to maximise scores at the ai driving olympics. therefore, some hyperparameters and methods had to be modified to match the competition's evaluation procedures. it was found that training on lower frame rates (0.1 s step time) improved the scores even though the evaluation simulation was stepped more frequently. in addition, implementing the same motion blur simulation that was applied in the official evaluation improved the results significantly compared with agents that were trained on non-blurred observations.
3. results
3.1.
simulation
even though multiple papers have demonstrated the feasibility of training vision-based driving policies using reinforcement learning, adapting to a new environment still poses many challenges. due to the high dimensionality of the image-like observations, many algorithms converge slowly and are very sensitive to hyperparameter selection. the method presented in this study, using proximal policy optimization, is able to converge to good lane-following policies within 1 million timesteps thanks to the good sample efficiency of the algorithm. this training takes 2–2.5 hours on five cores of an intel xeon e5-2698 v4 2.2 ghz cpu and an nvidia tesla v100 gpu if 16 parallel environments are used.
3.1.1. comparison against baselines
table 1 compares the reinforcement learning agent from this study with the baselines. the performance of the trained policy is comparable both to the classical control theory baseline and to how well humans are able to control the robot in the simulation. most metrics indicate similarly good or equal performance, even though the pd-controller baseline relies on high-level data such as position and orientation error rather than images.
3.1.2. comparison against other solutions at the ai driving olympics
table 2 shows the top-ranking solutions of the simulated lane-following (validation) challenge at the 5th ai driving olympics. all top-performing solutions were able to control the robot reliably in the simulation for the duration of an episode (60 s); however, the distances travelled differed. the method in this study is able to control the robot reliably at the highest speed; it therefore achieves the highest distance-travelled value while also showing good lateral deviation and rarely departing from the ego-lane.
3.1.3.
action representation and reward shaping
experiments with different action representations show that constrained and preferably biased action spaces allow convergence to good policies (wheel velocity braking and steering). however, more general action spaces (wheel velocity and its clipped version) only converge to inferior policies within the same number of steps (see figure 5). the proposed orientation-based reward function also leads to a final performance as good as that of the 'trivially' rewarding one based on the distance travelled; however, the latter seems to perform better with the more general action representations (because policies using these action spaces and trained with the orientation reward do not learn to move fast).

table 1. comparison of the reinforcement learning agent with two baselines in simulation.
mean metrics over 5 episodes         | rl agent | pd baseline | human baseline
survival time in s ↑                 | 15       | 15          | 15
distance travelled both lanes in m ↑ | 7.1      | 7.6         | 7.0
distance travelled ego-lane in m ↑   | 7.0      | 7.6         | 6.7
lateral deviation in m·s ↓           | 0.5      | 0.5         | 0.9
orientation deviation in rad·s ↓     | 1.5      | 1.1         | 2.8

table 2. comparing the method in this study with other solutions at the ai driving olympics.
author               | t_survive in s ↑ | s_ego in m ↑ | d_d in m·s ↓ | t_out in s ↓
a. kalapos [10],[16] | 60 | 30.38 | 2.65 | 0
a. béres [16]        | 60 | 29.14 | 4.10 | 1.4
m. tim [16]          | 60 | 28.52 | 3.45 | 0.4
a. nikolskaya        | 60 | 24.80 | 3.15 | 1.6
r. moni [16]         | 60 | 18.60 | 1.78 | 0
z. lorincz [16]      | 60 | 18.6  | 3.5  | 0.8
m. sazanovich        | 60 | 16.12 | 4.35 | 3.4
r. jean              | 60 | 15.5  | 3.28 | 0
y. belousov          | 60 | 14.88 | 5.41 | 9.8
m. teng              | 60 | 11.78 | 2.92 | 0
p. almási [7],[16]   | 60 | 11.16 | 1.32 | 0

figure 5. learning curves for the reinforcement learning agent with different action representations and reward functions: a) orientation reward, b) distance travelled reward.

3.2.
real-world driving
to measure the quality of the transfer learning process and the performance of the controller in the real world, performance metrics that were easily measurable both in reality and in simulation were selected. these were recorded in both domains in matching experiments and compared against each other. the geometry of the tracks and the dimensions and speed of the robot were simulated accurately in order to evaluate the robustness of the policy against all the inaccurately simulated effects and those that were not simulated at all. using this method, policies trained in the domain-randomised simulation were tested, as well as those trained only in the 'nominal' simulation. this allows for the evaluation of the transfer learning process and highlights the effects of training with domain randomisation. the real and simulated versions of the test track used in this analysis are shown in figure 4 b and figure 4 c. during the real evaluations, it was generally found that under ideal circumstances (no distracting objects at the side of the road and good lighting conditions), the policy trained in the 'nominal' simulation was able to drive reasonably well. however, training with domain randomisation led to a more reliable and robust performance in the real world. table 3 shows the quantitative results of this evaluation. the two policies seemed to perform equally well when compared based on their performance in the simulation. however, the metrics recorded in the real environment show that the policy trained with domain randomisation performed almost as well as in the simulation, while the other policy performed noticeably worse. the lower distance travelled ego-lane metric of the domain-randomised policy can be explained by the fact that the vehicle tended to drift into the left-hand lane at sharp turns but returned to the right-hand lane afterwards, while the nominal policy usually made more serious mistakes.
note that in these experiments the orientation-based reward and the steering action representation were used, as this configuration learns to control the robot in the smallest number of steps and the shortest training time. an online video demonstrates the performance of the trained agent from this study: https://youtu.be/kz7ywemg1is (accessed 23 september 2021). an important limitation of the method presented in this study is that during the real evaluations, the speed of the robot had to be decreased to half of the simulated value. the policy evaluations were executed on a pc connected to the robot via wireless lan; the observations and the actions were therefore transmitted between the two devices at every step. this introduced delays in the order of 10–100 ms, making the control loop unstable when the robot was moving at full speed. at half speed, however, stable operation was achieved. it was noticed that models trained with motion blur and longer step times for the ai driving olympics performed more reliably in the real world, regardless of whether they used domain randomisation. however, further analysis and retraining of these agents multiple times are needed to firmly support these observations.
3.3. collision avoidance
figure 6 demonstrates the learned collision avoidance behaviour. in the first few seconds of the simulation, the robot controlled by the reinforcement learning policy accelerates to full speed. then, as it approaches the slower, non-learning robot, it reduces its speed and maintains an approximately constant distance from the vehicle ahead (see figure 6). the simple, fully convolutional network of this policy cannot be expected to learn, plan and execute a more complex behaviour, such as overtaking. table 4 shows that training with both reward functions leads to functional lane-following behaviour.
however, the non-maximal survival time values indicate that neither of the policies is capable of performing lane following reliably in the presence of an obstacle robot for 60 s. all metrics in table 4 indicate that the modified orientation reward leads to better lane-following metrics than the simpler distance travelled reward. it should be noted that these metrics were mainly selected to evaluate the lane-following capabilities of an agent; a more in-depth analysis of collision avoidance with a vehicle in front calls for more specific metrics.

table 3. evaluation results of the reinforcement learning agent in the real environment and in matching simulations.
eval. domain | mean metrics over 6 episodes         | domain rand. policy | nominal policy
real         | survival time in s ↑                 | 54                  | 45
real         | distance travelled both lanes in m ↑ | 15.6                | 11.4
real         | distance travelled ego-lane in m ↑   | 7.0                 | 8.4
sim.         | survival time in s ↑                 | 60                  | 60
sim.         | distance travelled in m ↑            | 15.5                | 15.0

figure 6. sequence of robot positions in a collision avoidance experiment with a policy trained using the modified orientation reward: a) t = 0 s, b) t = 6 s, c) t = 8 s, d) t = 24 s, e) approximate distance between the vehicles (initial positions, catching up, following the vehicle ahead). after t = 6 s, the controlled robot follows the vehicle in front at a short but safe distance until the end of the episode (the approximate distance is calculated as the distance between the centre points of the robots minus the length of a robot).

table 4. evaluation results of policies trained for collision avoidance with different reward functions.
mean metrics over 15 episodes        | distance travelled | orientation + r_coll
survival time (max. 60) in s ↑       | 46                 | 52
distance travelled both lanes in m ↑ | 22.5               | 22.9
distance travelled ego-lane in m ↑   | 22.7               | 23.1
lateral deviation in m·s ↓           | 1.9                | 1.6
orientation deviation in rad·s ↓     | 6.3                | 5.8
an online video demonstrates the performance of the agent trained in this study: https://youtu.be/8gqauvty1po (accessed 23 september 2021).
3.4. salient object maps
visualising which parts of the input image contribute the most to a particular output (action) is important because it provides some explanation of the network's inner workings. figure 7 shows salient object maps in different scenarios generated using the method proposed in [17]. all of these images indicate high activations on lane markings, which is expected.
4. conclusions
this work presented a solution to the problem of complex, vision-based lane following in the duckietown environment, using reinforcement learning to train an end-to-end steering policy capable of simulation-to-real transfer learning. it was found that the training is sensitive to the problem formulation, such as the representation of actions. this study has demonstrated that, by using domain randomisation, a moderately detailed and accurate simulation is sufficient for training end-to-end lane-following agents that operate in a real environment. the performance of these agents was evaluated by comparing basic metrics in matching real and simulated scenarios. agents were also successfully trained to perform collision avoidance in addition to lane following. finally, salient object visualisation was used to give an illustrative explanation of the inner workings of the policies in both the real and simulated domains.
acknowledgement
we would like to show our gratitude to professor bálint gyires-tóth (bme, dept. of telecommunications and media informatics) for his assistance and comments on the progress of our research. the research reported in this paper and carried out at the budapest university of technology and economics was supported by continental automotive hungary ltd.
and the 'tkp2020, institutional excellence programme' of the national research development and innovation office in the field of artificial intelligence (bme ie-mi-sc tkp2020).
references
[1] v. mnih, a. p. badia, m. mirza, a. graves, t. lillicrap, t. harley, d. silver, k. kavukcuoglu, asynchronous methods for deep reinforcement learning, proc. of the international conference on machine learning, new york, united states, 19–24 june 2016, pp. 1928-1937.
[2] m. jaritz, r. de charette, m. toromanoff, e. perot, f. nashashibi, end-to-end race driving with deep reinforcement learning, proc. of the ieee international conference on robotics and automation (icra), brisbane, australia, 21–25 may 2018, pp. 2070-2075.
[3] a. kendall, j. hawke, d. janz, p. mazur, d. reda, j. allen, v. lam, a. bewley, a. shah, learning to drive in a day, proc. of the international conference on robotics and automation (icra), montreal, canada, 20–24 may 2019, pp. 8248-8254.
[4] w. shi, s. song, z. wang, g. huang, self-supervised discovering of causal features: towards interpretable reinforcement learning, 2020. online [accessed 3 august 2020] https://arxiv.org/abs/2003.07069
[5] b. balaji, s. mallya, s. genc, s. gupta, l. dirac, v. khare, g. roy, t. sun, y. tao, b. townsend, e. calleja, s. muralidhara, d. karuppasamy, deepracer: educational autonomous racing platform for experimentation with sim2real reinforcement learning, 2019. online [accessed 13 april 2020] https://arxiv.org/abs/1911.01562
[6] m. szemenyei, p. reizinger, attention-based curiosity in multi-agent reinforcement learning environments, proc. of the international conference on control, artificial intelligence, robotics & optimization (iccairo), majorca island, spain, 3–5 may 2019, pp. 176-181.
[7] p. almási, r. moni, b. gyires-tóth, robust reinforcement learning-based autonomous driving agent for simulation and real world, proc.
of the international joint conference on neural networks (ijcnn), glasgow, united kingdom, 19–24 july 2020, pp. 1-8.
[8] v. mnih, k. kavukcuoglu, d. silver, a. graves, i. antonoglou, d. wierstra, m. riedmiller, playing atari with deep reinforcement learning, 2013. online [accessed 13 april 2020] https://arxiv.org/abs/1312.5602
[9] j. schulman, f. wolski, p. dhariwal, a. radford, o. klimov, proximal policy optimization algorithms, 2017. online [accessed 2 december 2019] https://arxiv.org/abs/1707.06347
[10] a. kalapos, c. gór, r. moni, i. harmati, sim-to-real reinforcement learning applied to end-to-end vehicle control, proc. of the 23rd international symposium on measurement and control in robotics (ismcr), budapest, hungary, 15–17 october 2020, pp. 1-6.
[11] j. zilly, j. tani, b. considine, b. mehta, a. f. daniele, m. diaz, g. bernasconi, c. ruch, j. hakenberg, f. golemo, a. k. bowser, m. r. walter, r. hristov, s. mallya, e. frazzoli, a. censi, l. paull, the ai driving olympics at neurips, 2018. online [accessed 13 april 2020] https://arxiv.org/abs/1903.02503
[12] j. schulman, p. moritz, s. levine, m. jordan, p. abbeel, high-dimensional continuous control using generalized advantage estimation, proc. of the international conference on learning representations (iclr), san juan, puerto rico, 2–4 may 2016, 14 pp. online [accessed 23 september 2021] http://arxiv.org/abs/1506.02438
[13] e. liang, r. liaw, r. nishihara, p. moritz, r. fox, k. goldberg, j. gonzalez, m. jordan, i. stoica, rllib: abstractions for distributed reinforcement learning, proc. of the international conference on machine learning, stockholm, sweden, 10–15 july 2018, pp. 3053-3062.
[14] m. chevalier-boisvert, f. golemo, y. cao, b. mehta, l. paull, duckietown environments for openai gym, 2018. online [accessed 15 january 2021] https://github.com/duckietown/gym-duckietown
[15] m. g. bellemare, y. naddaf, j. veness, m.
bowling, the arcade learning environment: an evaluation platform for general agents, j. artif. intell. res. 47 (2013), pp. 253-279. doi: 10.1613/jair.3912
[16] r. moni, a. kalapos, a. béres, m. tim, p. almási, z. lőrincz, pia project achievements at aido5, 2020. online [accessed 15 january 2021] https://medium.com/@smartlabai/pia-project-achievements-at-aido5-a441a24484ef
[17] m. bojarski, p. yeres, a. choromanska, k. choromanski, b. firner, l. d. jackel, u. muller, explaining how a deep neural network trained with end-to-end learning steers a car, 2017. online [accessed 15 april 2020] https://arxiv.org/abs/1704.07911

figure 7. salient objects highlighted on observations in different domains and tasks: a) simulated, b) real, c) collision avoidance. blue regions represent high activations throughout the network.

appendix
proximal policy optimization
the pseudo code for proximal policy optimization (ppo) is as follows:

algorithm ppo, actor-critic style (based on [9])
input: initial policy with θ_0 parameters and initial value function estimator with φ_0 parameters
for iteration = 1, 2, ... do
    for actor = 1, 2, ..., N do
        run π_θold in the environment for T timesteps to collect the τ_i trajectory
        compute advantage estimates Â_1, ..., Â_T based on the current value function
    end
    optimise L^clip(θ) + L^klpen(θ) w.r.t.
θ, for K epochs and minibatch size M ≤ N·T
    fit the value function estimate by regression on the mean-squared error
    θ_old ← θ, φ_old ← φ
end

the β adaptive parameter mentioned in section 2.1 is updated according to the following rule:

β ← { β/2,    if d < d_targ/1.5
      β × 2,  if d > d_targ × 1.5 ,        (6)

where d_targ is a hyperparameter and d is the kl-divergence of the old and the updated policy:

d = Ê[ KL[ π_θold(·|s_t), π_θ(·|s_t) ] ] .        (7)

the Â_t generalised advantage estimate [12] is calculated as

Â_t = Σ_{l=0}^{∞} (γλ)^l δ^V_{t+l} ,        (8)

δ^V_t = r_t + γ V(s_{t+1}) − V(s_t) ,        (9)

where V(s_t) and V(s_{t+1}) are the value function estimates calculated by the value network at steps t and t+1, γ is the discount factor and λ is a hyperparameter of the generalised advantage estimate. to assure reproducibility, the hyperparameters of the algorithm are provided in table 5.

table 5. hyperparameters of the algorithm. the description of some parameters is from the rllib documentation [13].
description                                        | value
number of parallel environments                    | N = 16
learning rate                                      | α = 5 × 10⁻⁵
discount factor for return calculation             | γ = 0.99
λ parameter for the generalised advantage estimate | λ = 0.95
ppo clip parameter                                 | ε = 0.2
sample batch size                                  | T = 256
sgd minibatch size                                 | M = 128
number of epochs executed in every iteration       | K = 30
target kl-divergence for the calculation of β      | d_targ = 0.01

nilm techniques applied to a real-time monitoring system of the electricity consumption
acta imeko issn: 2221-870x june 2021, volume 10, number 2, 139 – 146
b. cannas 1, s. carcangiu 1, d. carta 2, a. fanni 1, c. muscas 1, g. sias 1, b. canetto 3, l. fresi 3, p.
porcu 3
1 diee, university of cagliari, via marengo 2, 09123 cagliari, italy
2 iek-10, forschungszentrum jülich, 52425 jülich, germany
3 bithiatech technologies, elmas (ca), italy
section: research paper
keywords: non-intrusive load monitoring (nilm); electricity consumption; energy disaggregation; features extraction; smart metering; blued dataset; proprietary dataset
citation: barbara cannas, sara carcangiu, daniele carta, alessandra fanni, carlo muscas, giuliana sias, beatrice canetto, luca fresi, paolo porcu, nilm techniques applied to a real-time monitoring system of the electricity consumption, acta imeko, vol. 10, no. 2, article 20, june 2021, identifier: imeko-acta-10 (2021)-02-20
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 21, 2021; in final form march 16, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was partially funded by the sardinian region under the project "rt-nilm (real-time non-intrusive load monitoring for intelligent management of electrical loads)", call for basic research projects, year 2017, fsc 2014-2020.
corresponding author: daniele carta, e-mail: d.carta@fz-juelich.de
1. introduction
knowing how electric appliances are used and how different appliances contribute to the aggregate total consumption could help users to better understand how their energy is consumed, possibly leading to a more efficient management of their loads. non-intrusive load monitoring (nilm) is an area of computational sustainability research, and it presently identifies a set of techniques that can disaggregate the power usage into the individual appliances that are functioning and identify the electricity consumption of each of them [1].
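the disaggregation task described above can be illustrated with a toy example: the meter only observes the aggregate signal, which is the sum of the per-appliance profiles that nilm tries to recover. all profile values below are invented for illustration.

```python
# toy illustration of the nilm problem: the aggregate power (what the
# single smart meter sees) is the sum of the individual appliance
# profiles (what the disaggregation algorithm tries to recover).
# all values are invented, in watts, one sample per time step.
appliances = {
    "fridge": [100, 100, 100, 100],
    "kettle": [0, 2000, 2000, 0],
    "tv":     [0, 0, 150, 150],
}

aggregate = [sum(profile[t] for profile in appliances.values())
             for t in range(4)]
```

given only `aggregate`, a nilm algorithm has to infer which appliances produced each portion of the total at every step.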
in residential buildings, where it is impractical to monitor single appliances, or even groups of appliances, through specific meters, nilm techniques are a low-cost and non-invasive option for monitoring electric consumption, as they rely on a single monitoring point where a smart meter is installed. the literature reports several papers that have applied different methods over the years to solve this problem. a first classification of nilm techniques distinguishes supervised and unsupervised methods [2]. supervised methods require labelled consumption data to train the model. unsupervised methods aim to extract the information needed to operate directly from the measured aggregate consumption profiles. owing to their better performance, most approaches are based on supervised algorithms, and they require appliance data for model training to estimate the number, type and power of the loads by analysing the aggregate consumption signal. solutions based on machine learning range from classic supervised machine-learning algorithms (e.g. support vector machines and artificial neural networks) to supervised statistical learning methods (e.g. k-nearest neighbours and bayes classifiers) and unsupervised methods (e.g. hidden markov models and their variants). a review of these methods is reported in [2].
abstract
non-intrusive load monitoring (nilm) allows providing appliance-level electricity consumption information and decomposing the overall power consumption by using simple hardware (one sensor) with suitable software. this paper presents a low-frequency nilm-based monitoring system suitable for a typical house. the proposed solution is a hybrid event-detection approach including an event-detection algorithm for devices with a finite number of states and an auxiliary algorithm for appliances characterised by complex patterns.
the system was developed using data collected at households in italy and tested also with data from blued, a widely used dataset of real-world power consumption data. results show that the proposed approach works well in detecting and classifying what appliance is working and its consumption in complex household load dataset. mailto:d.carta@fz-juelich.de acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 140 employed and seem promising for the most challenging problem posed by the consumption profiles of multi-state appliances [3]-[6]. the frequency of energy data monitoring drives the use of the analysis techniques and the specific tools. although higher the frequency of energy data monitoring frequency higher could be the accuracy of the nilm disaggregation algorithms, commercial smart meters for homes supply low frequency sampling (less than 60 hz) of the electric power quantities. in this field, the majority of the research efforts focused on event-based techniques that identify significant variations in the power signals as switching events of appliances. these events must be classified as a state transition related to a specific appliance. for this purpose, electric signal characteristics extracted from measurements in proximity of the events (i.e., signatures) are used as distinctive features, and then labelled with classification procedures. in this paper, a monitoring system is proposed using a smart meter that supplies low frequency (1 hz) samples of power consumption. the system is able to disaggregate and keep track of the power consumption of the devices existing in a typical italian house. the households should follow the proposed procedure to customize the system for their homes, choosing the appliances of interest and collecting the corresponding measurements. the disaggregation algorithm is an improvement of the one proposed in [7]-[8]. the load disaggregation is performed applying a hybrid approach to power data, i.e. 
event-based techniques and pattern recognition techniques for large household appliances. moreover, the procedure of the first set-up phase has been simplified: the event detection for type-i and type-ii appliances, which was previously performed separately for each device, is now carried out on the aggregate power signals, strongly reducing the user effort. in order to validate the method, the procedure is also applied to the building-level fully-labeled (blued) dataset for electricity disaggregation [9]. blued is a publicly available dataset consisting of real voltage and current measurements for a single-family residence in the united states, sampled at 12 khz for a whole week. blued has been used as a benchmark dataset in several recent papers on nilm [5], [10], [11], [12]. however, few contributions have been proposed with low-frequency samples. among them, [13] and [14] will be considered in this work for comparison. 2. the home energy management system the home energy management system (hems) is used to provide a comfortable life for consumers as well as to save energy. this can be obtained by using a home's smart meter to monitor the electricity consumption of the devices existing in a household, applying nilm techniques. the hems should identify the appliances that are active at any time, disaggregate the energy and estimate the consumption of each single device. in this work, the chosen low-frequency smart meter is a sensor belonging to a pre-commercial prototype [7] which provides steady-state signatures, such as real and reactive power time series. it is important that the hems set-up is easy to understand and interact with. it should have features, like auto-configuration, which make the set-up process very easy.
the non-intrusive technique, resorting to only one smart meter for the household, requires some effort in the first set-up phase, but it can sometimes be the only possible choice, because installing a specific monitoring infrastructure, including new devices and cables, may result in a high implementation cost for the user. in this work the appliances are categorized into four types based on their operation states [15]: 1) type-i, appliances with on/off states (binary states of power); 2) type-ii, finite state machines with a finite number of operation states (multiple states of power); 3) type-iii, continuously varying devices with variable power consumption, such as a washing machine or a light dimmer (infinite middle states of power); 4) type-iv, appliances that stay on for days or weeks with a constant power consumption level. in the case of type-ii appliances, data for all the transitions between possible states should be acquired and labelled (manual set-up). in the case of type-iii devices, data from each cycle are characterized by complex patterns. in this paper, a technique to deal with such devices is proposed, considering the washing machine as a case study. figure 1 and figure 2 show the filtered apparent power consumption typical of different models of washing machines with different washing cycles. as can be noted, the power consumption fluctuates while heating/washing or rinsing/drying the laundry. thus, the events do not correspond to simple steps in the power consumption, but characteristic complex patterns appear in the power time series. heating water accounts for about 90% of the energy needed to run a washer, and in both figures the washing machine has electrical components which turn on and off in sequence. the proposed technique facilitates the training process by pre-populating the training data set with signatures of some type-iii devices showing typical patterns (automatic set-up). 3.
the procedure in the following, the implemented procedure is presented, showing how the variations of apparent (s), real (p) and reactive (q) power, the oscillation frequencies of the signals and the varying patterns of type-iii appliances are used in the training phase for the creation of the signatures database, whereas during a recall phase the appliances are recognized inside an aggregated signal.
figure 1. apparent power for a washing machine (hot water).
figure 2. apparent power for a washing machine (cold water).
in order to better understand the whole procedure, figure 3 shows its flow chart with the main phases of data collection and extraction of the signatures. 3.1. data collection the automatic procedure to collect the aggregate data of electric consumption consists of the following two steps, one for type-i and type-ii appliances, one for type-iii appliances. in the first step, type-iii appliances (such as the washing machine or dishwasher) are switched off, whereas type-i and type-ii appliances can regularly work. the user must switch type-i and type-ii devices on and off several times for each event of interest. this increases data robustness to noise, e.g., small fluctuations in appliance consumption, electronics constantly on, and appliances turning on/off with consumption levels too small to be detected. multi-state devices, such as kitchen ovens, stoves, clothes dryers, etc., go through several states where heating elements and fans can be switched in various combinations. thus, to collect all the events, the user must test all the possible transitions from one state to another. for instance, for an electric stove with three power levels (states), the user should trigger the twelve possible switching events among the off/on states. in the second step, large type-iii appliances work alone and their consumption data are recorded.
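the stove example can be checked with a quick count: a type-ii appliance with n operation states (including "off") has n·(n − 1) ordered transitions to record. a minimal sketch:

```python
def n_transitions(n_states: int) -> int:
    # every ordered pair of distinct states is one switching event
    return n_states * (n_states - 1)

# an electric stove with three power levels plus "off" has 4 states,
# hence twelve switching events to trigger during the manual set-up
print(n_transitions(4))  # -> 12
```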
note that, for this work, data from type-i and type-ii appliances working alone have also been collected, in order to combine them and create several synthetic aggregate data series to test the system. 3.2. extraction of the signatures once the aggregate data have been collected, the database of the signatures of each switching event is built. these data are normalized with respect to the constant voltage of 230 v, in such a way that the voltage drops due to the load insertion do not influence the result. moreover, a causal filtering is applied to the apparent, real, and reactive power signals. in this way, possible spikes and outliers can be discarded or smoothed. with such a low sampling frequency, fast transients should be removed, since they may sometimes be recorded and at other times missed during the acquisition. figure 4 shows the changes in electricity consumption due to the switching on and off of a fan, before and after the filtering. for type-i and type-ii devices, an edge detector finds the switching events in the apparent power data when the absolute value of the difference δs between two consecutive values is larger than 20 va. the sign of δs identifies the start-up or the shutdown of the appliance. then, the real power (δp) and reactive power (δq) variations at each edge must be determined as a candidate signature of the individual load. within the time interval between two consecutive events the power consumption level is almost constant. in order to find a candidate signature of the appliance-switching events, the difference between the mean values of the real and reactive power measurements before and after each edge is evaluated. among all the candidate signatures, the k-medoids clustering method [16] is applied to partition the set of switching events into a set of clusters, whose number k depends on the possible states of the single appliances and must be set by the user.
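a minimal python sketch of the edge detector and of the candidate-signature extraction described above (the paper's implementation is in matlab; the 20 va threshold is from the text, while the averaging window length `w` is an assumption, not specified by the authors):

```python
import numpy as np

def detect_edges(s, threshold=20.0):
    """indices where |Δs| between consecutive 1 hz apparent-power
    samples exceeds the threshold; the sign of Δs distinguishes
    start-up (+1) from shutdown (-1)."""
    ds = np.diff(s)
    idx = np.flatnonzero(np.abs(ds) > threshold)
    return [(int(i) + 1, int(np.sign(ds[i]))) for i in idx]

def candidate_signature(p, q, edge, w=5):
    """Δp and Δq as differences of the mean real and reactive power
    over w samples before and after the edge (w is assumed)."""
    before = slice(max(edge - w, 0), edge)
    after = slice(edge, edge + w)
    return (p[after].mean() - p[before].mean(),
            q[after].mean() - q[before].mean())

# toy trace: a 100 va load switching on at sample 5
s = np.array([10.0] * 5 + [110.0] * 5)
print(detect_edges(s))  # -> [(5, 1)]
```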
this clustering method is robust to noise and outliers, and it chooses data points as centers of the clusters. at the end of the clustering procedure, these centers will form the signatures associated with each transition.
figure 3. flow chart of the procedure.
figure 4. original and filtered fan power consumption.
note that, as appliances with small power consumption are not interesting from the point of view of energy savings, and are hard to distinguish, loads with p < 20 w are discarded and loads with 20 w < p < 50 w are associated with a single “low consumption” cluster. for the same reason, switching events lasting less than 5 s are not taken into account. note that the threshold of 50 w is implemented by default, but it can be modified by the user in case only consumptions greater than a predetermined power are of interest. at the end of the training phase, all the collected data are given as input to the monitoring system; in case of errors in the classification, new common clusters are created for those devices characterized by similar real and reactive power consumption. for large type-iii appliances, an ad hoc procedure has been implemented. for the washing machine, the start and the end of the cycle and the motor-spin events are detected.
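the signature clustering relies on matlab's kmedoids [16]; a naive pam-style python sketch of the same idea follows (exhaustive medoid search, adequate only for the small signature sets of a single household; the toy δp-δq points are invented):

```python
import numpy as np
from itertools import combinations

def k_medoids(points, k):
    """pick the k data points (medoids) minimising the total distance
    of every point to its nearest medoid; unlike k-means, the cluster
    centers are actual observed signatures, which makes the method
    robust to noise and outliers."""
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    best = min(combinations(range(len(pts)), k),
               key=lambda m: dist[:, list(m)].min(axis=1).sum())
    labels = dist[:, list(best)].argmin(axis=1)
    return pts[list(best)], labels

# toy δp-δq candidate signatures: repeated events of two appliances
events = [(500, 10), (505, 12), (498, 9), (1900, 0), (1895, 2)]
medoids, labels = k_medoids(events, k=2)
print(sorted(labels.tolist()))  # -> [0, 0, 0, 1, 1]
```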
to this aim, the peak values of the real power oscillations that identify the heating and washing phases are identified. in order to avoid peaks due to noise or other events not characterizing this device, only peaks of maximum relevance are selected, i.e., those that drop at least 30 w on either side before the signal attains a higher value. a statistical analysis of the time distance of such peaks shows that the typical distance between peaks is close to 20 s. thus, the oscillations in the active power signal show a frequency greater than or equal to 0.05 hz. moreover, in order to avoid spurious detections, the motor-spin pattern is isolated. the motor-spin pattern shown in figure 5 is identified in the individual appliance signals, then extracted and included in the database of the system. 3.3. the recall phase during the recall phase, first of all, a check is carried out to verify whether a washing machine is running: as described in section 3.2, the switching on (off) of the washing machine is identified when the oscillations, characterized by a frequency greater than or equal to 0.05 hz, start (end). when no washing machine cycle is detected, the following procedure is applied. when an edge in the aggregated signal is detected, the corresponding point in the δp-δq plane is evaluated. then, a nearest neighbour search in the δp-δq plane is performed, and the event is classified and associated with the appliance event with the nearest signature vector. moreover, a check on the sign of δq is considered as further information, in addition to the cluster center distance, to identify the proper cluster and increase the discrimination capabilities. see as an example the signatures of the hairdryer and stove in figure 6, where the signatures are very close, but the former, unlike the latter, is characterized by zero reactive power.
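the recall-phase association can be sketched as follows (illustrative python; the signatures and event values are invented, and the sign check on δq is the extra discrimination described above):

```python
import numpy as np

def classify_event(dp, dq, signatures, labels):
    """nearest-neighbour search in the Δp-Δq plane; candidate
    signatures whose reactive-power sign disagrees with the measured
    Δq are skipped, which separates loads with very close signatures,
    such as the hairdryer (Δq ≈ 0) and the stove."""
    best_label, best_dist = "unidentified", np.inf
    for (sp, sq), label in zip(signatures, labels):
        if dq * sq < 0:              # opposite Δq signs: wrong cluster
            continue
        d = np.hypot(dp - sp, dq - sq)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

sigs = [(900.0, 0.0), (950.0, 120.0)]    # invented signatures
names = ["hairdryer", "stove"]
print(classify_event(920.0, 110.0, sigs, names))  # -> stove
```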
if no association is performed, the event is labelled as unidentified. the detection of events of type-i and type-ii appliances switched on and off during a washing machine cycle becomes quite challenging at such a low frequency, and many false switchings could be triggered by directly applying the described procedure. thus, if the washing machine is running in its heating phase, the lower envelope of q is extracted. since the washing-machine q lower envelope during the heating phases is equal to zero, the presence of intervals of non-null constant values indicates the switching of an appliance. in this case a threshold of 100 var is considered. when an edge is detected in a flat-top interval of the q lower envelope, the nearest neighbour search is then applied in the δp-δq plane in order to associate the event with the appliance with the nearest signature vector. this procedure allows the detection to be improved in the case of reactive loads switched on and off during the washing machine heating phases. finally, using a similarity search algorithm, it is possible to identify, in the aggregate signal, the operating phase that best matches the reference associated with the motor-spin functioning. this procedure could be applied to other devices with characteristic patterns, e.g. microwave ovens, to distinguish their operating conditions. 4. case study in this section the experiments carried out to verify the validity of the proposed methodology are reported. the algorithms are implemented in matlab. performance has been evaluated both on a custom dataset collected by the authors in some italian houses and on a public dataset, blued [9], which has already been extensively analysed in the literature. 4.1. custom dataset in order to create our dataset, several domestic consumption data series have been acquired by installing an energy meter between the investigated appliances and the domestic network.
the implemented acquisition system of the electricity consumption data consists of: an eastron sdm220 single-phase energy meter [17], for residential and industrial applications at a rated voltage of 230 v (range 176 v to 276 v) and current of 10 a (range 0.5 a to 100 a), whose accuracy requirements are reported in table 1; a pc on which the measurement software and the load disaggregation algorithm are implemented; a modbus/rs485 serial interface, including a usb-rs485 serial port converter adapter cable, allowing the remote communication between the energy meter and the pc.
figure 5. motor-spin pattern.
figure 6. custom dataset: appliance signatures in the δp-δq plane. the overlapping signatures of hairdryer and stove are highlighted with black rectangles [7].
the dataset has been created with an acquisition frequency equal to 1 hz, which is the maximum value allowed by the meter used. the dataset consists of three parts: individual loads, aggregate loads and synthetic aggregate loads. the data were acquired in single and aggregate manner during the actual operation of the devices, or were generated by simulating conditions corresponding to actual user behaviour. the electricity consumption of the individual loads reported in table 2 was acquired by connecting the meter directly to the device plugs. the aggregate dataset was obtained by acquiring multiple appliances simultaneously via a multi-socket. in order to increase the amount of data corresponding to aggregate consumption, a synthetic aggregate dataset was obtained by combining the consumption data of the individual appliances, summing the measurements of p and q and averaging the values of v. 4.2. blued dataset this dataset was built in 2011 by monitoring a whole house located in pennsylvania (us) for 8 days. in the us there are three electricity feed lines for ordinary houses: two fire wires and one neutral line.
the two fire wires have a voltage amplitude of 120 v and are named phase a and phase b. usually, small 120 v-rated appliances are connected between one fire wire and the neutral, while larger 240 v-rated appliances, such as heaters and air conditioners, are connected between the two fire wires. in this work only phase a data have been used. the appliances connected to this phase are reported in table 3. the blued dataset contains high-frequency (12 khz) aggregated data of raw current and voltage. during the creation of the dataset, every single switching on/off of any appliance was recorded and called an event. in particular, all the changes in the state of power consumption higher than 30 w have been considered. every appliance event in the house was then labelled and time-stamped, providing the event labels database necessary for the evaluation of the proposed procedure. in total, 904 events have been registered in the considered phase a. in this work, to take into account the technical specifications of the pre-commercial smart meter used in the experiments discussed in section 4.1, the power signals evaluated from the raw data are down-sampled to 1 hz. then, the events identified in blued but unknown, and those with a duration less than or equal to 5 s, have been discarded, obtaining a final database of 662 ground-truth events. 5. results figures 6 and 7 show the signatures of the custom and blued databases in the δp-δq plane, obtained by applying the “extraction of the signatures” phase of the procedure described in section 3.2. in this section the performance of the recall phase (event detection and appliance identification) over the two datasets is presented. 5.1. performance measures in binary classification problems (such as the event detection) there are only two classes, called positive (on or off event) and negative (non-event).
when a positive sample is incorrectly classified as negative, it is called a false negative (fn); when a negative sample is incorrectly classified as positive, it is called a false positive (fp); when a positive sample is correctly classified as positive, it is called a true positive (tp). the precision (pr) represents what proportion of the predicted positives is truly positive. it is the ratio between the number of correctly classified positives and the total number of samples predicted as positive: pr = tp / (tp + fp). (1) the recall (re) represents what proportion of the actual positives is correctly classified. it is the ratio between the number of correctly classified positives and the total number of positives in the dataset: re = tp / (tp + fn). (2) the f1-score combines precision and recall into a single measure.
table 1. accuracy requirements of eastron sdm220 [17] (parameter: accuracy).
voltage: 0.5% of range maximum
current: 0.5% of nominal
power (active, reactive, apparent): 1% of range maximum
energy (active): class 1 iec 62053-21, class b en 50470-3
energy (reactive): 1% of range maximum
table 2. list of appliances in the custom dataset (appliance: average power consumption in w, type).
fridge: 180, ii
kettle: 1900, i
lamp: 40, i
notebook: 60, i
stereo: 30, i
toaster: 500, i
tv: 40, i
electric oven: 2000, ii
hairdryer: 300-900, ii
fan: 30-40, ii
induction cooker: 400-2500, ii
microwave oven: 1000-1200, ii
stove: 900-1800, ii
water heater: 600-1200, ii
washing machine: 130-1700, iii
table 3. list of appliances in the blued dataset (phase a) (appliance: average power consumption in w, type).
kitchen aid chopper: 1500, ii
fridge: 120, ii
air compressor: 1130, i
hair dryer: 1600, ii
backyard lights: 60, i
washroom light: 110, i
bathroom upstairs lights: 65, i
bedroom lights: 190, i
figure 7. blued dataset (phase a): appliance signatures in the δp-δq plane. the magnification of the part enclosed by the black rectangle shows in more detail the signatures with |δp| < 250 w.
mathematically, it is the harmonic mean of precision and recall: f1-score = 2 · pr · re / (pr + re). (3) in a multiclass classification problem (such as the appliance identification) there are no positive or negative classes, but tp, fp, fn and the other performance measures can be evaluated for each individual class (each appliance event). summing up the single-class measures, the total tp, the total fp and the total fn of the model can be obtained; then the global metrics of precision, recall and f1-score can be evaluated. note that in this case all the global metrics become equal, i.e. pr = re = f1-score. 5.2. performance with custom dataset the recall phase has been tested on aggregate signals composed of type-i and type-ii appliances, with and without the washing machine. using the pattern shown in figure 5, the operating phases of the motor spin are identified even considering washing machines of different brands and with different washing programs. figure 8 shows the power demand of a household over a 14-hour period, between 08:00 and 22:00. as can be observed, the aggregate power demand is generated by the fridge, which shows a periodic power consumption behaviour, and other appliances. the on and off events are shown as markers at the levels of +δp and -δp respectively, whereas the motor-spin functioning is indicated with red segments. the results are very satisfactory, especially regarding the identification of the activation status of the appliances that consume more energy, such as the washing machine. as an example, in figure 9 the event detection during a washing machine cycle is shown. flat-top intervals of the q lower envelope, denoted by the green segments in figure 9, identify the switching on/off of the fridge. in order to show the effectiveness of the improvements proposed in this paper, table 4 reports the performance of the edge detector.
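equations (1)-(3) can be sketched directly; as a numerical check, with the experimental custom-dataset counts of table 5 (tp = 129, fp = 6, fn = 6) all three metrics coincide at 0.96, since pr = re (and hence f1 = pr) whenever fp = fn, as happens with micro-averaged multiclass totals when every event receives a label:

```python
def precision_recall_f1(tp, fp, fn):
    pr = tp / (tp + fp)           # eq. (1)
    re = tp / (tp + fn)           # eq. (2)
    f1 = 2 * pr * re / (pr + re)  # eq. (3), harmonic mean
    return pr, re, f1

# experimental custom-dataset row of table 5: tp = 129, fp = 6, fn = 6
pr, re, f1 = precision_recall_f1(129, 6, 6)
print(round(pr, 2), round(re, 2), round(f1, 2))  # -> 0.96 0.96 0.96
```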
in table 5 the event classifier performance is compared with that reported in [7]. as can be noted, although the performance index obtained in [7] for the synthetically generated aggregate signals was already very high, a slight improvement has been achieved. a more significant improvement in the performance index can be observed on the experimental data after the introduction of the described changes. the pie charts in figure 10 compare the estimated decomposition results with the ground truth of energy consumption. as can be observed, the proposed method is capable of disaggregating the energy consumption of the appliances with good accuracy. 5.3. performance with blued dataset in the blued dataset the aggregate signals are composed of type-i and type-ii appliances. the event detector applied to the blued time series identifies 647 events characterized by δs > 20 va and lasting more than 5 s. among them, 9 events refer to appliances characterized by an active power consumption of less than 30 w. since these events are not labeled in blued, they have been discarded.
figure 8. household power demand over a 14-hour period.
figure 9. detection of on/off events for the fridge during a washing machine cycle.
table 4. performance metrics for the edge detector tested on the synthetic and experimental custom dataset (test set: # ground-truth events, tp, fp, fn, pr, re, f1-score).
synthetic data: 140, 139, 1, 1, 0.99, 0.99, 0.99
experimental data: 137, 135, 2, 2, 0.99, 0.99, 0.99
table 5. performance metrics for appliance classification tested on the synthetic and experimental custom dataset (test set: # detected events, tp, fp, fn, pr, re, f1-score).
synthetic data [7]: 139, 133, 6, 6, 0.96, 0.96, 0.96
synthetic data: 139, 133, 3, 3, 0.97, 0.97, 0.97
experimental data [7]: 135, 121, 14, 14, 0.90, 0.90, 0.90
experimental data: 135, 129, 6, 6, 0.96, 0.96, 0.96
table 6 and table 7 report the performance obtained by the algorithm for the event detection and the event classification, respectively. all the performance indexes are very high, confirming the suitability of the approach to detect sudden state changes. table 8 reports the confusion matrix showing the per-class accuracy (in percent). in the confusion matrix, all the different lights connected to phase a have been grouped into a single class. we can observe that the only classification errors are for the fridge class, which is confused with that of the lights and vice versa. the literature reports few contributions on nilm algorithms applied to blued data at the same frequency of 1 hz [13], [14], [18]. in [13] a panel of different machine learning algorithms is applied to appliance classification for a subset of the blued dataset (air compressor, basement, computer, garage door, iron, kitchen light, laptop, lcd monitor, monitor, refrigerator, tv). both precision and recall are equal to 79%. in [14], clustering in the δp-δq plane is solved through a hierarchical approach executed with some manual supervision. the paper reports for phase-a blued data an f1-score for the event detection and for the appliance classification of about 92% and 88%, respectively. in [18] an event detection algorithm is used to identify the time instant when a sudden increase of active power occurs, indicating a possible turn-on event, and a convolutional neural network (cnn) classifier is applied for event classification. results reported for blued are limited to three appliances (washing machine, fridge, microwave). table 9 reports the calculated recall (also called true positive rate, tpr) and accuracy metrics for event detection and classification, respectively, for [13], [14] and [18], and also other solutions applied to higher-frequency data [19]-[21].
figure 10. energy disaggregation results.
table 6. performance metrics for the edge detector tested on the blued dataset.
(test set: # ground-truth events, tp, fp, fn, pr, re, f1-score)
experimental data: 662, 638, 9, 13, 0.99, 0.96, 0.97
table 7. performance metrics for appliance classification tested on the blued dataset (test set: # detected events, tp, fp, fn, pr, re, f1-score).
experimental data: 638, 625, 13, 13, 0.98, 0.98, 0.98
table 8. confusion matrix showing per-class accuracy (in percent) for the appliances connected to phase a of blued; rows: actual device, columns: classified device; diagonal entries: correct classifications (tp), off-diagonal entries: false positives.
(columns: fridge, air compressor, lights, kitchen aid chopper, hair dryer)
fridge: 0.99, 0, 0.01, 0, 0
air compressor: 0, 1, 0, 0, 0
lights: 0.05, 0, 0.95, 0, 0
kitchen aid chopper: 0, 0, 0, 1, 0
hair dryer: 0, 0, 0, 0, 1
table 9. comparison of event detection and classification performance (reference: method, sampling rate, recall (tpr), accuracy).
proposed: -, 1 hz, 0.96, 0.98
[13]: active machine learning strategy, 1 hz, 0.79, 0.92
[14]: hierarchical approach, 1 hz, 0.89, 0.88
[18]: first and second differences + cnn, 1 hz, 0.94, 0.98
[19]: finite-precision analysis, 4 khz, 0.91, -
[20]: extremely randomized trees, 12 khz, 0.94, -
[21]: hybrid approach, 60 hz, 0.94, -
it can be seen that the proposed algorithm achieves good results while being simple and computationally efficient. in fact, despite the low frequency of the data analysed, the approach shows competitive performance even when compared to other, more complex methodologies applied to high-sampling-rate signals, from 60 hz to 12 khz [19]-[21]. 6. conclusions in this paper, a monitoring system has been proposed that is able to disaggregate and keep track of the power consumption of the devices existing in a typical house by analysing low-frequency aggregate data. by applying a hybrid approach to power data, i.e.
event-based techniques and pattern recognition techniques for large household appliances, the load disaggregation is performed with good performance, even when complex type-iii devices, such as the washing machine, are working. finally, using the more populated blued dataset, we showed that the proposed procedure is able to achieve high performance in two key tasks in energy disaggregation: identifying events from non-events, and identifying which appliance is associated with a specified event. references [1] s. s. hosseini, k. agbossou, s. kelouwani, a. cardenas, nonintrusive load monitoring through home energy management systems: a comprehensive review, renewable and sustainable energy reviews, vol. 79, 2017, pp. 1266–1274. doi: 10.1016/j.rser.2017.05.096 [2] a. ruano, a. hernandez, j. ureña, m. ruano, j. garcia, nilm techniques for intelligent home energy management and ambient assisted living: a review, energies, vol. 12, june 2019, p. 2203. doi: 10.3390/en12112203 [3] w. kong, z. y. dong, b. wang, j. zhao and j. huang, a practical solution for non-intrusive type ii load monitoring based on deep learning and post-processing, ieee trans. smart grid, vol. 11, no. 1, jan. 2020, pp. 148-160. doi: 10.1109/tsg.2019.2918330 [4] a. harell, s. makonin, i. v. bajić, wavenilm: a causal neural network for power disaggregation from the complex power signal, proc. of icassp 2019 ieee international conference on acoustics, speech and signal processing, brighton, uk, 12-17 may 2019, pp. 8335-8339. doi: 10.1109/icassp.2019.8682543 [5] q. wu, f. wang, concatenate convolutional neural networks for non-intrusive load monitoring across complex background, energies, vol. 12, no. 8, pp. 1572, apr. 2019. doi: 10.3390/en12081572 [6] p. davies, j. dennis, j. hansom, w. martin, a. stankevicius, l. ward, deep neural networks for appliance transient classification, proc. 
of icassp 2019 ieee international conference on acoustics, speech and signal processing, brighton, uk, 12-17 may 2019, pp. 8320-8324. doi: 10.1109/icassp.2019.8682658 [7] b. cannas, b. canetto, s. carcangiu, a. fanni, l. fresi, m. marceddu, c. muscas, p. porcu, g. sias, non-intrusive loads monitoring techniques for house energy management, proc. of the 1st int. conf. on energy transition in the mediterranean area (synergy med), cagliari, italy, 28-30 may 2019, pp. 1-6. doi: 10.1109/synergy-med.2019.8764104 [8] b. cannas, s. carcangiu, d. carta, a. fanni, c. muscas, g. sias, b. canetto, l. fresi, p. porcu, real-time monitoring system of the electricity consumption in a household using nilm techniques, proc. of the 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, virtual, italy, 14–16 september 2020, pp. 90-95. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-18.pdf [9] k. anderson, a. ocneanu, d. benitez, d. carlson, a. rowe, m. berges, blued: a fully labeled public dataset for event-based non-intrusive load monitoring research, proc. of the 2nd kdd workshop on data mining applications in sustainability (sustkdd), acm, beijing, china, 2012, pp. 1–5. [10] h. liu, q. zou, z. zhang, energy disaggregation of appliances consumptions using ham approach, ieee access 2019, 7, pp. 185977–185990. doi: 10.1109/access.2019.2960465 [11] p. ricardo, p.r.z. taveira, c.h.v.d. moraes, g. lambert-torres, non-intrusive identification of loads by random forest and fireworks optimization, ieee access 2020, 8, pp. 75060–75072. doi: 10.1109/access.2020.2988366 [12] b. cannas, s. carcangiu, d. carta, a. fanni, c. muscas, selection of features based on electric power quantities for non-intrusive load monitoring, applied sciences, 2021, 11(2):533. doi: 10.3390/app11020533 [13] f. rossier, ph. lang, j.
hennebert, near real-time appliance recognition using low frequency monitoring and active learning methods, energy procedia 122 (2017), pp. 691–696. doi: 10.1016/j.egypro.2017.07.371 [14] t. khandelwal, k. rajwanshi, p. bharadwaj, s. srinivasa garani, r. sundaresan, exploiting appliance state constraints to improve appliance state detection, proc. of e-energy ’17, shatin, hong kong, 16-19 may 2017, pp. 111–120. doi: 10.1145/3077839.3077859 [15] g. w. hart, nonintrusive appliance load monitoring, ieee proc. 1992, 80, pp. 1870–1891. doi: 10.1109/5.192069 [16] mathworks: k-medoids clustering. online [accessed 18 june 2021] https://it.mathworks.com/help/stats/kmedoids.html [17] eastrongroup: energy meters. online [accessed 18 june 2021] http://www.eastrongroup.com [18] c. athanasiadis, d. doukas, t. papadopoulos, a. chrysopoulos, a scalable real-time non-intrusive load monitoring system for the estimation of household appliance power consumption, energies, vol. 14, feb. 2021, no. 3. pp. 767. doi: 10.3390/en14030767 [19] r. nieto, l. de diego-otón, á. hernández, j. ureña, finite precision analysis for an fpga-based nilm event-detector, proc. of the 5th international workshop on non-intrusive load monitoring, online, 18 november 2020, pp. 30-33. doi: 10.1145/3427771.3427849 [20] a. k. jain, s. s. ahmed, p. sundaramoorthy, r. thiruvengadam, v. vijayaraghavan, current peak based device classification in nilm on a low-cost embedded platform using extra-trees, proc. of 2017 ieee mit undergraduate research technology conference (urtc), cambridge, ma, 3-5 november 2017, 4 pp. doi: 10.1109/urtc.2017.8284200 [21] m. lu, z. li, a hybrid event detection approach for nonintrusive load monitoring, ieee transactions on smart grid, vol. 11, no. 1, jan. 2020, pp. 528-540. 
doi: 10.1109/tsg.2019.2924862 https://doi.org/10.1016/j.rser.2017.05.096 https://doi.org/10.3390/en12112203 https://doi.org/10.1109/tsg.2019.2918330 https://doi.org/10.1109/icassp.2019.8682543 https://doi.org/10.3390/en12081572 https://doi.org/10.1109/icassp.2019.8682658 https://doi.org/10.1109/synergy-med.2019.8764104 https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-18.pdf https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-18.pdf https://doi.org/10.1109/access.2019.2960465 https://doi.org/10.1109/access.2020.2988366 https://doi.org/10.3390/app11020533 https://doi.org/10.1016/j.egypro.2017.07.371 http://doi.org/10.1145/3077839.3077859 https://doi.org/10.1109/5.192069 https://it.mathworks.com/help/stats/kmedoids.html http://www.eastrongroup.com/ https://doi.org/10.3390/en14030767 https://doi.org/10.1145/3427771.3427849 https://doi.org/10.1109/urtc.2017.8284200 https://doi.org/10.1109/tsg.2019.2924862 the importance of sound velocity determination for bathymetric survey acta imeko issn: 2221-870x december 2021, volume 10, number 4, 46 53 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 46 the importance of sound velocity determination for bathymetric survey pier paolo amoroso1, claudio parente2 1 international phd programme “environment, resources and sustainable development”, department of science and technology, parthenope university of naples, centro direzionale, isola c4, (80143) naples, italy 2 department of science and technology, parthenope university of naples, centro direzionale, isola c4, (80143) naples, italy section: research paper keywords: bathymetric survey; single beam; multi beam; sound velocity in water; depth measurements citation: pier paolo amoroso, claudio parente, the importance of sound velocity determination for bathymetric survey, acta imeko, vol. 10, no. 
4, article 10, december 2021, identifier: imeko-acta-10 (2021)-04-10
section editor: silvio del pizzo, university of naples 'parthenope', italy
received may 1, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by parthenope university of naples, italy.
corresponding author: claudio parente, e-mail: claudio.parente@uniparthenope.it

abstract
bathymetric surveys are carried out whenever there is a need to know the exact morphological trend of the seabed. for a correct operation of the echo sounder, which uses the principle of acoustic waves to scan the bottom and determine the depth, it is important to accurately determine the sound velocity in water, as it varies according to specific parameters (density, temperature, and pressure). in this work, we analyse the role of sound velocity determination in bathymetric surveys and its impact on the accuracy of depth measurement. the experiments are conducted on a data set provided by "istituto idrografico della marina militare italiana" (iim), the official hydrographic office for italy, and acquired in the ligurian sea. in our case, the formulas of chen & millero (unesco), medwin, and mackenzie were applied. introducing errors in the chemical-physical parameters of the water column (temperature, pressure, salinity, depth) to simulate inaccurate measurements produces considerable impacts on sound velocity determination and, consequently, a decrease in the accuracy of the depth values. the results underline the need for precise probes and accurate procedures to obtain reliable depth data.

1. introduction
as reported in huet (2009), "hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their evolution, for the primary purpose of safety of navigation and all other marine activities, including economic development, security and defence, scientific research, and environmental protection" [1]. according to the international hydrographic organization (iho) [2], a hydrographic survey can be defined as the survey of an area of water; in modern use, however, it includes many other objectives, such as measurements of tides and currents and the determination of the physical and chemical properties of water. the main objective is to obtain the essential data for the production of nautical charts, with particular interest in the characteristics that may influence navigation [3]. in addition, hydrographic surveys aim to acquire the information necessary for marine navigation products and for the management, engineering, and science of coastal areas. bathymetric surveys belong to the family of hydrographic surveys and are carried out whenever there is a need to know precisely the morphological trend of the seabed [4], [5]. they are therefore preliminary to the realization of maritime and river works and, in the case of existing works, they are indispensable for the continuous verification of water heads and dredging volumes [6]. generally, hydrographic surveys are carried out using a vessel equipped with a precision echo sounder, which uses the principle of acoustic waves to sound the bottom and determine the depth. an accurate determination of the vessel position and attitude, as well as a correct functioning of the echo sounder, are both fundamental for the quality of the survey results. using satellite techniques, differential corrections with code measurements allow accuracies estimated at a few meters. currently, one of the most widely used techniques for high-precision positioning of vessels in bathymetric surveys is the rtk mode (phase measurements) [7], which allows centimetre accuracy in both the horizontal and vertical planes. since vessel motion affects the accuracy of the observed depths and their positioning, attitude (roll, pitch, and heading) and heave must be measured using appropriate instruments, such as inertial sensors, which can also be integrated with gps [8]. a correct functioning of the echo sounder is very important for accurate depth measurement, but an accurate determination of the sound velocity in water is also necessary [9]. once the instrument is activated, the transmitter sends a small amount of electrical energy to the transducer, which converts it into a sound pulse and sends it towards the seabed. once the signal reaches the sea bottom, it is reflected back to the transducer. the instrument thus measures the time interval between the transmission and reception of the signal [10], [11]. two principal types of echo sounder are available, namely the single beam and the multibeam. the substantial difference is that the single beam emits a single sound pulse, while the multibeam emits a fan of sound pulses, which allows a denser and more detailed acquisition of the seabed [12]. the single beam and multibeam are basic instruments of acoustical oceanography, the discipline that describes the role of the ocean as an acoustic medium by relating oceanic properties to the behaviour of underwater acoustic propagation, noise, etc. [13]. the sound velocity in a medium is mostly influenced by the medium itself [14], [15], so it is affected by the conditions of the sea-bottom boundaries as well as by the variation of the chemical-physical parameters of the water volume [16].
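the two-way travel-time principle described above can be sketched in a few lines of python; the function name and the numerical values are illustrative, not taken from the survey:

```python
def depth_from_echo(two_way_time_s: float, sound_velocity_ms: float) -> float:
    """depth from the interval between pulse transmission and reception.

    the pulse travels to the seabed and back, so the one-way distance is
    half the product of the sound velocity and the elapsed time.
    """
    return sound_velocity_ms * two_way_time_s / 2.0

# illustrative values: a 0.04 s round trip at the nominal 1500 m/s gives 30 m
print(round(depth_from_echo(0.04, 1500.0), 3))  # 30.0
```

the same relation is what makes the depth estimate directly proportional to the assumed sound velocity, which is why the velocity errors discussed later translate into depth errors.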
in fact, the sound velocity in seawater is defined as a function of the isothermal compressibility, the ratio of specific heats of seawater at constant pressure and constant volume, and the density of seawater [17]. particularly, the sound velocity in sea water increases with an increase in temperature, salinity or pressure [18]. temperature decreases from the sea surface to the seabed, but there are different local variations. the sound velocity profile is very variable near the surface according to the seasons and the hours of the day, due to the heat exchange with the atmosphere, which modifies the temperature and salinity of the sea [16]. if temperature is constant, the sound velocity increases with depth due to the pressure gradient [19]. normally in the literature, the average value of the sound velocity in water is accepted as 1500 m/s, calculated taking as reference the nominal conditions of the water, characterized by a temperature of 0 °c, a salinity of 35 ppt (parts per thousand) and a pressure of 760 mmhg [20], [21]. this average value, however, can oscillate according to the characteristics of the water, varying between 1387 m/s and 1529 m/s [17]. local velocity measurements are quite difficult to perform accurately, whereas the constitutive parameters are more easily quantified. dedicated probes, the bar check method and empirical formulas permit the determination of the sound velocity. empirical formulas require physical and chemical parameters, such as depth (d), temperature (t), salinity (s) and pressure (p). these parameters can be measured with different types of instruments. there are many different formulas available to calculate the sound velocity in water, and the most popular and accurate are chen & millero (1977) [22]-[25], del grosso (1974) [26]-[28], mackenzie (1981) [29]-[31] and medwin (1975) [32], [33].

2. sound velocity determination
the determination of the sound velocity in water can be obtained through direct or indirect measurements.
among the systems commonly used to measure the sound velocity in situ, we find the depth velocimeter, which directly measures the sound velocity for a high-frequency wave transmitted over an accurately regulated distance. among the systems commonly used to determine the sound velocity in an indirect way, we find specific probes capable of measuring the chemical-physical parameters of water as input data. for example, the bathythermograph or the xbt probe [34], [35] measures the water temperature only as a function of depth; to deduce the sound velocity, it is necessary to have the salinity data independently, so it is measured simultaneously by a conductivity meter integrated in the same device. the echo sounder is calibrated for water temperature and salinity, or directly to a known depth using the "bar check" method (measurement of the immersion depth of a metal bar or disc lowered below the transducer and suspended on a graduated cable). this method consists in immersing, under the echo sounder transducer, a plate with a square base of edge equal to 60 cm supported by a chain with centimetre subdivisions. dive depths are generally set at 5 m, 10 m, 15 m, 20 m [36]. the sound velocity setting of the instrument is then adjusted, repeating the measurements until the correct depth value is obtained. this operation is repeated for the various control depths, at least twice for each depth, and the arithmetic mean of the obtained sound velocity values is calculated and set on the instrument. some systems, however, use all the values read at the various depths. this method is conveniently carried out in shallow water with low currents; it gives a mean velocity to the observed depth and simultaneously checks the calibration of the sounder [37]. when indirect measurements are carried out and the chemical-physical parameters of water are available, the application of formulas that permit the calculation of the sound velocity is necessary.
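the bar-check averaging just described (at least two readings per control depth, arithmetic mean of the resulting velocities set on the instrument) can be sketched as follows; the readings are invented for illustration only:

```python
# hypothetical bar-check readings: for each control depth (m), the sound
# velocity (m/s) that made the echo sounder match the known plate depth,
# measured twice per depth as described in the text
readings = {
    5.0:  [1510.2, 1510.6],
    10.0: [1509.8, 1510.0],
    15.0: [1509.5, 1509.9],
    20.0: [1509.0, 1509.4],
}

# arithmetic mean over all accepted readings, to be set on the instrument
velocities = [v for pair in readings.values() for v in pair]
mean_velocity = sum(velocities) / len(velocities)
print(round(mean_velocity, 2))  # 1509.8
```

systems that "use all the values read at the various depths" would instead keep the per-depth means as a velocity profile rather than collapsing them into a single number.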
different formulas are used depending on the depth values, i.e.:
1. chen & millero is used only for depths less than 1000 m;
2. del grosso is used only for depths greater than 1000 m;
3. mackenzie for quick calculations in ocean waters up to 8000 m depth;
4. medwin for quick calculations in ocean waters up to 1000 m depth [38].
a range of validity of the physical and chemical parameters characterizes every empirical formula. normally, depths are measured in meters, temperatures in °c, salinity in ppt (parts per thousand) and pressure in bar. a brief description of each formula is reported below. the chen & millero formula is denominated the international algorithm adopted by unesco; it is characterized by a more accurate model than the others, and the equation is the following:

$c = C_w(T,P) + A(T,P)\,S + B(T,P)\,S^{3/2} + D(T,P)\,S^2$ , (1)

where

$C_w(T,P) = C_{00} + C_{01}T + C_{02}T^2 + C_{03}T^3 + C_{04}T^4 + C_{05}T^5 + (C_{10} + C_{11}T + C_{12}T^2 + C_{13}T^3 + C_{14}T^4)P + (C_{20} + C_{21}T + C_{22}T^2 + C_{23}T^3 + C_{24}T^4)P^2 + (C_{30} + C_{31}T + C_{32}T^2)P^3$ , (2)

$A(T,P) = A_{00} + A_{01}T + A_{02}T^2 + A_{03}T^3 + A_{04}T^4 + (A_{10} + A_{11}T + A_{12}T^2 + A_{13}T^3 + A_{14}T^4)P + (A_{20} + A_{21}T + A_{22}T^2 + A_{23}T^3)P^2 + (A_{30} + A_{31}T + A_{32}T^2)P^3$ , (3)

$B(T,P) = B_{00} + B_{01}T + (B_{10} + B_{11}T)P$ , (4)

$D(T,P) = D_{00} + D_{10}P$ . (5)

this formula is valid for temperature values in the range 0 °c < t < 40 °c, salinity values in the range 0 ppt < s < 40 ppt, and pressure values in the range 0 bar < p < 1000 bar [39]. the coefficients in the equations are reported in table 1. the del grosso equation is used as an alternative to the unesco algorithm.
the formula is the following:

$c(S,T,P) = C_{000} + \Delta C_T + \Delta C_S + \Delta C_P + \Delta C_{STP}$ , (6)

where

$\Delta C_T(T) = C_{T1}T + C_{T2}T^2 + C_{T3}T^3$ , (7)

$\Delta C_S(S) = C_{S1}S + C_{S2}S^2$ , (8)

$\Delta C_P(P) = C_{P1}P + C_{P2}P^2 + C_{P3}P^3$ , (9)

$\Delta C_{STP}(S,T,P) = C_{TP}TP + C_{T3P}T^3P + C_{TP2}TP^2 + C_{T2P2}T^2P^2 + C_{TP3}TP^3 + C_{ST}ST + C_{ST2}ST^2 + C_{STP}STP + C_{S2TP}S^2TP + C_{S2P2}S^2P^2$ . (10)

the coefficient values are shown in table 2. this formula is valid for temperature values in the range 0 °c < t < 30 °c, salinity values in the range 30 ppt < s < 40 ppt, and pressure values in the range 0 bar < p < 980.665 bar. unlike chen & millero and del grosso, mackenzie uses depth in the formula for velocity calculation. the formula is:

$c(D,S,T) = 1448.96 + 4.591\,T - 5.304 \cdot 10^{-2}\,T^2 + 2.374 \cdot 10^{-4}\,T^3 + 1.340\,(S-35) + 1.630 \cdot 10^{-2}\,D + 1.675 \cdot 10^{-7}\,D^2 - 1.025 \cdot 10^{-2}\,T(S-35) - 7.139 \cdot 10^{-13}\,T D^3$ . (11)

this formula is valid for temperature values in the range 2 °c < t < 30 °c, salinity values in the range 25 ppt < s < 40 ppt, and depth values in the range 0 m < d < 8000 m [38]. medwin is the simplest formula, and it is given as:

$c = 1449.2 + 4.6\,T - 0.055\,T^2 + 0.00029\,T^3 + (1.34 - 0.010\,T)(S-35) + 0.016\,D$ . (12)

this formula, instead, is valid for temperature values in the range 0 °c < t < 35 °c, salinity values in the range 0 ppt < s < 40 ppt, and depth values in the range 0 m < d < 1000 m.

3. applications

3.1. hydrographic data
the "istituto idrografico della marina militare italiana" (iim) provided the data set for this work. the ship used for the survey is the nave magnaghi, a hydro-oceanographic ship. the hydrographic activity carried out by the unit consists in the realization of port, coastal and offshore surveys through sounding operations, the search for minimum depths and wrecks, the determination of the topography of the coastline and port works, the study of the nature of the seabed, etc.
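before applying the formulas to the survey data, equations (1)-(5), (11) and (12) can be sketched directly in python. this is a sketch under our own naming choices; the chen & millero coefficients are the rounded values of table 1, so results may differ from full-precision implementations by a few mm/s:

```python
# coefficients of eqs. (2)-(5), rounded as in table 1:
# row i holds the coefficients of p**i, column j those of t**j
CW = [[1402.388, 5.03711, -0.05809, 0.000334, -1.50e-06, 3.15e-09],
      [0.153563, 0.00069, -8.20e-06, 1.36e-07, -6.10e-10],
      [3.13e-05, -1.70e-06, 2.60e-08, -2.50e-10, 1.04e-12],
      [-9.80e-09, 3.85e-10, -2.40e-12]]
A = [[1.389, -0.01262, 7.16e-05, 2.01e-06, -3.20e-08],
     [9.47e-05, -1.30e-05, -6.50e-08, 1.05e-08, -2.00e-10],
     [-3.90e-07, 9.10e-09, -1.60e-10, 7.99e-12],
     [1.10e-10, 6.65e-12, -3.40e-13]]
B = [[-0.01922, -4.40e-05],
     [7.36e-05, 1.79e-07]]
D = [[0.001727],
     [-8.00e-06]]

def _poly(coeffs, t, p):
    """sum of coeffs[i][j] * t**j * p**i, i.e. a bivariate polynomial."""
    return sum(c * t ** j * p ** i
               for i, row in enumerate(coeffs) for j, c in enumerate(row))

def c_unesco(t, s, p):
    """chen & millero (unesco) sound velocity, eqs. (1)-(5), in m/s."""
    return (_poly(CW, t, p) + _poly(A, t, p) * s
            + _poly(B, t, p) * s ** 1.5 + _poly(D, t, p) * s ** 2)

def c_mackenzie(t, s, d):
    """mackenzie (1981) sound velocity, eq. (11), in m/s."""
    return (1448.96 + 4.591 * t - 5.304e-2 * t ** 2 + 2.374e-4 * t ** 3
            + 1.340 * (s - 35) + 1.630e-2 * d + 1.675e-7 * d ** 2
            - 1.025e-2 * t * (s - 35) - 7.139e-13 * t * d ** 3)

def c_medwin(t, s, d):
    """medwin (1975) sound velocity, eq. (12), in m/s."""
    return (1449.2 + 4.6 * t - 0.055 * t ** 2 + 0.00029 * t ** 3
            + (1.34 - 0.010 * t) * (s - 35) + 0.016 * d)

# shallowest record of table 3: p = 0.1 bar, d = 0.990 m,
# t = 23.397 °c, s = 38.1408 ppt
print(round(c_unesco(23.397, 38.1408, 0.1), 3))
print(round(c_medwin(23.397, 38.1408, 0.990), 3))   # ≈ 1533.922
print(round(c_mackenzie(23.397, 38.1408, 0.990), 3))
```

for that record, the unesco and medwin results land within about 0.01 m/s of the tabulated 1533.889 m/s and 1533.924 m/s; the mackenzie result with the standard coefficients above does not exactly reproduce the tabulated cma, so the original processing may have used a slightly different implementation.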
the sampling point is localized near the ligurian coast named "riviera di levante" and has the following coordinates referred to the world geodetic system 84 (wgs-84): φ = 43°55'17''.53 n, λ = 9°40'55''.4 e (figure 1). this data set was obtained by using the "ctd idronaut-ocean seven 316 plus" probe, which measures chemical-physical parameters of water, providing in output the values of pressure, temperature, conductivity, salinity, depth and velocity. particularly, data are provided about every meter of depth, in order to be able to determine the correct vertical profile of the sound velocity. the analysed depths range from 0.99 m to 324.12 m; the pressure values are provided by the probe in db (decibar), the temperature in degrees celsius (°c) and the salinity in parts per thousand (ppt). as reported in the probe technical specification document, pressure and temperature are directly measured, as are other parameters, e.g. conductivity and ph.

table 1. coefficients of chen & millero formula.
a00  1.389      b00 -0.01922    c22  2.60e-08
a01 -0.01262    b01 -4.40e-05   c23 -2.50e-10
a02  7.16e-05   b10  7.36e-05   c24  1.04e-12
a03  2.01e-06   b11  1.79e-07   c30 -9.80e-09
a04 -3.20e-08   c00  1402.388   c31  3.85e-10
a10  9.47e-05   c01  5.03711    c32 -2.40e-12
a11 -1.30e-05   c02 -0.05809    d00  0.001727
a12 -6.50e-08   c03  0.000334   d10 -8.00e-06
a13  1.05e-08   c04 -1.50e-06
a14 -2.00e-10   c05  3.15e-09
a20 -3.90e-07   c10  0.153563
a21  9.10e-09   c11  0.00069
a22 -1.60e-10   c12 -8.20e-06
a23  7.99e-12   c13  1.36e-07
a30  1.10e-10   c14 -6.10e-10
a31  6.65e-12   c20  3.13e-05
a32 -3.40e-13   c21 -1.70e-06

table 2. coefficients of del grosso formula.
c000  1402.392     ct1    5.01e+00
ct2  -5.51e-02     ct3    2.22e-04
cs1   1.33e+00     cs2    1.29e-04
cp1   0.1560592    cp2    2.45e-05
cp3  -8.83e-09     cst   -1.28e-02
ctp   6.35e-03     ct2p2  2.66e-08
ctp2 -1.59e-06     ctp3   5.22e-10
ct3p -4.38e-07     cs2p2 -1.62e-09
cst2  9.69e-05     cs2tp  4.86e-06
cstp -3.41e-04

other parameters, such as salinity and water density, are obtained using specific algorithms available in the literature [40]. the probe is characterized by the following accuracy values for direct measurements: 5.00e-04 bar for pressure; 2.00e-03 °c for temperature. in consideration of the adopted approach of the indirect measurement of the salinity, the accuracy is better than 0.005 ppt.

3.2. results
in this work, we proceeded with the calculation of the sound velocity in water using three of the formulas previously described (the del grosso formula is not applicable because the analysed depths are less than 1000 m). table 3 shows a selection of data supplied by the "idronaut ctd" acquired at different depths along the investigated water column, as well as the relative sound velocity values cu, cme, cma, calculated using respectively the unesco formula, the medwin formula and the mackenzie formula. table 4 shows the statistics (mean, standard deviation, minimum and maximum values) of the sound velocity values supplied by the adopted formulas. the sound velocity calculated with the unesco formula oscillates between a minimum of 1508.988 m/s and a maximum of 1533.994 m/s, with an average value of 1513.543 m/s. the velocity calculated with the medwin formula oscillates between a minimum of 1509.142 m/s and a maximum of 1534.026 m/s, with an average value of 1513.636 m/s. finally, the sound velocity calculated with the mackenzie formula oscillates between a minimum of 1509.455 m/s and a maximum of 1534.676 m/s, with an average value of 1514.043 m/s.
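statistics like those reported here and in the following tables (mean, standard deviation, rmse, minimum and maximum of a series) can be reproduced with a small helper along these lines; the input residuals below are illustrative, not the survey data:

```python
import math

def residual_stats(residuals):
    """mean, (population) standard deviation, rmse, min and max of a series."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return {"mean": mean, "st.dev.": math.sqrt(var),
            "rmse": rmse, "min": min(residuals), "max": max(residuals)}

# illustrative residuals (m/s) between two formulas along a water column
print(residual_stats([0.06, 0.09, 0.10, 0.12, 0.16]))
```

note that for small mean residuals the rmse and the standard deviation are close, while a constant offset shows up in the mean and rmse but not in the standard deviation; this is exactly the pattern visible in the tables for the mackenzie formula.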
in table 5, the statistical values of the residuals produced by the comparison between the adopted formulas are reported. once the sound velocity was calculated, a systematic error was introduced in each chemical-physical parameter of the water, i.e. temperature, salinity, pressure and depth, in order to evaluate the impact on the determination of the sound velocity. by systematic error we mean, in this case, an incorrect sampling of the parameter in question, simulating probe or human errors during the measurement. the introduced errors are: 0.1, 0.5 and 1 °c for temperature; 0.1, 0.5 and 1 ppt for salinity; 0.1, 0.5 and 1 bar for pressure; 0.1, 0.5 and 1 m for depth. these errors are worse than the accuracy values reported for the probe in subsection 3.1, so as to simulate a combination of unfavourable environmental situations and poorly accurate operations. we then proceeded with the calculation of the residuals obtained by comparing the values of sound velocity produced by the systematic error with the initial values of sound velocity, considered as a reference. subsequently, the statistical values (mean, standard deviation, rmse, minimum and maximum values) were calculated for each formula, so as to define the impact of the systematic errors on the accuracy of the sound velocity determination. table 6, table 7, table 8 and table 9 show the statistical values of the residuals produced by the injection of systematic errors in, respectively, the temperature, salinity, pressure and depth values in each formula. we finally proceeded with the calculation of the statistical values of the residuals generated by the worst possible combination of the systematic errors on temperature and salinity. the results of this calculation are shown in table 10. in the end, the estimate of the errors on the depths was calculated. to provide the reader with an idea of the dimension of this error, average values of the sound velocity in water, calculated using the three formulas, were taken every 100 m, every 200 m, and as a single average value over the entire column of water, both with and without systematic errors.

figure 1. localization of the sample point: visualization on the italy map (upper) and on a sentinel-2 satellite image of the ligurian sea (lower).

table 3. a selection of values of pressure, temperature and salinity supplied by the ctd probe and the corresponding velocity of sound in water calculated by using the unesco, medwin and mackenzie formulas.
p (bar)   d (m)     t (°c)   s (ppt)   cu (m/s)   cme (m/s)   cma (m/s)
0.1       0.990     23.397   38.1408   1533.889   1533.924    1534.569
0.2       1.980     23.396   38.147    1533.910   1533.945    1534.591
0.3       2.980     23.398   38.151    1533.934   1533.968    1534.615
0.4       3.970     23.397   38.154    1533.953   1533.987    1534.635
0.5       4.960     23.398   38.156    1533.974   1534.006    1534.655
0.6       5.950     23.397   38.160    1533.994   1534.026    1534.676
0.7       6.940     23.389   38.163    1533.994   1534.024    1534.676
0.8       7.940     23.380   38.166    1533.990   1534.020    1534.672
0.9       8.930     23.369   38.171    1533.987   1534.016    1534.669
1.0       9.920     23.361   38.176    1533.988   1534.017    1534.671
5.1       50.580    16.426   38.046    1515.445   1515.598    1515.984
10.1      100.170   14.034   38.180    1508.991   1509.145    1509.458
20.2      200.280   13.944   38.519    1510.773   1510.865    1511.252
32.7      324.120   14.007   38.660    1513.210   1513.216    1513.672

table 4. statistics of the sound velocity values supplied by the adopted formulas.
          cu (m/s)   cme (m/s)   cma (m/s)
mean      1513.543   1513.636    1514.043
st. dev.  6.686      6.666       6.747
min       1508.988   1509.142    1509.455
max       1533.994   1534.026    1534.676

table 5. statistical values of the residuals produced by the comparison between the adopted formulas.
          cme - cu (m/s)   cma - cu (m/s)   cma - cme (m/s)
mean      0.094            0.501            0.407
st. dev.  0.050            0.063            0.087
rmse      0.106            0.505            0.416
min       0.006            0.462            0.312
max       0.165            0.694            0.666
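the way a sound-velocity bias propagates into the computed depth follows from the travel-time relation: if the echo sounder converts the measured two-way time with a velocity c' instead of the true c, the relative depth error equals the relative velocity error. a sketch with illustrative numbers:

```python
def depth_error(true_depth_m: float, true_c_ms: float, used_c_ms: float) -> float:
    """depth error when the two-way time is converted with the wrong velocity.

    two-way time for the true depth: t = 2 d / c; the instrument then
    computes d' = used_c * t / 2, so the error is d * (used_c - c) / c.
    """
    t = 2.0 * true_depth_m / true_c_ms
    return used_c_ms * t / 2.0 - true_depth_m

# illustrative: a 4.2 m/s velocity bias (comparable to the 1 °c - 1 ppt
# case of table 10) at 150 m depth gives an error of roughly 0.4 m
print(round(depth_error(150.0, 1513.5, 1513.5 + 4.2), 3))  # 0.416
```

since the error grows linearly with depth, averaging the velocity over 100 m sections rather than over the whole column, as done for table 11, limits how far a single biased value can distort the deeper soundings.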
figure 2 shows the variability of the sound velocity in the case of the unesco formula application, when systematic errors are introduced on temperature (0.5 °c), salinity (0.5 ppt) and both (0.5 °c-0.5 ppt). table 11 shows the rmse values of the differences between the known and the calculated depths obtained by using different sound velocity values, i.e. those derived from different approaches: using the unesco formula (rmseu), the medwin formula (rmseme) and the mackenzie formula (rmsema), averaging the sound velocity values on sections (100 m) or on the entire water column, and injecting errors in temperature (0.1-0.5-1 °c), salinity (0.1-0.5-1 ppt) or both.

table 6. statistical values of the residuals produced by systematic errors for the temperature (0.1-0.5-1 °c) introduced in the adopted formulas.
                       mean    st. dev.   rmse    min     max
δcu (m/s) (0.1 °c)     0.311   0.020      0.312   0.249   0.321
δcme (m/s) (0.1 °c)    0.310   0.021      0.311   0.247   0.320
δcma (m/s) (0.1 °c)    0.315   0.021      0.315   0.250   0.325
δcu (m/s) (0.5 °c)     1.549   0.102      1.553   1.239   1.597
δcme (m/s) (0.5 °c)    1.542   0.104      1.546   1.227   1.591
δcma (m/s) (0.5 °c)    1.565   0.107      1.569   1.240   1.616
δcu (m/s) (1 °c)       3.079   0.203      3.085   2.461   3.174
δcme (m/s) (1 °c)      3.063   0.207      3.070   2.437   3.161
δcma (m/s) (1 °c)      3.110   0.213      3.117   2.463   3.210

table 7. statistical values of the residuals produced by systematic errors for the salinity (0.1-0.5-1 ppt) introduced in the adopted formulas.
                       mean    st. dev.   rmse    min     max
δcu (m/s) (0.1 ppt)    0.117   0.002      0.117   0.109   0.118
δcme (m/s) (0.1 ppt)   0.119   0.003      0.119   0.111   0.120
δcma (m/s) (0.1 ppt)   0.133   0.000      0.133   0.133   0.133
δcu (m/s) (0.5 ppt)    0.584   0.012      0.584   0.547   0.590
δcme (m/s) (0.5 ppt)   0.594   0.013      0.594   0.553   0.600
δcma (m/s) (0.5 ppt)   0.664   0.000      0.664   0.664   0.664
δcu (m/s) (1 ppt)      1.169   0.025      1.169   1.094   1.181
δcme (m/s) (1 ppt)     1.188   0.027      1.189   1.106   1.201
δcma (m/s) (1 ppt)     1.328   0.000      1.328   1.328   1.328

table 8. statistical values of the residuals produced by systematic errors for the pressure (0.1-0.5-1 bar) introduced in the adopted formulas (the medwin and mackenzie formulas do not use pressure, so the corresponding rows are empty).
                       mean    st. dev.   rmse    min     max
δcu (m/s) (0.1 bar)    0.017   0.000      0.017   0.017   0.017
δcu (m/s) (0.5 bar)    0.083   0.000      0.083   0.083   0.083
δcu (m/s) (1 bar)      0.166   0.001      0.166   0.166   0.167

table 9. statistical values of the residuals produced by systematic errors for the depth (0.1-0.5-1 m) introduced in the adopted formulas (the unesco formula does not use depth, so the corresponding rows are empty).
                       mean    st. dev.   rmse    min     max
δcme (m/s) (0.1 m)     0.002   0.000      0.002   0.002   0.002
δcma (m/s) (0.1 m)     0.002   0.000      0.002   0.002   0.002
δcme (m/s) (0.5 m)     0.008   0.000      0.008   0.008   0.008
δcma (m/s) (0.5 m)     0.008   0.000      0.008   0.008   0.008
δcme (m/s) (1 m)       0.016   0.000      0.016   0.016   0.016
δcma (m/s) (1 m)       0.016   0.000      0.016   0.016   0.016

table 10. statistical values of the residuals produced by the combination of systematic errors for temperature and salinity (0.1 °c-0.1 ppt, 0.5 °c-0.5 ppt, 1 °c-1 ppt) introduced in the adopted formulas.
                              mean    st. dev.   rmse    min     max
δcu (m/s) (0.1 °c-0.1 ppt)    0.428   0.023      0.429   0.358   0.439
δcme (m/s) (0.1 °c-0.1 ppt)   0.429   0.024      0.429   0.357   0.440
δcma (m/s) (0.1 °c-0.1 ppt)   0.447   0.021      0.448   0.382   0.458
δcu (m/s) (0.5 °c-0.5 ppt)    2.131   0.114      2.134   1.784   2.185
δcme (m/s) (0.5 °c-0.5 ppt)   2.134   0.117      2.137   1.777   2.189
δcma (m/s) (0.5 °c-0.5 ppt)   2.229   0.107      2.232   1.904   2.280
δcu (m/s) (1 °c-1 ppt)        4.237   0.227      4.244   3.547   4.344
δcme (m/s) (1 °c-1 ppt)       4.242   0.233      4.248   3.533   4.352
δcma (m/s) (1 °c-1 ppt)       4.437   0.213      4.442   3.790   4.538

figure 2. impact on the sound velocity in water caused by systematic errors on t, s and t-s.

3.3. discussions
the results highlight how the values of sound velocity in water obtained by means of the three formulas are similar to each other. in particular, as reported in table 5, it is possible to see how the unesco formula and the medwin formula are very close to each other, as they show a very small rmse, while the mackenzie formula, compared to the other two, approximates the sound velocity values differently under the same conditions. from the results shown in the previous tables, temperature and salinity are the parameters that have the greatest effect on the determination of the sound velocity in water. in fact, the introduction of systematic errors on them produces impacts greater than those on depth and pressure. as regards the temperature, the parameter that has the greatest influence compared to the others, there is an increase in the rmse value; in particular, with the presence of an error of 1 °c, an rmse equal to 3.088 m/s is obtained, as shown in table 6. these results are approximately similar for all formulas. for salinity, another parameter with a strong influence, as shown in table 7, we can see that the results obtained by using the unesco formula and the medwin formula are very similar, while with the mackenzie formula an important variation of the velocity values is obtained, although it remains constant along the water column. this result is highlighted by the standard deviation values. for the pressure and the depth, the situation is the opposite, as they are the parameters that have minor effects on the determination of the velocity; in fact, the impact produced by the introduction of systematic errors can be considered almost negligible.
for pressure, we have an rmse much smaller than those seen previously. it can be noted that even in the presence of a systematic error equal to 1 bar, the rmse is almost negligible, showing a value of 0.166 m/s, as shown in table 8. finally, for the depth, as shown in table 9, we get the smallest values of rmse. also for these two parameters, the formulas show similar values. in the case of the worst combination, generated by the simultaneous introduction of systematic errors on temperature and salinity, table 10 shows very important results. the incorrect determination of these parameters, combined with each other, leads to an rmse ranging from 0.428 m/s (for a systematic error of 0.1 °c - 0.1 ppt) up to 4.248 m/s (for a systematic error of 1 °c - 1 ppt). also in this case, the values obtained by the formulas of unesco and medwin are similar, while the mackenzie formula shows a slight increase in the obtained values. in the end, table 11 shows the rmse values for the depth associated with the resulting velocity values. in particular, it can be noted that, as more velocity values are taken along the water column, the rmse value tends to decrease. in fact, for a single velocity value over the entire water column, an rmse equal to 0.260 m is obtained, while for velocity values taken every 100 m, an rmse equal to 0.111 m is obtained. we can also note that a systematic error on velocity naturally leads to an error on depth, even taking multiple velocity values along the water column, generating an rmse equal to 0.560 m in the worst possible case, i.e. in the presence of a systematic error on both temperature and salinity.

4. conclusions
bathymetric surveys are typically carried out using techniques that exploit the propagation of acoustic waves in water. therefore, the correct determination of the sound velocity in water is of fundamental importance.
it was found that an error on the chemical-physical parameters of the water (temperature, pressure, salinity, depth) due to an inaccurate instrument calibration, or, rather, to the use of a wrong model for one of them, can impact significantly on sound velocity determination. in particular, this article provides a measurement of the errors that can be produced. for our application, the formulas of unesco, medwin and mackenzie have been taken into consideration and systematic errors on the four parameters have been simulated. the inaccuracy of temperature and salinity measurements produces the greatest effects. particularly, the results remark that the sound velocity is very sensitive to the variation of the temperature. an error on the determination of the sound velocity in water leads to a non-negligible error on the depth. it has been seen that using a single sound velocity value over the entire water column, affected by a combination of systematic errors for temperature and salinity, generates errors that can reach about 0.5 m. of course, taking more different velocity values as reference along the water column allows you to determine the depth of the bottom more accurately, but even in this case, you risk having non negligible errors. it is possible to conclude that the sound velocity in water represents a very important parameter in bathymetric surveys, and therefore it must necessarily be determined with the highest possible accuracy. our experiments do not permit to evaluate the best method for indirect measurement of sound in water: the table 11. rmse values of the differences between the known and the calculated depths using velocity sound values derived from different approaches. 
Velocity values taken on:                RMSE_U (m)  RMSE_Me (m)  RMSE_Ma (m)
Entire water column                      0.260       0.250        0.263
Every 100 m                              0.111       0.107        0.121
Entire water column (0.1 °C)             0.246       0.236        0.243
Entire water column (0.5 °C)             0.245       0.243        0.248
Entire water column (1 °C)               0.353       0.356        0.360
Every 100 m (0.1 °C)                     0.116       0.116        0.128
Every 100 m (0.5 °C)                     0.216       0.224        0.229
Every 100 m (1 °C)                       0.388       0.397        0.401
Entire water column (0.1 ppt)            0.254       0.244        0.251
Entire water column (0.5 ppt)            0.237       0.229        0.234
Entire water column (1 ppt)              0.235       0.231        0.239
Every 100 m (0.1 ppt)                    0.111       0.109        0.123
Every 100 m (0.5 ppt)                    0.130       0.134        0.148
Every 100 m (1 ppt)                      0.178       0.188        0.207
Entire water column (0.1 °C - 0.1 ppt)   0.241       0.233        0.239
Entire water column (0.5 °C - 0.5 ppt)   0.276       0.277        0.286
Entire water column (1 °C - 1 ppt)       0.470       0.477        0.497
Every 100 m (0.1 °C - 0.1 ppt)           0.121       0.123        0.134
Every 100 m (0.5 °C - 0.5 ppt)           0.279       0.290        0.302
Every 100 m (1 °C - 1 ppt)               0.526       0.538        0.560

Acta IMEKO | www.imeko.org | December 2021 | Volume 10 | Number 4 | 52

The paper highlights the impact of an inaccurate determination of temperature, pressure and salinity on the bathymetric survey results. The extremely precise probes for direct measurement now available are to be preferred in order to improve the depth determination.

Acknowledgement

This work synthesizes the results of experiments carried out within a research project performed in the Laboratory of Geomatics, Remote Sensing and GIS of the "Parthenope" University of Naples. We would like to thank the technical staff for their support.

References

[1] M. Huet, Marine spatial data infrastructure, an IHO perspective: data, products, standards and policies, International Hydrographic Bureau, Monaco (2009).
[2] M. J. Umbach, Hydrographic manual, Department of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey, 20 (2), (1976).
[3] R. M. Alkan, N. O. Aykut, Evaluation of recent hydrographic survey standards, in Proc.
of the 19th International Symposium on Modern Technologies, Education and Professional Practice in Geodesy and Related Fields, pp. 116-130, (2009).
[4] J. V. Gardner, P. Dartnell, L. A. Mayer, J. E. H. Clarke, Geomorphology, acoustic backscatter, and processes in Santa Monica Bay from multibeam mapping, Marine Environmental Research, 56 (1-2), pp. 15-46, (2003). doi: 10.1016/s0141-1136(02)00323-9
[5] T. A. Kearns, J. Breman, Bathymetry - the art and science of seafloor modeling for modern applications, Ocean Globe, pp. 136, (2010).
[6] L. M. Pion, J. C. M. Bernardino, Dredging volumes prediction for the access channel of Santos port considering different design depths, TransNav International Journal on Marine Navigation and Safety of Sea Transportation, 12, (2018). doi: 10.12716/1001.12.03.09
[7] D. Popielarczyk, RTK water level determination in precise inland bathymetric measurements, in Proc. of the 25th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2012), September (2012), pp. 1158-1163.
[8] Manual on Hydrography, Publication C-13, 1st edition, published by the International Hydrographic Bureau, May (2005).
[9] G. B. Mills, International hydrographic survey standards, The International Hydrographic Review, (1998).
[10] G. Antonelli, F. Arrichiello, A. Caiti, G. Casalino, D. De Palma, G. Indiveri, M. Razzanelli, L. Pollini, E. Simetti, ISME activity on the use of autonomous surface and underwater vehicles for acoustic surveys at sea, Acta IMEKO, 7 (2), pp. 24-31, (2018). doi: 10.21014/acta_imeko.v7i2.539
[11] C. Parente, A. Vallario, Interpolation of single beam echo sounder data for 3D bathymetric model, Int. J. Adv. Comput. Sci. Appl., 10, pp. 6-13, (2019). doi: 10.14569/ijacsa.2019.0101002
[12] I. Parnum, J. Siwabessy, A. Gavrilov, M. Parsons, A comparison of single beam and multibeam sonar systems in seafloor habitat mapping, in Proc. 3rd Int. Conf.
and Exhibition of Underwater Acoustic Measurements: Technologies & Results, Nafplion, Greece, June (2009), pp. 155-162.
[13] H. Medwin, Sounds in the sea: from ocean acoustics to acoustical oceanography, Cambridge University Press, (2005).
[14] J. W. S. Rayleigh, Theory of Sound, 2 (255-266), Macmillan, London, (1894).
[15] D. Agrez, S. Begus, Evaluation of pressure effects on acoustic thermometer with a single waveguide, Acta IMEKO, 7 (4), pp. 42-47, (2019). doi: 10.21014/acta_imeko.v7i4.576
[16] X. Lurton, An introduction to underwater acoustics: principles and applications, Springer Science & Business Media, (2002).
[17] P. C. Etter, Underwater acoustic modeling and simulation, CRC Press, (2018). doi: 10.1201/9781315166346
[18] L. D. Talley, Descriptive physical oceanography: an introduction, Academic Press, (2011). doi: 10.1016/b978-0-7506-4552-2.10001-0
[19] Introduction to sonar, Bureau of Naval Personnel Navy training course, 2nd ed., Washington, D.C.: U.S. Bureau of Naval Personnel, (1963).
[20] A. E. Ingham, Hydrography for the surveyor and engineer, 3rd edn., Blackwell Scientific Publications, London, pp. 132, (1992).
[21] S. Jamshidi, M. N. Abu Bakar, An analysis on sound speed in seawater using CTD data, Journal of Applied Sciences, 10 (2), pp. 132-138, (2010). doi: 10.3923/jas.2010.132.138
[22] C. T. Chen, R. A. Fine, F. J. Millero, The equation of state of pure water determined from sound speeds, The Journal of Chemical Physics, 66 (5), pp. 2142-2144, (1977). doi: 10.1063/1.434179
[23] C. T. Chen, F. J. Millero, The equation of state of seawater determined from sound speeds, Journal of Marine Research, 36 (4), pp. 657-691, (1978).
[24] F. J. Millero, C. T. Chen, A. Bradshaw, K. Schleicher, A new high-pressure equation of state for seawater, Deep Sea Research Part A. Oceanographic Research Papers, 27 (3-4), pp. 255-264, (1980). doi: 10.1016/0198-0149(80)90016-3
[25] F. J.
Millero, History of the equation of state of seawater, Oceanography, 23 (3), pp. 18-33, (2010). doi: 10.5670/oceanog.2010.21
[26] V. A. Del Grosso, C. W. Mader, Speed of sound in pure water, The Journal of the Acoustical Society of America, 52 (5B), pp. 1442-1446, (1972). doi: 10.1121/1.1913258
[27] F. Filiciotto, G. Buscaino, The role of sound in the aquatic environment, Ecoacoustics, pp. 61-79, (2017). doi: 10.1002/9781119230724.ch4
[28] L. Li, T. Wang, L. Yang, C. Gu, H. Liang, Modeling of sound speed in UWASN, in 2018 IEEE 15th International Conference on Networking, Sensing and Control (ICNSC), IEEE, March (2018), pp. 1-6. doi: 10.1109/icnsc.2018.8361308
[29] C. C. Leroy, Development of simple equations for accurate and more realistic calculation of the speed of sound in seawater, The Journal of the Acoustical Society of America, 46 (1B), pp. 216-226, (1969). doi: 10.1121/1.1911673
[30] K. V. Mackenzie, Discussion of seawater sound-speed determinations, The Journal of the Acoustical Society of America, 70 (3), pp. 801-806, (1981). doi: 10.1121/1.386919
[31] K. V. Mackenzie, Nine-term equation for sound speed in the oceans, The Journal of the Acoustical Society of America, 70 (3), pp. 807-812, (1981). doi: 10.1121/1.386920
[32] H. Medwin, Speed of sound in water: a simple equation for realistic parameters, The Journal of the Acoustical Society of America, 58 (6), pp. 1318-1319, (1975). doi: 10.1121/1.380790
[33] C. C. Leroy, S. P. Robinson, M. J. Goldsmith, A new equation for the accurate calculation of sound speed in all oceans, The Journal of the Acoustical Society of America, 124 (5), pp. 2774-2782, (2008). doi: 10.1121/1.2988296
[34] R. H. Heinmiller, C. C. Ebbesmeyer, B. A. Taft, D. B. Olson, O. P. Nikitin, Systematic errors in expendable bathythermograph (XBT) profiles, Deep Sea Research Part A. Oceanographic Research Papers, 30 (11), pp. 1185-1196, (1983).
doi: 10.1016/0198-0149(83)90096-1
[35] L. Cheng, J. Abraham, G. Goni, T. Boyer, S. Wijffels, R. Cowley, V. Gouretski, F. Reseghetti, S. Kizu, S. Dong, F. Bringas, M. Goes, L. Houpert, J. Sprintall, J. Zhu, XBT science: assessment of instrumental biases and errors, Bulletin of the American Meteorological Society, 97 (6), pp. 924-933, (2016). doi: 10.1175/bams-d-15-00031.1
[36] M. J. Langland, Bathymetry and sediment-storage capacity change in three reservoirs on the lower Susquehanna River, 1996-2008, Reston, VA: US Geological Survey, (2009). doi: 10.3133/sir20095110
[37] C. D. Maunsell, The speed of sound in water, Canadian Acoustics, 4 (3), pp. 2-4, (1976).
[38] K. H. Talib, M. Y. Othman, S. A. H. Sulaiman, M. A. M. Wazir, A. Azizan, Determination of speed of sound using empirical equations and SVP, IEEE 7th International Colloquium on Signal Processing and its Applications, IEEE, March (2011), pp. 252-256. doi: 10.1109/cspa.2011.5759882
[39] R. M. Alkan, Y. Kalkan, N. O. Aykut, Sound velocity determination with empirical formulas and bar check, in Proc. of 23rd FIG Congress, Munich, October (2006).
[40] N. P. Fofonoff, R. C.
Millard Jr., Algorithms for computation of fundamental properties of seawater, UNESCO Technical Papers in Marine Science, 44, (1983).

An IoT measurement solution for continuous indoor environmental quality monitoring for buildings renovation

ACTA IMEKO
ISSN: 2221-870X
December 2021, Volume 10, Number 4, 230-238

Serena Serroni1, Marco Arnesano2, Luca Violini1, Gian Marco Revel1
1 Università Politecnica delle Marche, Department of Industrial Engineering and Mathematical Science, Via delle Brecce Bianche, 60131 Ancona (AN), Italy
2 Università eCampus, Via Isimbardi 10, 22060 Novedrate (CO), Italy

Section: Research Paper
Keywords: thermal comfort; indoor air quality; IEQ; measurements; IoT; building renovation
Citation: Serena Serroni, Marco Arnesano, Luca Violini, Gian Marco Revel, An IoT measurement solution for continuous indoor environmental quality monitoring for buildings renovation, Acta IMEKO, vol. 10, no. 4, article 35, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-35
Section Editors: Carlo Carobbi, University of Florence, Gian Marco Revel, Università Politecnica delle Marche and Nicola Giaquinto, Politecnico di Bari, Italy
Received October 10, 2021; in final form December 10, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by P2Endure, European project Horizon 2020.
Corresponding author: Serena Serroni, e-mail: s.serroni@pm.univpm.it

1.
Introduction

The measurement of indoor environmental quality (IEQ) requires the acquisition of multiple quantities regarding thermal comfort and indoor air quality. Therefore, accurate monitoring and control of those environmental conditions can be useful for preventing the spread of COVID-19. However, about 75 % of the European building stock was built before 1990, before any EU building regulation [1] and within a climate context that has changed over the last decade [2]. Thus, most occupied buildings are not able to keep the required comfort conditions because of the poor performance of the envelope and of the heating/cooling systems [3]. The importance of indoor environmental quality (IEQ) is a well-known and widely discussed theme because of its impact on human comfort, well-being, productivity, learning capability and health [4]. IEQ derives from the combination of different factors influencing the human comfort sensation: thermo-hygrometry, acoustics, illumination and concentration of pollutant components [5]. All these aspects should be considered at the same level of importance, given that humans spend roughly 90 % of their time indoors, especially after the COVID-19 outbreak. In addition, a recent study [6] demonstrated that indoor environmental factors, such as temperature, humidity, ventilation and filtering systems, can have a significant influence on the infection. Several studies showed a correlation between the concentration of air pollutants, especially particulate matter 2.5 (PM2.5) and particulate matter 10 (PM10), and COVID-19 virus transmission [7]. For this reason, current building renovation approaches and trends include IEQ in the renovation assessment with

Abstract

The measurement of indoor environmental quality (IEQ) requires the acquisition of multiple quantities regarding thermal comfort and indoor air quality.
The IEQ monitoring is essential to investigate the building's performance, especially when renovation is needed to improve energy efficiency and occupants' well-being. Thus, IEQ data should be acquired over long periods inside occupied buildings, but traditional measurement solutions may not be adequate. This paper presents the development and application of a non-intrusive and scalable IoT sensing solution for continuous IEQ measurement in occupied buildings during the renovation process. The solution is composed of an IR scanner for mean radiant temperature measurement and a desk node with environmental sensors (air temperature, relative humidity, CO2, PMs). The integration with a BIM-based renovation approach was developed to automatically retrieve the building's data required for sensor configuration and KPIs calculation. The system was installed in a nursery located in Poland to support the renovation process. IEQ performance measured before the intervention revealed issues related to radiant temperature and air quality. Using the measured data, interventions were realized to improve the envelope insulation and the occupants' behaviour. Results from post-renovation measurements showed the IEQ improvement achieved, demonstrating the impact of the sensing solution.

an increased role of importance [8]. Recently, the standard EN 16798 [9] has been released, in substitution of EN 15251, which provides a framework for the building performance assessment concerning the indoor environment. EN 16798 provides methodologies for the calculation of IEQ metrics for buildings' classification, based on environmental measurements [10]. In that standard, thermal comfort is assessed according to well-known predictive and adaptive approaches, as defined in ISO 7730 [11] and ASHRAE 55.
However, the same level of detail is not given to indoor air quality, visual comfort and acoustic comfort. In their critical review [12], Khovalyg et al. remarked that ISO and EN standards should include more requirements on PM. Similarly, another critical investigation on IEQ data collection and assessment criteria, presented in [13], revealed that sensor technology and data analysis are mainly applied to thermal comfort, while the other IEQ domains have not been addressed with the same effort. For this reason, the assessment of the IEQ domains not considered by the standards has been conceptualized within recent research activities [14], also with a holistic approach that groups domains together [15]. Even if the importance of IEQ has been largely demonstrated, current measurement tools are not adequate because of the need to measure several environmental quantities with a high temporal and spatial resolution. Traditional spot measurement tools are bulky and require a strong human effort for data processing and analysis, so they cannot be used for responsive strategies to improve IEQ by implementing retrofit actions (envelope insulation, mechanical ventilation) or triggering occupants' behaviours (window opening, thermostat regulation). Recently, the use of sensors integrated into building management systems (BMS) has also been investigated for IEQ monitoring [16]. However, those systems generally make use of wall-mounted sensors providing environmental quantities measured away from the real location of the occupants; therefore, the data could be representative of only a small part of the building. Optimization of the sensing location can be performed for some quantities, such as air temperature [17], but the same optimization could be difficult for quantities that present relevant deviations within the same room, as in the case of the mean radiant temperature. The mean radiant temperature is not homogeneous in the room and depends on the indoor walls' temperature [18].
In the proximity of a glazed surface or of a poorly insulated wall, the mean radiant temperature presents higher variations with respect to the inner side of the same room [19]. ISO 7726 [20] provides two methods for measuring the mean radiant temperature: i) the globe thermometer, which is basically a temperature sensor located at the centre of a matt black sphere; ii) the angle factors approach, based on the walls' temperature measurements. The globe thermometer is widely used for spot measurements. It is intrusive, provides data related only to the position where it is located, and one measurement point may not be enough to determine the real room's comfort; thus, it is not the preferable solution for continuous IEQ measurements. The second approach, based on the angle factors, can provide a higher resolution once the problem of measuring the walls' thermal maps is solved. To this scope, Revel et al. [21] developed an infrared (IR) scanner that provides continuous measurement of indoor wall temperatures for mean radiant temperature calculation according to ISO 7726. The proposed sensor turned out to provide a measurement accuracy of ± 0.4 °C with respect to traditional microclimate stations [22]. The IR scanner was integrated with a desk node that measures the air temperature and relative humidity, to build the solution named Comfort Eye, a sensor for multipoint thermal comfort measurement in indoor environments. That system provides a solution to the problem of measuring real-time thermal comfort with an increased spatial resolution. The impact on heating control efficiency was demonstrated by the experimental testing presented in [23]. This paper presents the new development of the Comfort Eye for the continuous monitoring of IEQ for buildings renovation. Indoor air quality (IAQ) sensors have been integrated into the desk node to provide measurements of CO2 and PMs.
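The angle-factor method of ISO 7726 mentioned above derives the mean radiant temperature from the surrounding surface temperatures weighted by the angle factors between the occupant and each surface, Tr = (Σᵢ Fᵢ·Tᵢ⁴)^(1/4), with temperatures in kelvin. A minimal sketch, assuming precomputed angle factors (the numeric values in the comment are illustrative placeholders, not taken from the paper):

```python
def mean_radiant_temperature(surface_temps_c, angle_factors):
    """Mean radiant temperature via the ISO 7726 angle-factor method.

    surface_temps_c: wall/floor/ceiling surface temperatures in degrees Celsius.
    angle_factors: angle factors between the occupant and each surface;
    for a closed enclosure they must sum to 1.
    """
    if abs(sum(angle_factors) - 1.0) > 1e-6:
        raise ValueError("angle factors must sum to 1")
    # Fourth-power weighting in kelvin, then back to Celsius.
    tr4 = sum(f * (t + 273.15) ** 4 for t, f in zip(surface_temps_c, angle_factors))
    return tr4 ** 0.25 - 273.15

# Example: a uniform enclosure at 20 °C yields Tr = 20 °C.
# mean_radiant_temperature([20, 20, 20, 20], [0.25, 0.25, 0.25, 0.25])
```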
Moreover, an LED lighting system has been embedded in the desk node to provide occupants with feedback about the actual status of the indoor air quality. An Internet of Things (IoT) architecture has been developed to allow remote configuration and data exchange. Interoperability with BIM (Building Information Model) has been developed to automatically retrieve the building's data (e.g., floor area, geometry, material emissivity, occupancy, etc.) needed for sensor configuration, metrics calculation and performance assessment. The most important differences compared to the previous version presented in [21] are: a new IR sensor; new sensors for IAQ, in particular new CO2 and PM sensors; an updated, plug-and-play IoT architecture; and the integration with the BIM to automatically configure the sensors and calculate the KPIs. The advanced version of the Comfort Eye was developed within the framework of the European project P2Endure, which aims at including IEQ in the renovation process, in all the stages of the 4M (Mapping, Modelling, Making, Monitoring) process. This means that a protocol for IEQ monitoring and assessment has been developed, allowing: the accurate evaluation of the IEQ performance of the building as it is, to feed the design stage with suggestions to achieve the optimal IEQ level after the intervention; and the post-renovation monitoring, to verify the achievement of IEQ compliance with respect to the issues revealed during the mapping. The Comfort Eye was applied for the continuous IEQ monitoring of a building located in Gdynia (Poland) before and after the renovation works. The aspects of IEQ taken into consideration, using the Comfort Eye and exploiting all its potential, are thermal comfort and IAQ (CO2 and PM); results from the field application are reported. This paper demonstrates the applicability and advantages of the developed system with the application to a real case study.
A pre- and post-renovation analysis was performed to validate the developed monitoring methodology. The monitoring allowed to quantify, in the pre- and post-renovation phases, the thermal performance of the building, to identify the main causes of discomfort and to assess the IAQ. The specific goals of the paper are:
• to present the measurement system, the sensors' specifications, the IoT architecture and the integration with the BIM;
• to propose an IEQ monitoring and assessment protocol that extends the EN 16798 approach to include PMs;
• to demonstrate the applicability and advantages of the proposed measurement device with the application to a real case study.

2. Materials and methods

2.1. The measurement technology

The Comfort Eye is an IoT sensor composed of two nodes: the ceiling node, with the IR sensor to measure wall thermal maps and the mean radiant temperature, and the desk node, with sensors to measure air temperature, relative humidity, CO2 and PMs. It can provide the whole thermal dynamic behaviour of the wall for continuous and real-time thermal monitoring of buildings. Being a prototype, the Comfort Eye has a production cost of less than 100 € per node. This work aims to explore and deepen the functionality of the Comfort Eye that allows the measurement of the data necessary for the IEQ.

2.1.1. Comfort Eye - ceiling node

The ceiling node is the innovation of the Comfort Eye. It is a 3D thermal scanner that measures the temperature maps of all the room's indoor surfaces by means of a 2-axis rotating IR sensor (Figure 1). It is installed on the ceiling of the room, and it is composed of a 16x4 thermopile array, meaning that each acquired frame is a map of 64 temperatures. With a horizontal field of view (HFOV) of 60° and a vertical field of view (VFOV) of 16°, an area of 1.15 x 0.56 m2 is scanned with one frame on a wall at one meter of distance from the sensor.
The tilt movement from 0° to 180° provides the full vertical scanning of the wall, with the possibility to measure the floor and ceiling temperatures. To provide the scan of all the surfaces, a continuous 360° pan movement is available. The device entails a custom mainboard with a microcontroller, programmed with dedicated firmware, that performs the automatic scanning of all the room's surfaces by controlling the pan and tilt servos. The I2C communication protocol is used to acquire the data from the IR sensor. The IR scanner requires cabling for a 12 V power supply, while the communication is performed with a Wi-Fi module integrated into the mainboard. The scanning process produces one thermal map for each wall, resulting from the concatenation of multiple acquisitions. Given the sensor's FOV, the installation point and the room geometry, the reconstruction of the wall thermal map is performed to remove the overlapping pixels derived from the vertical concatenation and to remove the pixels related to the neighbouring walls. Concerning the surface emissivity, a correction of the raw IR measurement is implemented [24]. The complete procedure for map correction is detailed in [21]. The IR data are acquired continuously, processed and stored in a database in real time. The corrected thermal maps are then used for a two-fold scope: the measurement of the mean radiant temperature for thermal comfort evaluation and the measurement of the building envelope's thermal performance [21]. The mean radiant temperature is measured for several locations (e.g. near and far from the window) with the angle factors method, as presented in ISO 7726 [20].

2.1.2. Comfort Eye - desk node

A desk node is used to acquire the environmental quantities for thermal comfort and indoor air quality (IAQ) assessment (Figure 1). An integrated sensor, the Sensirion SCD30, allows the single-point measurement of the air temperature (Ta), relative humidity (RH) and CO2.
The desk node also integrates a PM sensor, the Sensirion SPS30 (Table 1). The sensors' data are acquired via the I2C interface, and a Wi-Fi module, the same used for the ceiling node, provides the wireless communication. The data are acquired continuously, processed and stored in the database in real time. The desk node must be installed in a position representative of the room's environmental conditions, away from heat sources, direct solar radiation, air draughts, direct ventilation, zones of stagnant air or other sources that could disturb the measurements. Installation is done by an experienced technician. If possible, the sensor is fixed; otherwise, the occupants are informed, to prevent the sensor from being moved or covered. The air quality measurement system has been proposed for real-time, low-cost and easy-to-install air quality monitoring. It provides precise and detailed information about the air quality of the living environment and helps to plan interventions that lead to improved air quality. In crowded closed environments such as classrooms, offices or meeting rooms, in the case of limited ventilation, CO2 values of between 5,000 ppm and 6,000 ppm can be reached. To have good air quality, the limit of 1,000 ppm of CO2 must not be exceeded [25]. An efficient IAQ monitoring system should detect any change in the air quality, give feedback about the measured values of CO2 to the users and trigger the necessary mechanisms, if available, such as automatic/natural ventilation and fresh air, to improve performance and protect health. The desk node communicates the measured CO2 values to the users in real time, simply and intuitively, through different colours of the LEDs: green, yellow and red (green for good air quality and red for bad air quality). A CO2 value below 700 ppm is represented by green LEDs.
In this case, the CO2 values are acceptable, there is good air quality and it is not necessary to ventilate the environment. A CO2 value between 700 ppm and 1,000 ppm is represented by yellow LEDs: the CO2 values are very close to the limit value (1,000 ppm) and it is recommended to ventilate the environment. A CO2 value above 1,000 ppm is represented by red LEDs: the CO2 value has exceeded the limit value and it is necessary to ventilate the environment [26].

Table 1. Specifications of the sensors used by the desk node of the Comfort Eye (Sensirion SCD30/SPS30).

Quantity         Range       Accuracy                         Repeatability
CO2 in ppm       0 - 40,000  ± (30 ppm + 3 % MV)              ± 10 ppm
RH in % RH       0 - 100     ± 3                              ± 0.1
Ta in °C         -40 - +70   ± [0.4 + 0.023 (T - 25)] °C      ± 0.1
PM2.5 in µg/m³   0 - 1,000   ± 10                             /
PM10 in µg/m³    0 - 1,000   ± 10                             /

Figure 1. Comfort Eye: a) ceiling node and b) desk node.

2.1.3. System architecture

The Comfort Eye's nodes are connected through the local Wi-Fi network to the remote server, where data are sent, processed and then stored in a MySQL database. Figure 2 shows the general architecture. The communication module used is the Pycom W01. It is an efficient module, which reduces power consumption as it implements Bluetooth Low Energy and supports the Message Queuing Telemetry Transport (MQTT) protocol, which allows byte-only communication between client and server, resulting in a light communication suitable also for low Wi-Fi signal conditions. To integrate the Pycom W01, a custom printed circuit board (PCB) was developed. The MQTT communication protocol is used, and each node is programmed to be both a publisher and a subscriber, as shown in Figure 2. The first step to start the monitoring is the device configuration with a dedicated mobile application. The configuration data are sent from the mobile device: server information, local Wi-Fi credentials and room tag.
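The LED feedback logic described for the desk node maps a CO2 reading to a traffic-light colour. A minimal sketch, with the thresholds taken from the text (the function and colour names are illustrative, not part of the firmware described in the paper):

```python
def co2_led_colour(co2_ppm):
    """Map a CO2 reading (ppm) to the desk node's LED feedback colour.

    Thresholds as described in the text: below 700 ppm good air quality
    (green), 700-1,000 ppm close to the limit (yellow), above 1,000 ppm
    ventilation needed (red).
    """
    if co2_ppm < 700:
        return "green"
    elif co2_ppm <= 1000:
        return "yellow"
    return "red"
```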
Once the connection is established, the desk node sends the raw data directly to the server, where a subscriber function carries out the processing and storing actions. For the ceiling node, the scanning procedure is configured using the geometrical data of the room. The geometry is retrieved from the BIM, as explained in the next subsection. Once the tilt and pan angles have been defined and published to the sensors, the data acquisition starts. The raw data are sent to the server and processed by a subscriber; the processed data are stored in a MySQL database. The whole monitoring process takes place continuously and in real time. The data stored in the database are then available for being consumed. A dashboard for thermal performance assessment and data exploration has been developed. The dashboard is a web app accessible through any browser. The data processing core is served by a RESTful Application Programming Interface (API) service running on the server and called with standard GET requests. The user can view the measured data via the web app, obtaining information by consulting the related KPIs, and can have direct feedback on the IAQ by observing the LED colour of the desk node installed in the building. The two means of communication are independent of each other.

2.1.4. BIM integration

The building's data are required to configure the IR scanning process, to perform the geometrical map corrections and to calculate the IEQ KPIs. To reduce the manual operations, an integration with the BIM was developed. The BIM model of a building includes most of the required data, but not those related to the Comfort Eye installation. For this reason, the Comfort Eye BIM object was developed. The object can be imported into the BIM and added to each room where the sensor is installed. The IFC file can then be exported and processed with dedicated software based on the IfcOpenShell Python library [27].
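The publish/subscribe exchange described above can be sketched as follows; the topic layout, field names and the use of the paho-mqtt client are illustrative assumptions, not details given in the paper. The payload builder is kept as plain JSON so it works with any MQTT client:

```python
import json

def build_reading(room_tag, ta, rh, co2):
    """Serialise one desk-node sample as an MQTT topic and byte payload.

    The topic layout and JSON schema are hypothetical, for illustration only.
    """
    topic = f"comfort_eye/{room_tag}/desk"
    payload = json.dumps({"ta": ta, "rh": rh, "co2": co2}, separators=(",", ":"))
    return topic, payload.encode()  # MQTT payloads are byte strings

# Publishing would then use any MQTT client, e.g. the paho-mqtt package:
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.connect("server.example")            # server info from the mobile app
# topic, payload = build_reading("room_a", 21.3, 45.0, 780)
# client.publish(topic, payload)
```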
The software automatically retrieves the data and stores them in the MySQL database that is used for the automatic configuration of the Comfort Eye. The scope of using the BIM data is three-fold: i) to correct the IR thermal maps using the room geometry, the sensor position and the wall emissivity; ii) to allow the application of the angle factors method for the mean radiant temperature measurement according to ISO 7726 [20]; iii) to calculate the IEQ KPIs weighted by floor area. Thus, the BIM integration provides a way to configure the sensor automatically and therefore reduces the installation time.

2.2. IEQ measurement and assessment

In addition to the standard temporal series of data, the Comfort Eye can provide long-term indicators and KPIs to be used for thermal comfort and IAQ assessment according to the EN 16798 methodology, which is based on indicators and their boundaries for the building's classification [9]. There are four categories (I, II, III, IV) for the classification: Category I is the highest level of expectation and may be necessary if the building houses occupants with special requirements (children, elderly, occupants with disabilities, etc.); Category IV is the worst. To provide a good level of IEQ, a building is expected to operate at least as Category II. For each IEQ aspect, an hourly or daily indicator (I_h,i) is measured. For every room, the number of occupied hours outside the range (h_or,i) is calculated as the number of hours when I_h,i ≥ I_lim, where I_lim is the limit of the indicator determined by the targeted category. Thus, the percentage of occupied hours outside the range, POR_i [%], is calculated as follows:

POR_i = (h_or,i / h_tot) · 100 % ,   (1)

where h_tot [h] is the total number of occupied hours during the considered period.
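Equation (1), together with the floor-area weighting and the linear KPI scale described in this section, can be sketched as follows (function names are illustrative):

```python
def por(hours_outside, hours_total):
    """Percentage of occupied hours outside the range, eq. (1)."""
    return hours_outside / hours_total * 100.0

def building_por(room_pors, floor_areas):
    """Building-level POR: floor-area-weighted average of the room PORs, eq. (2)."""
    return sum(p * s for p, s in zip(room_pors, floor_areas)) / sum(floor_areas)

def kpi(por_value, max_por=5.0):
    """Linear KPI scale: POR = 0 % gives KPI = 100 %, POR >= 5 % gives KPI = 0 %."""
    if por_value >= max_por:
        return 0.0
    return (1.0 - por_value / max_por) * 100.0
```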
the por of the entire building is finally calculated as the average of the POR_i of each room, weighted according to the floor area (2):

POR = ( Σ_{i=1}^{n} POR_i · S_i ) / ( Σ_{i=1}^{n} S_i ) ,    (2)

where n is the total number of rooms in the building and S_i the floor area of the i-th room. the por is then used to determine the kpi. a por of 0 % corresponds to a kpi of 100 %, which is the best value achievable. a kpi equal to 100 % means that the indicator is always within the limits of the targeted category. a maximum deviation of 5 % of the por is considered acceptable. thus, for a por equal to or higher than 5 %, the corresponding kpi is 0 % (worst case), which means the building is operating for a significant period outside the limits of the targeted category.

figure 2. general architecture of comfort eye.
acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 234

to define an assessment scale, a linear interpolation between the minimum por (5 %) and the best por (0 %) is recommended to determine a scale of the kpi between 0 % (worst case) and 100 % (best case). the analysis is performed considering only the main occupied rooms (e.g., bedrooms, living room, kitchen), and does not consider short-term occupancy and transit areas (e.g., bathrooms, corridors, small storage areas). moreover, given that en 16798 does not cover all the ieq aspects, such as pm, additional indicators are included. the following subsections present in detail each indicator used for ieq assessment. the methodology provides indicators for the calculation of kpis for the thermal comfort and iaq assessment. 2.2.1. thermal comfort according to iso 7730 [11] and iso 7726 [20], “a human being’s thermal sensation is mainly related to the thermal balance of his or her body as a whole.
this balance is influenced by physical activity and clothing, as well as the environmental parameters: air temperature (ta), mean radiant temperature (tr), air velocity (va), relative humidity (rh), clothing insulation (icl) and metabolic rate (m). in a moderate environment, the human thermoregulatory system will automatically attempt to modify skin temperature and sweat secretion to maintain the heat balance. as the definition given in iso 7730 implies, thermal comfort is a subjective sensation, and it can be expressed with the mathematical model of the pmv (3), which is a function of four environmental parameters and two personal ones:

PMV = f(T_a, T_r, V_a, RH, I_cl, M) .    (3)

the air velocity (in m/s), based on the room ventilation system, is set to 0.05 m/s. the metabolic rate (m) is set as a function of the usual occupants’ activity using the table of typical metabolic rates available in iso 7730 [11]. the insulation level (icl) of the clothing generally worn by room occupants is set to 0.5 clo for the summer season and 0.9 clo for the winter season, using the table of typical clothing insulations available in iso 7730 [11]. the pmv predicts the mean value of the votes of a group of occupants on a seven-point thermal sensation scale. as the predicted quality of the indoor thermal environment increases, the pmv value gets closer to 0 (neutral thermal environment). a limit needs to be set on the hourly pmv values to identify the number of occupied hours outside an acceptable comfort range. iso 7730 defines the pmv limits for each category: category i is specified as -0.2 < pmv < +0.2, category ii as -0.5 < pmv < +0.5, category iii as -0.7 < pmv < +0.7 and category iv as -1 < pmv < +1. the benchmark is done considering a threshold of a maximum of 5 % of operating hours outside the pmv range. the best performance is achieved when there is no deviation outside the design range. so, a linear interpolation between 5 % and 0 % is done. 2.2.2.
indoor air quality iaq is known to have acute and chronic effects on the health of the occupants. it is generally expressed in terms of co2 concentration and of the ventilation required to contain co2 levels and reduce the concentration of indoor air pollutants dangerous for human health [28]. continuous co2 monitoring can provide a comprehensive and straightforward way to assess and measure improvements in building ventilation. the iaq assessment methodology, which provides the co2 kpi, is defined by en 16798, and where relevant and needed, iaq monitoring is enhanced with the monitoring of pm. the iaq measurement shall be made where occupants are known to spend most of their time. the kpi aggregates the values at the building level to provide an overall value, but to identify critical issues it is recommended to analyse all room values. the hourly co2 concentration values above outdoor are assessed against a safety threshold to identify the number of hours outside an acceptable comfort range, and the room values are aggregated through a floor-area-weighted average. the co2 limits are 550 ppm for category i, 800 ppm for category ii, 1350 ppm for category iii and greater than 1350 ppm for category iv. according to en 16798, an acceptable amount of deviation is 5 % of occupied hours. the best performance is achieved when there are no deviations outside the design limit. to define an assessment scale, a linear interpolation between the minimum (5 %) and the best performance (0 %) is done. pm is a widespread air pollutant, consisting of a mixture of solid and liquid particles suspended in the air. commonly used indicators describing pm that are relevant to health refer to the mass concentration of particles with a diameter of less than 10 µm (pm10) and particles with a diameter of less than 2.5 µm (pm2.5). the methodology for pm2.5 and pm10 assessment is provided by epa victoria (australia).
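the pmv and co2 category limits quoted above can be wrapped in small classifiers; a sketch using exactly the bounds given in the text (function names are ours):

```python
# iso 7730 |pmv| bounds per category, as quoted in the text
PMV_LIMITS = [(0.2, "I"), (0.5, "II"), (0.7, "III"), (1.0, "IV")]

def pmv_category(pmv: float) -> str:
    """return the first category whose symmetric pmv band contains the value."""
    for limit, category in PMV_LIMITS:
        if abs(pmv) < limit:
            return category
    return "outside IV"

def co2_category(ppm_above_outdoor: float) -> str:
    """en 16798 co2-above-outdoor categories, thresholds as listed in the text."""
    if ppm_above_outdoor <= 550:
        return "I"
    if ppm_above_outdoor <= 800:
        return "II"
    if ppm_above_outdoor <= 1350:
        return "III"
    return "IV"
```

the hourly category series produced this way feeds the same 5 % deviation rule and linear kpi scale described for the por.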
the categories are obtained according to the hourly and 24-hour rolling average pm concentrations, as shown in table 2 and table 3 [29]. an acceptable amount of deviation from the optimal category (very good) is 5 %. the best performance is achieved when there are no deviations outside the design limit. to define an assessment scale, a linear interpolation between the minimum (5 %) and the best performance (0 %) is done.

table 2. pm2.5 concentration for ieq classification.
category | 24-hr pm2.5 (µg/m³) | 1-hr pm2.5 (µg/m³)
very good | 0-8.2 | 0-13.1
good | 8.3-16.4 | 13.2-26.3
fair | 16.5-25.0 | 26.4-39.9
poor | 25.1-37.4 | 40-59.9
very poor | 37.5 or greater | 60 or greater

table 3. pm10 concentration for ieq classification.
category | 24-hr pm10 (µg/m³) | 1-hr pm10 (µg/m³)
very good | 0-16.4 | 0-26.3
good | 16.5-32.9 | 26.4-52.7
fair | 33-49.9 | 52.8-79.9
poor | 50-74.9 | 80-119.9
very poor | 75 or greater | 120 or greater

3. experimental application the comfort eye was installed in two rooms of a nursery school in gdynia and the data collection started in july 2018. the scope of the monitoring was the assessment of the envelope performance and ieq before and after the renovation works (figure 3). the demonstration building is a two-story kindergarten building, attended by about 130 children. it was constructed in 1965 and has had the function of kindergarten from the beginning. the building volume is 2712 m³ and the built-up area is 464 m². the main goal of the demonstration was to minimize the energy consumption, especially for heating needs, through the retrofitting of the envelope (adding an insulation layer), installing new windows and improving the aesthetic appearance of the envelope. a comprehensive thermal insulation retrofit should create a continuous insulated envelope around the living accommodation and, ideally, avoid any residual thermal bridging of the structure.
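the pm bounds of tables 2 and 3 in section 2.2.2 can be encoded as lookup data so that the hourly and 24-hour rolling averages are classified consistently; a sketch using the bounds as printed (the data structure and names are ours):

```python
# upper bounds per category, taken from tables 2 and 3 (epa victoria)
PM_BOUNDS = {
    ("pm2.5", "24h"): [(8.2, "very good"), (16.4, "good"), (25.0, "fair"), (37.4, "poor")],
    ("pm2.5", "1h"):  [(13.1, "very good"), (26.3, "good"), (39.9, "fair"), (59.9, "poor")],
    ("pm10", "24h"):  [(16.4, "very good"), (32.9, "good"), (49.9, "fair"), (74.9, "poor")],
    ("pm10", "1h"):   [(26.3, "very good"), (52.7, "good"), (79.9, "fair"), (119.9, "poor")],
}

def pm_category(pollutant: str, window: str, concentration: float) -> str:
    """classify a pm concentration (µg/m³) for the given averaging window."""
    for upper, category in PM_BOUNDS[(pollutant, window)]:
        if concentration <= upper:
            return category
    return "very poor"
```

as with the other indicators, the resulting category series is then assessed against the 5 % deviation rule.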
the focus of the renovation works was to achieve the targeted quality and performance, improving the ieq. 4. discussion of results monitoring started in july 2018; the comfort eye is still installed and is still acquiring data. the acquired data were analysed for the winter and summer seasons, before (pre) and after (post) renovation, for the assessment of the thermal comfort and iaq building performance and the validation of the improved ieq. the renovation works were done from august to october 2019. the selected periods for the ieq analysis are august 2018 and august 2020 for the summer season in the pre and post renovation states respectively, and february 2019 and february 2020 for the winter season in the pre and post renovation states respectively. the most significant weeks of the selected periods for the analysis are reported with the relative average outdoor temperature in table 4. the weather data were gathered from the station igdyni26 of the weather underground web service. the building is equipped with a centralized heating system with radiators in each room, without a thermostat. no cooling system is present. 4.1. thermal comfort concerning thermal comfort, the pmv model was applied using a metabolic rate of 1.2 met (typical office/school activity) and a clothing insulation of 0.9 clo, suitable for the end-use of the building and the typical climate in poland in the winter season, and a metabolic rate of 1.2 met and a clothing insulation of 0.5 clo for the summer season. to compare the pre and post thermographic maps, days with the same average outside temperature have been selected: 6 °c (26/02/2019-20) and 25 °c (09/08/2018-20) for winter and summer respectively. the recap of the kpis calculated for the winter and summer seasons is shown in table 5. the mean radiant temperature (tr) pre and post renovation is shown in figure 4. figure 5 shows the thermal maps, maintaining the same scale in the pre and post renovation phases to show the different wall temperatures reached.
figure 5 (a-b) shows the thermal maps of the winter season, pre and post renovation, and in figure 6 (a-b) it is also possible to observe the trend of the temperature of the wall, of the window and of the internal air. although the external temperatures were the same, it can be seen from the graphs and the thermographic maps that the wall temperature pre renovation is colder than the wall temperature post renovation. the monitoring provided evidence that the renovation has increased the insulation capacity of the building. before renovation, the building operated for the winter season in categories iv, iii and ii, with a kpi of 0 %. after renovation it operated most of the time in category i, with a kpi of 100 %. this provides an indication of uncomfortable conditions pre renovation and comfortable conditions post renovation. figure 5 (c-d) shows the thermographic maps of the summer season, and in figure 6 (c-d) it is also possible to observe the trend of the temperature of the wall, the window, and the internal air. the wall post renovation is colder than the wall pre renovation, which confirms the better insulation after the intervention. in figure 5 (c-d) the hottest areas represent the window.

figure 3. gdynia before and after the renovation, comfort eye installed in the building.
figure 4. tr measured with the comfort eye before and after renovation.

table 4. selected periods for the summer and winter seasons, pre and post renovation states respectively, and outdoor mean temperature.
season | pre period | pre out t | post period | post out t
summer | 6/8/2018 - 12/8/2018 | 21.3 °c | 3/8/2020 - 9/8/2020 | 19.0 °c
winter | 25/2/2019 - 3/3/2019 | 2.9 °c | 24/2/2020 - 1/3/2020 | 2.8 °c

for the summer season, the building operated in categories iv, iii and ii before renovation and in categories iii, ii and i after renovation. in any case, a kpi of 0 % was registered, and this is due to the absence of a cooling system.
thus, only a slight improvement in thermal comfort was registered for the summer season. the thermal comfort requirements were satisfied post renovation. however, pre renovation, the detailed analysis of the radiant temperatures revealed a clear problem due to the temperatures of the wall exposed to the exterior which, being lower than the other surfaces’ temperatures and the air temperature, caused a lower mean radiant temperature (see figure 4). the consequence of the lower mean radiant temperature was uncomfortable conditions, resulting in a poor kpi. the post renovation analysis proved that the intervention mitigated that problem thanks to the installation of the envelope panels, which increased the wall insulation with a consequent minor drift of the indoor surfaces’ temperatures. 4.2. iaq table 5 shows the iaq building performance pre and post renovation. considering the winter season, the co2 kpi pre renovation is 0 % and improved to 20 % post renovation. the kpi of pm2.5 is 0 % for both pre and post renovation. the kpi for pm10 is 40 % pre renovation and 60 % post renovation. no ventilation was installed before or after the renovation. the iaq kpis registered only a slight improvement, which was caused by the introduction of the led communication in the post renovation phase. the led light was able to change colour as a function of the co2 concentration and, as shown in figure 7, it was able to trigger window opening by the occupants when poor air quality was highlighted by the light. this underlines how simple communication can improve air quality by changing the habits of the occupants. considering the summer season, the co2 kpi is 30 % for pre renovation and 75 % for post renovation.

figure 5. thermographic images pre and post renovation for the summer and winter seasons.
figure 6. window, wall and air temperature pre and post renovation for the summer and winter seasons.
table 5. ieq kpis.
kpi | summer pre | summer post | winter pre | winter post
thermal comfort | 0 % | 100 % | 0 % | 0 %
iaq-co2 | 0 % | 20 % | 30 % | 75 %
iaq-pm2.5 | 0 % | 0 % | 85 % | 50 %
iaq-pm10 | 40 % | 60 % | 90 % | 60 %

the kpi of pm2.5 is 85 % pre renovation and 50 % post renovation. the kpi for pm10 is 90 % and 60 % for pre and post respectively. in this case, the final iaq is provided by natural ventilation through the opened windows. concerning the pms, high concentration levels were registered. the source of those concentrations requires further investigation. lower levels of pm2.5 and pm10 were measured in the summer period. the installation of a ventilation system with air purifiers could be considered to mitigate the problem. the results derived from the case study demonstrate the feasibility of the proposed innovative iot system for the continuous measurement of all the parameters required for an accurate analysis of the ieq. this system allows long-term monitoring to be carried out and, therefore, the ieq to be evaluated in the most significant seasons. in the presented application, both communication means were used. the occupants were able to take advantage of the real-time iaq advice given by the leds. the web-app was used to generate seasonal reports with the ieq kpis, which were used to evaluate the optimal renovation strategy and the impact of the renovation works. 5. conclusion the comfort eye is an innovative and non-intrusive solution, allowing the implementation of an iot sensing system for a complete, continuous, and long-term monitoring of the ieq. together with the sensing device, a set of performance indicators has been developed to assess the overall quality in terms of thermal comfort and indoor air quality. this paper demonstrates the applicability and advantages of the proposed measurement device with the application to a real case study. a pre and post renovation analysis was performed to validate the developed monitoring methodology.
the ceiling node, given its high spatial and temporal resolution compared with traditional tools, provides two main advantages. first, multipoint measurements of the mean radiant temperature with only one sensor: given the geometry of the room, several locations can be chosen to apply the calculation of the mean radiant temperature (e.g., near and far from the window). second, together with information about comfort, thermal maps of the indoor surfaces are available and can be used to track the thermal performance of the building envelope (e.g., tracking the surface temperature of the external wall, recognizing cool zones, etc.). the ir scanner of the comfort eye turned out to be useful for accurately detecting thermal comfort issues because of its capability of measuring thermal maps and mean radiant temperatures. to this aim, the accuracy and completeness of the bim data, such as material emissivity and geometry, are pivotal and should be guaranteed by a standardized bim modelling approach. the monitoring allowed the causes of the thermal discomfort to be identified. in the pre renovation phase the building operated for a significant period (more than 5 %) out of the targeted category, with a kpi of 0 % (worst conditions). the mean radiant temperature was the most significant cause of discomfort, since the building had poor thermal insulation of the walls. the monitoring has confirmed that, by working on the causes, the performance of the building can be improved, with a consequent minor drift of the indoor surfaces’ temperatures. in the post renovation phase the building operates within the limits of the targeted category, with a kpi of 100 %. the desk node allows a more detailed view of the performance of the building, providing information regarding the air quality. furthermore, the large set of information available from the measured data can be used to identify ieq problems and their origin. such advanced knowledge of the building performance allows a better design of the renovation.
concerning the iaq, the monitoring demonstrated that the building operates, in both the pre and post renovation phases, outside the limit of the targeted category (more than 5 %). no mechanical ventilation system was installed with the renovation, and the installation of a ventilation system with air purifiers could be considered to mitigate the problem. the iaq kpis registered only a slight improvement, which was caused by the introduction of the led communication in the post-renovation phase. the capability of communicating the status of the indoor air quality with a simple led colour, based on real-time measurements, triggered occupants’ actions with the scope of restoring the required environmental quality. future developments will provide improvements of the measurement system to enhance the plug&play installation and to include additional sensors for the evaluation of other ieq aspects, such as visual and acoustic comfort. acknowledgement this research has received funding from the p2endure project. the p2endure research project (https://www.p2endureproject.eu/en) is co-financed by the european union within the h2020 framework programme under contract no. 723391. the authors want to thank the project partners for the useful discussions and collaboration. references [1] eu building stock observatory. online [accessed 17 december 2021] https://ec.europa.eu/energy/topics/energy-efficiency/energy-efficient-buildings/eu-bso_en [2] a. g. kwok, n. b. rajkovich, addressing climate change in comfort standards, building and environment, 2010, vol. 45, issue 1, pp. 18-22. [3] a. h. yousef, m. arif, m. katafygiotou, a. mazroei, a. kaushik, e. elsarrag, impact of indoor environmental quality on occupant well-being and comfort: a review of the literature, international journal of sustainable built environment, 2016, vol. 5, pp. 1-11. doi: 10.1016/j.buildenv.2009.02.005 [4] i. mujan, a. s. anđelković, v. munćan, m. kljajić, d.
ružić, influence of indoor environmental quality on human health and productivity – a review, journal of cleaner production, 2019, vol. 217, pp. 646-657. doi: 10.1016/j.jclepro.2019.01.307 [5] e. oldham, h. kim, ieq field investigation in high-performance, urban elementary schools, atmosphere, 2020, 11, 81. doi: 10.3390/atmos11010081 [6] k. azuma, n. kagi, h. kim, m. hayashi, impact of climate and ambient air pollution on the epidemic growth during covid-19 outbreak in japan, environ. res., 2020, 190, 110042. doi: 10.1016/j.envres.2020.110042 [7] m. a. zoran, r. s. savastru, d. m. savastru, m. n. tautan, assessing the relationship between surface levels of pm2.5 and pm10 particulate matter impact on covid-19 in milan, italy, science of the total environment, 2020, vol. 738, 139825. doi: 10.1016/j.scitotenv.2020.139825 [8] a. m. atzeri, f. cappelletti, a. tzempelikos, a. gasparella, comfort metrics for an integrated evaluation of buildings performance, energy and buildings, 2016, vol. 127, pp. 411-424. doi: 10.1016/j.enbuild.2016.06.007 [9] cen, en 16798 indoor environmental input parameters for design and assessment of energy performance of buildings addressing indoor air quality, thermal environment, lighting and acoustics, cen, 2019. [10] a. kylili, p. a. fokaides, p. a. l. jimenez, key performance indicators (kpis) approach in buildings renovation for the sustainability of the built environment: a review, renewable and sustainable energy reviews, 2016, vol. 56, pp. 906-915.

figure 7. co2 concentration measured with the comfort eye before and after renovation.
doi: 10.1016/j.rser.2015.11.096 [11] iso, iso 7730 ergonomics of the thermal environment – analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria, international organization for standardization, geneva, 2005. [12] d. khovalyg, o. b. kazanci, h. halvorsen, i. gundlach, w. p. bahnfleth, j. toftum, b. w. olesen, critical review of standards for indoor thermal environment and air quality, energy and buildings, 2020, vol. 213. doi: 10.1016/j.enbuild.2020.109819 [13] y. song, f. mao, q. liu, human comfort in indoor environment: a review on assessment criteria, data collection and data analysis methods, ieee access, 2019, vol. 7, pp. 119774-119786. doi: 10.1109/access.2019.2937320 [14] l. claudi, m. arnesano, p. chiariotti, g. battista, g. m. revel, a soft-sensing approach for the evaluation of the acoustic comfort due to building envelope protection against external noise, measurement, 2019, vol. 146, pp. 675-688. doi: 10.1016/j.measurement.2019.07.003 [15] t. s. larsen, l. rohde, k. t. jønsson, b. rasmussen, r. l. jensen, h. n. knudsen, t. witterseh, g. bekö, ieq-compass – a tool for holistic evaluation of potential indoor environmental quality, building and environment, 2020, vol. 172, 106707. doi: 10.1016/j.buildenv.2020.106707 [16] b. d. hunn, j. s. haberl, h. davie, b. owens, measuring commercial building performance – protocols for energy, water, and indoor environmental quality, ashrae journal, 2012, 54 (7), pp. 48-59. [17] f. seri, m. arnesano, m. m. keane, g. m. revel, temperature sensing optimization for home thermostat retrofit, sensors, 2021, 21, 3685. doi: 10.3390/s21113685 [18] i. atmaca, o. kaynakli, a. yigit, effects of radiant temperature on thermal comfort, building and environment, 2007, vol. 42, issue 9, pp. 3210-3220. doi: 10.3390/buildings11080336 [19] g. gan, analysis of mean radiant temperature and thermal comfort.
building services engineering research and technology, 2001, 22(2), pp. 95-101. doi: 10.1191/014362401701524154 [20] iso, iso 7726 ergonomics of the thermal environment – instruments for measuring physical quantities, international organization for standardization, geneva, 2002. [21] g. m. revel, e. sabbatini, m. arnesano, development and experimental evaluation of a thermography measurement system for real-time monitoring of comfort and heat rate exchange in the built environment, measurement science and technology, 2012, 23(3). doi: 10.1088/0957-0233/23/3/035005 [22] g. m. revel, m. arnesano, f. pietroni, development and validation of a low-cost infrared measurement system for real-time monitoring of indoor thermal comfort, measurement science and technology, 2014, vol. 25(085101). doi: 10.1088/0957-0233/25/8/085101 [23] l. zampetti, m. arnesano, g. m. revel, experimental testing of a system for the energy-efficient sub-zonal heating management in indoor environments based on pmv, energy and buildings, 2018, vol. 166, pp. 229-238. doi: 10.1016/j.enbuild.2018.02.019 [24] x. p. maldague, theory and practice of infrared technology for nondestructive testing, wiley-interscience, 2001, isbn: 978-0-471-18190-3. [25] health canada, residential indoor air quality guidelines: carbon dioxide, 2021. [26] rehva – federation of european heating, ventilation and air conditioning associations, co2 monitoring and indoor air quality. online [accessed 17 december 2021] https://www.rehva.eu/rehva-journal/chapter/co2-monitoring-and-indoor-air-quality [27] ifcopenshell academy. online [accessed 17 december 2021] https://academy.ifcopenshell.org/ [28] p. wargocki, d. p. wyon, j. sundell, g. clausen, p. o. fanger, the effects of outdoor air supply rate in an office on perceived air quality, sick building syndrome (sbs) symptoms and productivity, indoor air 10 (2000), pp. 222-236.
doi: 10.1034/j.1600-0668.2000.010004222.x [29] epa victoria, air pollution in victoria – a summary of the state of knowledge, publication 1709, august 2018.

measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques
acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-8
antonino quattrocchi1, damiano alizzio1, lorenzo capponi2, tommaso tocci3, roberto marsili3, gianluca rossi3, simone pasinetti4, paolo chiariotti5, alessandro annessi6, paolo castellini6, milena martarelli6, fabrizio freni1, annamaria di giacomo1, roberto montanini1
1 department of engineering, university of messina, messina, italy
2 aerospace department, university of illinois urbana-champaign, urbana, illinois, usa
3 department of engineering, university of perugia, perugia, italy
4 department of mechanical and industrial engineering, university of brescia, brescia, italy
5 department of mechanical engineering, politecnico di milano, milan, italy 6 department of industrial engineering and mathematical sciences, polytechnic university of marche, ancona, italy section: research paper keywords: additive manufacturing; digital image correlation; thermoelastic stress analysis; finite element modelling; contact pressure mapping citation: antonino quattrocchi, damiano alizzio, lorenzo capponi, tommaso tocci, roberto marsili, gianluca rossi, simone pasinetti, paolo chiariotti, alessandro annessi, paolo castellini, milena martarelli, fabrizio freni, annamaria di giacomo, roberto montanini , measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques, acta imeko, vol. 11, no. 3, article 13, september 2022, identifier: imeko-acta11 (2022)-03-13 section editor: francesco lamonaca, university of calabria, italy received march 17, 2022; in final form june 23, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by the ministry of education, university and research in italy (miur) under the research project of national interest (prin2015) “experimental techniques for the characterization of the effective performances of trabecular morphology structures realized in additive manufacturing”. corresponding author: antonino quattrocchi, e-mail: antonino.quattrocchi@unime.it 1. introduction additive manufacturing (am) or, more commonly, 3d printing is an emerging manufacturing technique, applied in several fields of industry [1], [2]. it allows to achieve some important goals such as weight reduction, development of complex shapes and use of a wide variety of materials [3]. 
abstract: additive manufacturing (am) is becoming a widely employed technique also in mass production. in this field, compliance with geometry and mechanical performance standards represents a crucial constraint. since 3d printed products exhibit a mechanical behaviour that is difficult to predict and investigate due to their complex shape and the inaccuracy in reproducing nominal sizes, optical non-contact techniques are an appropriate candidate to solve these issues. in this paper, 2d digital image correlation and thermoelastic stress analysis are combined to map the stress and strain performance of an airless wheel prototype. the innovative airless wheel samples are 3d-printed by fused deposition modelling and stereolithography in poly-lactic acid and photopolymer resin, respectively. the static mechanical behaviour for different wheel-ground contact configurations is analysed using the aforementioned non-contact techniques. moreover, the wheel-ground contact pressure is mapped, and a parametric finite element model is developed. the results presented in the paper demonstrate that several factors have great influence on 3d printed airless wheels: a) the type of material used for manufacturing the specimen, b) the correct transfer of the force line (i.e., the loading system), c) the geometric complexity of the lattice structure of the airless wheel. the work confirms the effectiveness of the proposed non-contact measurement procedures for characterizing complex-shaped prototypes manufactured using am.

acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 2

currently, am is applied to exploit the design flexibility of numerical topology optimization tools and to lead to the creation of innovative samples [4]. moreover, am is not only adopted to produce prototypes, but it can target mass production. the latter purpose implies compliance with many restrictions due to the need to satisfy the production standards [5].
functional quality is generally associated with the structural response during the application of static and dynamic loads. however, in 3d printed lattice structures, the mechanical response can be significantly different than expected [6]. the causes of this anomaly can be identified in the geometric complexity and in the inaccurate reproduction of the nominal shape by the am process [7]. although finite element numerical analyses have been performed [8], [9], the experimental measurement of the actual structural response of lattice components is of relevant interest for am technology [10], [11]. indeed, traditional techniques [12] are not suitable for investigating 3d printed lattice parts, for example due to their complex structures, non-conventional materials, and small size. non-contact methods represent powerful alternatives, providing remarkable results despite the demanding technological requirements [13], [14]. the state-of-the-art on this topic collects only a few works, which often do not investigate in depth the mechanical behaviour of the obtained product. although numerical finite element analyses provide a good overview [15], the experimental mechanical characterization has generally been limited to the estimation of conventional stress-strain curves, carried out by static and dynamic compression tests [16], [17]. other ancillary analyses were performed for the evaluation of the porosity of the material by scanning electron microscopy [18] and for the investigation of the composition of the crystalline structure of the produced materials by means of backscattering electron diffraction [19]. one of the first examples of experimental mechanical characterization by non-contact methods was discussed by brenne et al. [20]. they mapped the strain behaviour of a lattice, heat-treated titanium alloy specimen, subjected to both uniaxial and bending loads, using 2d digital image correlation (dic) [21], [22] and electron backscatter diffraction.
The results clarified the effect of the heat treatment and the weaknesses of the structure, but unfortunately the quality of the 2D DIC images [23], [24] did not make it possible to investigate the performance of the individual beams or to extract quantitative results. A more detailed inspection was provided by Vanderesse et al. [25]. They investigated the strain behaviour of porous lattice materials with body-centred cubic reinforced and diamond mesostructures, subjected to quasi-static compression up to failure. The strain maps, obtained using DIC, showed the localization of the most strained areas before and after sample failure, highlighting a diffuse distribution that strongly depends on the analysed reticular structure. In fact, it was generally noted that some struts exhibited a critical behaviour that quickly led to their collapse, while others were only slightly strained. Allevi et al. [26] performed a feasibility study of thermoelastic stress analysis (TSA) [27], [28] on a titanium-alloy space bracket made by electron beam melting. Their results showed the same load trends that can be identified at larger scales, but also small, unexpected peaks in both the TSA output and the theoretical outcomes calculated by finite element analysis. Quattrocchi et al. [29] evaluated the mechanical behaviour of a lumbar transforaminal interbody fusion cage implant, made by a 3D printing process adopting medical-grade titanium. Although these devices have a trabecular structure, useful for bone development, TSA made it possible to identify that, at the small scale, the complex geometry of the specimen determines local differences in the stress distribution, with intensification of the loads at the trabecular knots. Finally, Allevi et al. [30], [31] developed experimental protocols based on advanced non-contact measurement techniques to qualify the full mechanical behaviour of lattice structures.
They employed different techniques, such as 2D vision systems, 2D DIC, TSA and laser Doppler vibrometry (LDV), to investigate morphological characteristics, to map local stress-strain fields and to analyse the modal behaviour of simple lattice structures. This paper is the extended version of the one presented at the IEEE I2MTC 2020 [32] and focuses on the evaluation of local stress-strain mapping. Consequently, the aim of this work is to measure the mechanical behaviour of an airless wheel [33], [34] with a complex lattice structure, obtained through AM, by adopting non-contact optical techniques. Indeed, unlike traditional methods, which are inefficient and even inapplicable in such conditions, non-contact optical techniques are appropriate candidates for obtaining full-field information without altering the object of study. Two airless wheel samples, manufactured using different printing technologies (fused deposition modelling, FDM, and stereolithography, SLA) and materials (poly-lactic acid, PLA, and photopolymer resin, PPR), have been investigated. A measurement procedure based on DIC and TSA has been applied to achieve combined full-field strain and stress measurements for the different wheel-ground contact configurations. Furthermore, the wheel-ground contact pressure (CP) has been mapped, estimating the load transfer. Finally, a parametric finite element model has been developed and compared with the results obtained from the experimental approaches.

2. Materials and methods

2.1. Airless wheel prototypes

The airless wheel prototype (Figure 1) was specifically manufactured according to [4]. The geometry is designed following a regular pattern of fixed angular amplitude (36°). The pattern is then extruded in the axial direction of the wheel. The lattice structure is obtained by connecting the intersection points of four circular crowns of equal width along the diameter and ten circular sectors of the wheel through a "zig-zag" criterion.
The thickness of each trabecula is kept constant throughout. While FDM builds a model by successive layering of fused material, SLA adopts a 3D printing method based on a photochemical process: a laser beam is focused on a liquid PPR to trigger the curing and solidification of the monomers on a building platform. The raw structure is washed in isopropyl alcohol; the supports needed for the realization of the product are removed, and the grafting points of the supports are smoothed. Finally, a post-cure process is performed to complete the solidification and to improve the mechanical properties of the printed structure.

Figure 1. Frontal view of the airless wheel prototype with lattice morphology.

The SLA sample (Figure 2 a) was realized with a Form 2 printer (Formlabs Inc., USA) using a standard PPR, and the post-cure was performed by exposing the raw sample to UV light (wavelength of 405 nm) for 30 min at 60 °C (15 min for each side). The FDM sample (Figure 2 b) was instead obtained with an Ultimaker 2+ printer (Ultimaker B.V., Utrecht, NL) using PLA; in this case, no post-cure is necessary. Table 1 reports the parameters adopted for the 3D printing processes.

2.2. Experimental setup

The mechanical behaviour was studied using an experimental setup consisting of a loading system, a digital camera with lighting projectors for DIC measurements, an infrared (IR) camera suited to TSA measurements, and an acquisition system with a piezoresistive sensor for wheel-ground CP estimation (Figure 3). A rubber tread was not yet considered for the tests performed and discussed in this paper. The stress and strain fields were mapped by applying specific load stages to the wheel: 2D DIC was performed under a static load, while TSA was implemented under a dynamic load with a harmonic trend.
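The lattice construction just described (36° sectors, four crowns of equal width, "zig-zag" diagonal connections) is algorithmic, so it can be illustrated with a short sketch. The dimensions `r_hub` and `r_rim` below are hypothetical, and the alternating-diagonal rule is only one plausible reading of the "zig-zag" criterion; this is an illustration, not the authors' actual CAD procedure.

```python
import math

def lattice_nodes(r_hub=20.0, r_rim=60.0, n_crowns=4, n_sectors=10):
    """Node grid: (n_crowns + 1) radial stations x n_sectors angular stations.
    Four crowns of equal width and ten 36-deg sectors, as in the text;
    r_hub and r_rim are illustrative values, not the real wheel dimensions."""
    dr = (r_rim - r_hub) / n_crowns
    dth = 2.0 * math.pi / n_sectors          # 36 deg angular amplitude
    return [[(r_hub + i * dr) * complex(math.cos(j * dth), math.sin(j * dth))
             for j in range(n_sectors)] for i in range(n_crowns + 1)]

def zigzag_beams(nodes):
    """Connect consecutive rings with alternating (+1 / -1 sector) diagonals,
    one plausible reading of the 'zig-zag' connection criterion."""
    beams = []
    n_sectors = len(nodes[0])
    for i in range(len(nodes) - 1):
        for j in range(n_sectors):
            jj = (j + (1 if i % 2 == 0 else -1)) % n_sectors
            beams.append((nodes[i][j], nodes[i + 1][jj]))
    return beams

rings = lattice_nodes()
beams = zigzag_beams(rings)
print(len(rings), len(beams))   # -> 5 40
```

The 2D node pattern generated this way would then be extruded along the wheel axis, as the text describes.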
Figure 2. 3D-printed samples manufactured by means of: a) SLA and b) FDM technologies.

Figure 3. Schematic representation of the experimental setup used for DIC and TSA measurements; CPS = contact pressure sensor.

Table 1. 3D printing parameters used for the airless wheel prototypes.
  Parameter                   SLA        FDM
  Material                    Black V4   PLA red
  Support points size (mm)    0.6        -
  Layer thickness (µm)        50         100
  Printing time (h)           9.15       23.33
  Number of layers            1003       258
  Material volume or weight   72 ml      56 g
  Orientation (°)             15         0

The load was transferred to the wheel by a dedicated apparatus (Figure 4), i.e., a shaft and a loading frame specifically designed and manufactured. The airless wheel was locked by a bolted connection. This arrangement prevented the rotation of the wheel around its axis. The load was then applied in the vertical direction (y-axis). The loading condition was driven by an actuator, and the effective load provided was measured through a load cell. More in detail, two slightly different loading systems were used: an electromechanical material testing machine (mod. ElectroPuls E3000, Instron, Norwood, MA, USA) equipped with a calibrated 5 kN load cell was employed for the tests on the SLA wheel, while an electrodynamic shaker (mod. LDS V650, Brüel & Kjær, Nærum, Denmark) with a dedicated power amplification unit was used for the tests on the FDM wheel.

Figure 4. Loading apparatus for DIC and TSA measurements.

The morphology of the lattice structure of the airless wheel does not guarantee the same stress and strain distributions at the wheel hub, because these depend on the topology of the angular portion of the wheel corresponding to the contact patch. Consequently, three different contact configurations have been analysed: mixed, rhombic and trapezoidal; this labelling refers to the frontal geometric topology of the lattice region (Figure 5).

Figure 5. Frontal view of the wheel lattice structure for different wheel-ground contact configurations: a) rhombic, b) mixed and c) trapezoidal.

2D DIC measurements required a preliminary preparation of the target surface (frontal view) of the wheel in order to create a suitable random speckle pattern [35]. This was done by spraying a matt white paint, hence obtaining a high-contrast surface given the natural black colour of the wheel material. The images were taken using a Nikkor AF Micro 200 mm lens mounted on a Nikon D3100 digital camera (for the tests on the SLA-printed wheel) and a Canon EF 200 mm lens with a Canon EOS 7D digital camera (for the tests on the FDM-printed wheel). The two imaging systems ensure approximately the same field of view (FOV), but with different spatial resolutions. Diffuse illumination was obtained using LED lights; this was adopted to improve the image quality and to further increase the contrast. Particular attention was paid to checking the image quality and to verifying that no saturation occurred in the images. DIC tests were carried out at different load levels, synchronizing the frame rate to the load increase (i.e., 1 frame recorded every 1 N load step). The post-processing of the DIC tests was performed off-line using the GOM Correlate Pro software (GOM GmbH, Braunschweig, Germany). Table 2 reports the details of the two imaging systems and the load configuration for the 2D DIC measurements. The wheel-ground CP was acquired during the DIC tests using a flexible, thin, piezoresistive sensor (mod. 5040N), wired to a dedicated acquisition system (Evolution™ handle, Tekscan, South Boston, MA, USA) and sampled at 1 Hz. The characteristics of the CP system are described in Table 3. TSA was carried out in lock-in mode, i.e. by synchronizing the IR system frame rate to the applied harmonic load. A preliminary study was conducted to determine the correct excitation parameters.
This made it possible to obtain quasi-adiabatic conditions and a thermoelastic signal with a high signal-to-noise ratio (SNR). These parameters are strongly related to the thermal and mechanical properties of the printing material, more specifically to its heat capacity and stiffness. Consequently, the working parameters are slightly different for the SLA and FDM prototypes. Tests were performed with specific pre-load and overload levels, but with the same sampling frequency of 100 Hz. The IR images were taken using a FLIR A6751sc camera (SLA-printed wheel) and a FLIR SC7600 camera (FDM-printed wheel). Both IR cameras have a spatial resolution of 640 × 512 pixels and an InSb detector with a thermal resolution of 20 mK at room temperature. Table 4 summarizes the characteristics of the two imaging systems and the load configuration used for the TSA measurements.

Table 2. Details of the system configurations used for the 2D DIC measurements.
  Parameter                  SLA           FDM
  Camera                     Nikon D3100   Canon EOS 7D
  Focus distance (mm)        610           550
  f-stop                     f/7.1         f/4.5
  Exposure time (s)          1/40          1/30
  ISO* sensitivity           2000          6400
  Focal distance (mm)        200           100
  Image resolution (pixel)   4608 × 3072   1920 × 1080
  Loads (N)                  0 to 200      0 to 250
  * International Organization for Standardization (ISO)

Table 3. Details of the measurement systems.
  CP technique                            SLA
  Software                                I-Scan
  5040N sensing area (L × W × D) (mm)     44.0 × 44.0 × 0.1
  5040N sensing resolution (sensel/cm²)   100.0
  Image scale (8 bit) (DL)                0-255
  Image resolution (pixel)                430 × 430
  Loads (N)                               0 to 200

Table 4. Details of the system configurations used for the TSA measurements.
  Parameter                  SLA                    FDM
  Thermal camera             FLIR Titanium SC7200   FLIR A6751sc
  Software                   Altair LI              FLIR ResearchIR + MATLAB
  Image resolution (pixel)   640 × 512              640 × 512
  Filter                     low-pass               median
  Load frequency (Hz)        5, 10, 15              5, 10, 15
  Preload (N)                20, 50, 80             50, 75, 100
  Peak-peak load (N)         5, 10, 15              20, 40, 60

3. Results and discussion

3.1. 2D DIC measurements

As an example, Figure 6 displays the full-field maps of the displacements of the airless wheels along the vertical direction (y-axis).

Figure 6. Displacement along the vertical direction (y-axis) for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the PPR sample at 200 N, and d) trapezoidal for the PLA one at 250 N; c) and d) have two distinct scales, differing by an order of magnitude.

Figure 7. Strain along the vertical direction (y-axis) for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the PPR sample at 200 N, and d) trapezoidal for the PLA one at 250 N; c) and d) have two distinct scales, differing by a factor of three.

For the PPR sample (SLA-printed), the displacement is uniformly high on the beams and nodes of the angular portion of the wheel when the wheel-ground contact takes place in the rhombic and mixed configurations (Figure 6 a and Figure 6 b). The trapezoidal configuration (Figure 6 c) shows a displacement mainly concentrated on the external ring of the wheel rather than on the beams (i.e., the weakest region), with respect to the unloaded stage. Furthermore, for the rhombic and trapezoidal configurations (Figure 6 a and Figure 6 c), the displacements appear symmetrical when comparing the loading line with the other sectors, which seem relatively unloaded. Finally, comparing the trapezoidal configurations of the two types of samples (Figure 6 c and Figure 6 d), they exhibit the same trend, although that of the PLA (FDM-printed) sample is significantly reduced: even though the PLA sample was subjected to a greater load (25 % more than the PPR sample), its maximum displacement is an order of magnitude smaller than that of the PPR wheel.
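Displacement maps such as those in Figure 6 are obtained by tracking subsets of the sprayed speckle pattern between the reference and deformed images. The sketch below, on synthetic data, shows the core matching step by zero-normalized cross-correlation at integer-pixel resolution; production software such as GOM Correlate Pro adds sub-pixel interpolation, subset shape functions and strain computation.

```python
import numpy as np

def zncc(a, b):
    # zero-normalized cross-correlation coefficient between two subsets
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_subset(ref, defo, top, left, size=21, search=5):
    """Find the integer displacement of one subset by maximizing ZNCC
    over a small search window. Sketch of the matching principle only."""
    subset = ref[top:top + size, left:left + size]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            cand = defo[top + dv:top + dv + size, left + du:left + du + size]
            c = zncc(subset, cand)
            if c > best:
                best, best_uv = c, (du, dv)
    return best_uv

rng = np.random.default_rng(0)
ref = rng.random((80, 80))                        # synthetic speckle pattern
defo = np.roll(ref, shift=(2, 3), axis=(0, 1))    # known 3 px x, 2 px y motion
print(match_subset(ref, defo, 30, 30))            # -> (3, 2)
```

Repeating this search over a grid of subsets yields the full-field displacement map, from which the strain field is differentiated.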
Figure 7 reports the full-field maps of the measured strain for the airless wheels along the vertical direction (y-axis). In each contact configuration investigated, the strain is well distributed over the lattice structure but concentrated at the nodes. The strain analysis demonstrates that flexural effects prevail, with maximum positive values (i.e. tensile strain) at the vertexes of the beams and maximum negative values (i.e. compressive strain) along the interconnecting segments. A marked symmetry can be highlighted in the trapezoidal and rhombic configurations (Figure 7 a and Figure 7 c). Only for the trapezoidal configuration does the most stressed region correspond to the sector along the loading line. Specifically, for the PPR sample (Figure 7 c), the external ring of the wheel shows a high strain value in the contact area; the elastic behaviour of the connecting beams of the lattice structure under relatively high loads is also very clear. Conversely, although the trend is similar to that measured in the former case, the PLA sample (Figure 7 d) shows a maximum strain greater than that of the PPR sample. This could be a consequence of the specific material used for manufacturing the wheel and of the loading system. Indeed, the shaker appeared less efficient in transferring the load than the electromechanical material testing machine, which was more rigid. The lower level of effective load transferred to the wheel is reflected in a higher noise level, which sometimes prevented an accurate identification of the strains. Figure 8 shows the trend of the y-strain along the trabeculae, from the external ring to the wheel hub, in the trapezoidal configuration for the PPR sample. As already discussed for Figure 7, Figure 8 highlights the flexural effects on the trabeculae.
Indeed, the maximum positive values correspond to the intersections of the trabeculae (points B, D, F and H), while the maximum negative values (close to points C, E, G and I) correspond to the centreline of the individual trabeculae. Furthermore, the considered trabeculae, two in the trapezoidal sector (B-D and D-F) and two in the rhombic sector (F-H and H-I), present approximately similar trends, which can reasonably be considered the same for all trabeculae of the wheel.

Figure 8. Y-strain along the trabeculae ("white line" in the strain map) for the PPR sample in the trapezoidal configuration at 200 N.

3.2. TSA measurements

In TSA measurements, the lock-in process allows the entire sequence of recorded IR images to be condensed into only two images, which represent the amplitude (Figure 9) and the phase (Figure 10) of the thermoelastic signal.

Figure 9. Amplitude of the TSA signal for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the PPR samples, and d) trapezoidal for the PLA ones (harmonic excitation at 5 Hz, with 15 N peak-to-peak load and 80 N preload).

Figure 10. Phase of the TSA signal for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the PPR samples, and d) trapezoidal for the PLA ones (harmonic excitation at 5 Hz, with 15 N peak-to-peak load and 80 N preload).

As is well known, the former shows the differential temperatures ΔT, which are linked to the sum of the principal components of the stress tensor. Also in this case, as already highlighted for the strain measurements, the loading apparatus used for transferring the load to the airless-wheel hub plays a key role in determining the effective stress level to which the structure is subjected.
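The lock-in processing mentioned above can be sketched per pixel as a correlation of the temperature sequence with a complex reference at the load frequency; under quasi-adiabatic conditions the resulting amplitude is proportional to the variation of the sum of the principal stresses (ΔT = -K T0 Δ(σ1 + σ2), with K the thermoelastic constant). The block below is a minimal illustration on synthetic data, not the processing chain of the commercial TSA software used here, which also handles calibration, drift and motion compensation.

```python
import numpy as np

def lockin(frames, fs, f_load):
    """Per-pixel lock-in demodulation of an IR sequence at the load frequency.
    frames: (n, h, w) temperature stack; fs: sampling rate in Hz.
    Returns the amplitude |dT| and phase maps, the two images into which
    TSA condenses a whole recording."""
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref = np.exp(-2j * np.pi * f_load * t)            # complex reference
    demod = np.tensordot(ref, frames - frames.mean(axis=0), axes=(0, 0)) * 2 / n
    return np.abs(demod), np.angle(demod)

# synthetic check: every pixel oscillating at 5 Hz with 20 mK amplitude,
# sampled at 100 Hz as in the tests described above
fs, f = 100.0, 5.0
t = np.arange(400) / fs
stack = (0.020 * np.sin(2 * np.pi * f * t))[:, None, None] * np.ones((1, 4, 4))
amp, ph = lockin(stack, fs, f)
print(round(float(amp[0, 0]), 3))   # -> 0.02
```

Because the demodulation rejects components at other frequencies, the amplitude map is robust to broadband thermal noise, which is what makes the 20 mK-level thermoelastic signal measurable.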
However, for both the PPR and PLA samples, TSA guarantees the correct identification of the most stressed regions, which are located at the vertex connections of the beams and confined to the lower sectors of the wheel. TSA measurements were performed off the frontal view of the wheel, i.e. by tilting the optical axis of the IR camera with respect to the wheel axis; more in detail, camera-wheel relative angles of 35° and 45° were used for the PPR and PLA samples, respectively. This configuration was introduced because the frontal view suffered from large edge effects due to the thin beams. Specifically, for the PPR sample, the rhombic and mixed configurations (Figure 9 a and Figure 9 b) exhibit a wide load distribution, marked at the vertex connections of the beams and in the sectors crossed by the loading line. Conversely, in the trapezoidal configuration (Figure 9 c) the most stressed areas are smaller and concentrated in the lower sectors of the wheel. Comparing the trapezoidal configuration of the PPR sample with that of the PLA one, the latter (Figure 9 d) presents a more limited load amplitude. The phase distribution (Figure 10) is in line with the expected distributions for all the wheel-test configurations.

3.3. CP mapping

The wheel-ground contact influences both the motion transfer and the wear of the rubber tread. Therefore, the evaluation of the CPs is useful for estimating both the functionality and the stability of a vehicle. As an example, Figure 11 shows the increase of the CPs for the trapezoidal configuration of the PPR wheel. The CP maps show quite good symmetry: the small dissimilarities are due to the connection of the airless wheel to the loading apparatus and the relative geometric tolerances. It is evident that the lattice structure has a decisive influence on the redistribution of the load.
Indeed, the beams, as previously emphasized, are the most rigid part of the airless wheel; hence the greatest CPs are estimated in correspondence with their mark on the ground. This matches well the results of the strain analysis when the wheel is tested in the same topology arrangement (Figure 7 c). Figure 12 compares the CP maps of the three different configurations at the maximum load (i.e. 200 N) on the PPR wheel. Also in this case, a good symmetry can be identified, except for the mixed configuration. The largest contact area is that of the trapezoidal configuration, while the average CP is greater for the mixed one. Finally, the rhombic configuration shows the smallest contact area, but also the best symmetry. The maximum CP value occurs in correspondence with the beam of the mixed configuration. In both the mixed and rhombic configurations, the buckling effect would seem to be negligible owing to the short arc of the external ring of the wheel between two successive beams.

3.4. Finite element analysis

As a further analysis, a finite element model (Figure 13) of the airless wheel was developed. This model was created to calculate the displacements and strains of the wheel in ideal conditions (in terms of geometry, loads and constraints). The structure was meshed with about 2.5 million solid tetrahedral elements, considering a fixed support at the wheel hub and a vertical static force at the wheel-ground interface. Furthermore, to improve the computation resolution, the mesh was refined in those areas where stress and strain were expected to be most relevant (Figure 14). As an example, Figure 15 displays the maps of the computed y-displacement and y-strain for the PLA airless wheel printed by FDM. The numerical simulation highlights the same trends observed in the experimental results.
The computed strain closely resembles the measured one, showing that the interconnection segments of the lattice structure are mainly subjected to bending, exhibiting both tensile and compressive stresses. Hence, the presented model might be used to analyse alternative configurations of the lattice geometry, laying the foundations for a structural optimization of the topology of the whole airless wheel.

Figure 11. CP map of the trapezoidal configuration for the PPR wheel.

Figure 12. Comparison between the CP maps for the configurations: a) rhombic, b) mixed and c) trapezoidal of the PPR wheel at 200 N.

Figure 13. Finite element model of the airless wheel.

4. Conclusions

This paper addresses the applicability of non-contact techniques, such as DIC and TSA, to measure the actual stress-strain field of complex lattice structures produced by AM. The chosen structure is based on the innovative airless wheel concept. This type of design could be widely applied to different vehicle models, taking advantage of its specific features. Globally, a puncture-proof wheel improves safety and reliability by reducing replacements due to critical events and by preventing a rapid decrease in vehicle stability. Furthermore, the mechanical response of such a structure could be enhanced by increasing the adhesion to the road surface and limiting the tread wear. The possible uses of an airless wheel are heterogeneous: for example, it could be a good solution for skateboards, exploiting the high deformability of the lattice structure, or for aircraft and aerospace vehicles, thanks to the possibility of reducing weight while maintaining the same performance. In summary, the airless wheel should be of great interest for the next generation of full-electric cars.
An airless wheel prototype, manufactured by different 3D printing technologies (FDM and SLA) and made of different polymeric materials (PLA and PPR), has been tested by employing the DIC and TSA techniques. The wheel-ground interaction has been studied by mapping the CPs, and a parametric finite element model of the same wheel has been developed. Furthermore, the experimental tests have been performed in two separate laboratories, using different loading systems and instrumentation. The results achieved have highlighted some critical aspects that should be considered in the characterization of such a system: the type of material used for manufacturing the sample, the loading system and the topology of the lattice structure. In this sense, a numerical model previously validated, at least in terms of contact-pressure distribution, can help in dealing with these issues. Indeed, the proposed model might be used to analyse alternative configurations of the lattice geometry, laying the foundations for a structural optimization of the lattice topology with specific objective functions (e.g., uniform stress distribution or minimum weight). According to these outcomes, the present study has confirmed the effectiveness of the non-contact techniques (DIC and TSA) for measuring the spatial distribution of both strain and stress fields in functional and complex structures obtained by AM. Specifically, these investigation techniques have shown that the mechanical response of a lattice structure exhibits considerable complexity. In fact, although strain and stress are well distributed over almost all regions, concentrations have been identified at the nodes of the trabeculae and in the lower sectors of the wheel.
For example, this suggests moving towards a topological optimization that involves increasing the thickness of the trabeculae at the nodes and along the external ring, thinning the section of the trabeculae at their centreline, and reducing the excess of material closer to the wheel hub.

Figure 14. Details of the mesh quality of the airless wheel in a) frontal view and b) 45° view.

Figure 15. Finite element analysis of the airless wheel: a) y-displacement (colour scale in mm) and b) y-strain (colour scale in %) at a load of 250 N on the PLA sample.

References
[1] T. D. Ngo, A. Kashani, G. Imbalzano, K. T. Nguyen, D. Hui, Additive manufacturing (3D printing): a review of materials, methods, applications and challenges, Compos. B Eng. 143 (2018), pp. 172-196. DOI: 10.1016/j.compositesb.2018.02.012
[2] G. Luchao, W. Wenwang, S. Lijuan, F. Daining, Damage characterizations and simulation of selective laser melting fabricated 3D re-entrant lattices based on in-situ CT testing and geometric reconstruction, Int. J. Mech. Sci. 157-158 (2019), pp. 231-242. DOI: 10.1016/j.ijmecsci.2019.04.054
[3] T. Pereira, J. V. Kennedy, J. Potgieter, A comparison of traditional manufacturing vs additive manufacturing, the best method for the job, Procedia Manuf. 30 (2019), pp. 11-18. DOI: 10.1016/j.promfg.2019.02.003
[4] V. Asnani, D. Delap, C. Creager, The development of wheels for the lunar roving vehicle, J. Terramechanics 46 (2009), pp. 89-103. DOI: 10.1016/j.jterra.2009.02.005
[5] M. Siebold, Additive manufacturing for serial production of high-performance metal parts, Mech. Eng. 141 (2019), pp. 49-50. DOI: 10.1115/1.2019-may5
[6] F. Caiazzo, V. Alfieri, B. D. Bujazha, Additive manufacturing of biomorphic scaffolds for bone tissue engineering, Int. J. Adv. Manuf. Syst. 113 (2021), pp. 2909-2923. DOI: 10.1115/1.2019-may5
[7] E. Umaras, M. S. Tsuzuki, Additive manufacturing - considerations on geometric accuracy and factors of influence, IFAC-PapersOnLine 50 (2017), pp. 14940-14945. DOI: 10.1016/j.ifacol.2017.08.2545
[8] A. Sutradhar, J. Park, D. Carrau, M. J. Miller, Experimental validation of 3D printed patient-specific implants using digital image correlation and finite element analysis, Comput. Biol. Med. 52 (2014), pp. 8-17. DOI: 10.1016/j.compbiomed.2014.06.002
[9] L. S. Dimas, M. J. Buehler, Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties, Soft Matter 10 (2014), pp. 4436-4442. DOI: 10.1039/c3sm52890a
[10] I. Gibson, D. Rosen, B. Stucker, M. Khorasani, Design for additive manufacturing, Addit. Manuf. Technol. (2021), pp. 555-607. DOI: 10.1007/978-3-030-56127-7_19
[11] G. Liu, X. Zhang, X. Chen, Y. He, L. Cheng, M. Huo, et al., Additive manufacturing of structural materials, Mater. Sci. Eng. R Rep. 145 (2021), 100596. DOI: 10.1016/j.mser.2020.100596
[12] A. Visco, C. Scolaro, T. Terracciano, R. Montanini, A. Quattrocchi, L. Torrisi, N. Restuccia, Static and dynamic characterization of biomedical polyethylene laser welding using biocompatible nano-particles, EPJ Web of Conferences 167 (2018), 05009. DOI: 10.1051/epjconf/201816705009
[13] V. M. R. Santos, A. Thompson, D. Sims-Waterhouse, I. Maskery, P. Woolliams, R. Leach, Design and characterisation of an additive manufacturing benchmarking artefact following a design-for-metrology approach, Addit. Manuf. 32 (2020), 100964. DOI: 10.1016/j.addma.2019.100964
[14] C. Camposeco-Negrete, J. Varela-Soriano, J. J. Rojas-Carreón, The effects of printing parameters on quality, strength, mass, and processing time of polylactic acid specimens produced by additive manufacturing, Prog. Addit. Manuf. (2021), pp. 1-20. DOI: 10.1007/s40964-021-00198-y
[15] N. Narra, J. Valášek, M. Hannula, P. Marcián, G. K. Sándor, J. Hyttinen, J. Wolff, Finite element analysis of customized reconstruction plates for mandibular continuity defect therapy, J. Biomech. 47 (2014), pp. 264-268. DOI: 10.1007/s40964-021-00198-y
[16] C. Ling, A. Cernicchi, M. D. Gilchrist, P. Cardiff, Mechanical behaviour of additively-manufactured polymeric octet-truss lattice structures under quasi-static and dynamic compressive loading, Mater. Des. 162 (2019), pp. 106-118. DOI: 10.1016/j.matdes.2018.11.035
[17] E. Alabort, D. Barba, R. C. Reed, Design of metallic bone by additive manufacturing, Scr. Mater. 164 (2019), pp. 110-114. DOI: 10.1016/j.scriptamat.2019.01.022
[18] F. Li, J. Li, G. Xu, G. Liu, H. Kou, L. Zhou, Fabrication, pore structure and compressive behavior of anisotropic porous titanium for human trabecular bone implant applications, J. Mech. Behav. Biomed. Mater. 46 (2015), pp. 104-114. DOI: 10.1016/j.jmbbm.2015.02.023
[19] G. Bi, C. N. Sun, H. C. Chen, F. L. Ng, C. C. K. Ma, Microstructure and tensile properties of superalloy IN100 fabricated by micro-laser aided additive manufacturing, Mater. Des. 60 (2014), pp. 401-408. DOI: 10.1016/j.matdes.2014.04.020
[20] F. Brenne, T. Niendorf, H. J. Maier, Additively manufactured cellular structures: impact of microstructure and local strains on the monotonic and cyclic behavior under uniaxial and bending load, J. Mater. Process. Technol. 213 (2013), pp. 1558-1564. DOI: 10.1016/j.jmatprotec.2013.03.013
[21] B. Pan, K. Qian, H. Xie, A. Asundi, Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review, Meas. Sci. Technol. 20 (2009), 062001. DOI: 10.1088/0957-0233/20/6/062001
[22] F. Lo Savio, M. Bonfanti, G. M. Grasso, D. Alizzio, An experimental apparatus to evaluate the non-linearity of the acoustoelastic effect in rubber-like materials, Polym. Test. 80 (2019), 106133. DOI: 10.1016/j.polymertesting.2019.106133
[23] D. De Domenico, A. Quattrocchi, D. Alizzio, R. Montanini, S. Urso, G. Ricciardi, A. Recupero, Experimental characterization of the FRCM-concrete interface bond behavior assisted by digital image correlation, Sensors 21 (2021), 1154. DOI: 10.3390/s21041154
[24] D. De Domenico, A. Quattrocchi, S. Urso, D. Alizzio, R. Montanini, G. Ricciardi, A. Recupero, Experimental investigation on the bond behavior of FRCM-concrete interface via digital image correlation, Proc. of the European Workshop on Structural Health Monitoring, July 2020, pp. 493-502. DOI: 10.1007/978-3-030-64908-1_46
[25] N. Vanderesse, A. Richter, N. Nuño, P. Bocher, Measurement of deformation heterogeneities in additive manufactured lattice materials by digital image correlation: strain maps analysis and reliability assessment, J. Mech. Behav. Biomed. Mater. 86 (2018), pp. 397-408. DOI: 10.1016/j.jmbbm.2018.07.010
[26] G. Allevi, M. Cibeca, R. Fioretti, R. Marsili, R. Montanini, G. Rossi, Qualification of additively manufactured aerospace brackets: a comparison between thermoelastic stress analysis and theoretical results, Measurement 126 (2018), pp. 252-258. DOI: 10.1016/j.measurement.2018.05.068
[27] L. Capponi, J. Slavič, G. Rossi, M. Boltežar, Thermoelasticity-based modal damage identification, Int. J. Fatigue 137 (2020), 105661. DOI: 10.1016/j.ijfatigue.2020.105661
[28] D. Palumbo, U. Galietti, Data correction for thermoelastic stress analysis on titanium components, Exp. Mech. 56 (2016), pp. 451-462. DOI: 10.1007/s11340-015-0115-0
[29] A. Quattrocchi, D. Palumbo, D. Alizzio, U. Galietti, R. Montanini, Thermoelastic stress analysis of titanium biomedical spinal cages printed in 3D, Proc. of QIRT 2020, Porto, Portugal, 10 July 2020, pp. 1-7. DOI: 10.21611/qirt.2020.098
[30] G. Allevi, P. Castellini, P. Chiariotti, F. Docchio, R. Marsili, R. Montanini, A. Quattrocchi, R. Rossetti, G. Rossi, G. Sansoni, E. P. Tomasini, Qualification of additive manufactured trabecular structures using a multi-instrumental approach, Proc. of I2MTC 2019, Auckland, New Zealand, 20-23 May 2019, pp. 1-6. DOI: 10.1109/i2mtc.2019.8826969
[31] G. Allevi, L. Capponi, P. Castellini, P. Chiariotti, F. Docchio, F. Freni, R. Marsili, M. Nartarelli, R. Montanini, S. Pasinetti, A. Quattrocchi, R. Rossetti, G. Rossi, S. Sansoni, E. P. Tomasini, Investigating additive manufactured lattice structures: a multi-instrument approach, IEEE Trans. Instrum. Meas. 69 (2020), pp. 2459-2467. DOI: 10.1109/tim.2019.2959293
[32] R. Montanini, G. Rossi, A. Quattrocchi, D. Alizzio, L. Capponi, R. Marsili, A. Di Giacomo, T. Tocci, Structural characterization of complex lattice parts by means of optical non-contact measurements, Proc. of I2MTC 2020, Dubrovnik, Croatia, 25-28 May 2020, pp. 1-6. DOI: 10.1109/i2mtc43012.2020.9128771
[33] Bridgestone airless tires. Online [Accessed 28 July 2022] https://www.bridgestonetire.com/tread-and-trend/tire-talk/airless-concept-tires
[34] Michelin, GM take the air out of tires for passenger vehicles. Online [Accessed 28 July 2022] https://www.michelin.com/en/press-releases/michelin-gm-take-the-air-out-of-tires-for-passenger-vehicles/
[35] H. Wang, H. Xie, Y. Li, J. Zhu, Fabrication of micro-scale speckle pattern and its applications for deformation measurement, Meas. Sci. Technol. 23 (2012), 035402. DOI: 10.1088/0957-0233/23/3/035402
for developing a combined fluorescence apparatus with digital radiography
leandro sottili1,2, laura guidorzi1,2, alessandro lo giudice1,2, anna mazzinghi3,4, chiara ruberto3,4, lisa castelli4, caroline czelusniak4, lorenzo giuntini3,4, mirko massi4, francesco taccetti4, marco nervo5, rodrigo torres6, francesco arneodo6, alessandro re1,2
1 dipartimento di fisica, università degli studi di torino, via pietro giuria 1, 10125 torino, italy
2 istituto nazionale di fisica nucleare (infn), sezione di torino, via pietro giuria 1, 10125 torino, italy
3 dipartimento di fisica e astronomia, università degli studi di firenze, via giovanni sansone 1, sesto fiorentino, 50019 firenze, italy
4 istituto nazionale di fisica nucleare (infn), sezione di firenze, via giovanni sansone 1, sesto fiorentino, 50019 firenze, italy
5 centro conservazione e restauro “la venaria reale”, piazza della repubblica, venaria reale, 10078 torino, italy
6 new york university abu dhabi, division of science, p.o. box 129188, saadiyat island, abu dhabi, united arab emirates
section: research paper
keywords: ma-xrf; digital radiography; pigments identification; paintings
citation: leandro sottili, laura guidorzi, alessandro lo giudice, anna mazzinghi, chiara ruberto, lisa castelli, caroline czelusniak, lorenzo giuntini, mirko massi, francesco taccetti, marco nervo, rodrigo torres, francesco arneodo, alessandro re, macro x-ray fluorescence analysis of xvi-xvii century italian paintings and preliminary test for developing a combined fluorescence apparatus with digital radiography, acta imeko, vol. 11, no.
1, article 6, march 2022, identifier: imeko-acta-11 (2022)-01-06
section editor: fabio santaniello, university of trento, italy
received march 7, 2021; in final form december 13, 2021; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project has received funding from: the european union’s horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 754511 (phd technologies driven sciences: technologies for cultural heritage – t4c); infn-chnet and compagnia di san paolo (nexto project, progetti di ateneo 2017).
corresponding author: alessandro lo giudice, e-mail: alessandro.logiudice@unito.it
abstract: the use of portable instruments for the preservation of artworks in heritage science is increasingly common. among the available techniques, macro x-ray fluorescence (ma-xrf) and digital radiography (dr) play a key role in the field, and a number of ma-xrf scanners and radiographic apparatuses have therefore been developed for this purpose. recently, the infn-chnet group, the network of the infn devoted to cultural heritage, has developed a ma-xrf scanner for in-situ analyses. the instrument is fully operative and has already been employed in museums, conservation centres and outdoor settings. in the present paper, the ma-xrf analyses conducted with this instrument on four italian artworks undergoing conservation treatment at the conservation centre ccr “la venaria reale” are presented. results of a preliminary test to combine dr with ma-xrf in a single apparatus are also shown.
1. introduction nowadays, the use of non-destructive, non-invasive x-ray-based techniques is well established in heritage science for the analysis and conservation of artworks [1]-[3]. the x-ray fluorescence (xrf) technique plays a fundamental role since it provides information on the elemental composition of painted surfaces, helping to identify the materials employed in artworks. when xrf is combined with scanning capability over macroscopic surfaces, the technique is referred to as macro x-ray fluorescence (ma-xrf) [4]. moreover, since most artworks cannot be transported to a laboratory for scientific analysis, e.g., because of their preciousness or considerable weight, an important class of instruments is made up of portable and transportable scanners [5]. a number of ma-xrf scanners are nowadays in use in heritage science, both commercial [6] and built in-house [7]-[9]. despite the high analytical capabilities of the ma-xrf technique, it is worth underlining the importance of a thorough multi-analytical approach for a better comprehension of the artworks. another well-established non-destructive, non-invasive and transportable x-ray technique is digital radiography (dr), whose potential is widely known [10] as a tool for conservators and art historians [11]. it is frequently used in combination with ma-xrf, by means of a dedicated instrument, to obtain more complete information on artworks, as in the case of paintings on canvas and on wooden panels [12], [13]. however, the possibility of employing a single apparatus integrating xrf and dr has not yet been well investigated [14]. the advantage would be a single x-ray tube enabling a straightforward combined analysis of the same area. in this work, the ma-xrf scanner [15] developed in-house by the cultural heritage network of the national institute of nuclear physics (infn-chnet) was used to analyse xvi-xvii century paintings under conservation at the centro conservazione e restauro (ccr) “la venaria reale” [16], located near torino.
to date, the infn-chnet network gathers 18 local divisions, 4 italian partners, among which the ccr “la venaria reale”, a second-level node in the network, and international partners such as the new york university abu dhabi (uae) [17]. moreover, a flat panel detector for dr coupled with a mini x-ray tube, to be used in a modified version of the infn-chnet ma-xrf scanner, was tested on a painting on canvas. information obtained by means of elemental mapping and radiography was combined for a better comprehension of the realisation of the artwork. 2. experimental set-up for the measurements presented in this paper, two set-ups were used: the ma-xrf scanner developed by the infn-chnet group for compositional information, and a mini x-ray tube combined with a flat panel detector for dr, which will be integrated in a modified version of the ma-xrf scanner in the near future. 2.1. the infn-chnet ma-xrf scanner the infn-chnet ma-xrf scanner (figure 1) is a compact (60 cm × 50 cm × 50 cm) and lightweight (around 10 kg) instrument. its main parts are the measuring head, a three-axis motor stage and a case containing all the electronics for acquisition and control. the measuring head is composed of an x-ray tube (moxtek©, 40 kv maximum voltage, 0.1 ma maximum anode current, 4 w maximum power, mo anode) with a brass collimator (typically 800 µm in diameter), a silicon drift detector (amptek© xr100 sdd, 50 mm2 effective active surface, 12.5 µm thick be window) and a telemeter (keyence ia100). the motor stage (physik instrumente©, travel ranges 30 cm horizontally (x axis), 15 cm vertically (y axis) and 5 cm in the z direction), holding the measuring head, is screwed onto the carbon-fibre case. the typical operating voltage is around 30 kv. signals are collected with a multi-channel analyser (caen model dt5780) and the whole system is controlled by a laptop. the control-acquisition-analysis software is developed within the infn-chnet network and allows both on-line and off-line analysis.
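the scanner stores, for each scan position, the coordinates and the acquired spectrum, and elemental maps are rendered in grey scale with the maximum intensity shown in white. a minimal sketch of how such a map can be built from per-pixel spectra is given below; the record layout and the function name are illustrative assumptions, not the actual infn-chnet file format or software.

```python
import numpy as np

# hypothetical per-pixel records: (x_mm, y_mm, spectrum); the real
# infn-chnet acquisition file format is not specified here.
def elemental_map(records, channel_lo, channel_hi):
    """sum counts in a characteristic-line channel window for each scan
    position and arrange them on the scan grid, normalised so that the
    maximum intensity is 1.0 (rendered white in a grey-scale image)."""
    xs = sorted({r[0] for r in records})
    ys = sorted({r[1] for r in records})
    img = np.zeros((len(ys), len(xs)))
    xi = {x: i for i, x in enumerate(xs)}
    yi = {y: i for i, y in enumerate(ys)}
    for x, y, spectrum in records:
        img[yi[y], xi[x]] = spectrum[channel_lo:channel_hi].sum()
    return img / img.max() if img.max() > 0 else img

# toy example: a 2 x 2 scan with fake 1024-channel spectra
rng = np.random.default_rng(0)
recs = [(x, y, rng.poisson(5, 1024)) for x in (0.0, 1.0) for y in (0.0, 1.0)]
m = elemental_map(recs, 100, 120)
```

as a rough order of magnitude, with the 1 mm step and 3 mm/s speed reported in section 3, a 280 mm × 70 mm area such as that of figure 3 corresponds to 70 scan lines of 280 mm, i.e. roughly 1.8 h of pure scanning time, excluding line-turnaround overheads.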
the output of the acquisition process is a file containing the scanning coordinates and, for each position, the acquired spectrum. for each map, a single element can be selected and shown over the scanned area, or over a part of it. using the raw data, the relative intensities of each element are shown in grey scale, in which the maximum intensity is white and the lowest is black. the scan is carried out along the x axis, and a step size of typically 1 mm is set on the y axis, resulting in a pixel size of 1 mm2. a complete review of the instrument can be found in [15]. the instrument has already been used for a number of different applications, e.g. paintings [18]-[21], illuminated manuscripts [22], coins [23], ceramics [24], and furniture [25]. 2.2. the digital radiography set-up structural information on artworks can be obtained by a radiographic approach. although a radiograph could in principle be acquired using the same x-ray tube employed in the present infn-chnet ma-xrf apparatus, for future applications a modified version with a different source will be considered. considering the larger source-object distance needed for radiography compared with xrf mapping, and the thickness of the artworks to be penetrated, an x-ray tube with a slightly higher voltage and power was used. in particular, the measurements were made with a moxtek© 60 kv x-ray tube (1 ma maximum anode current, 12 w maximum power, 0.4 mm diameter nominal focal spot size, rh anode). if not collimated, it generates a beam of 20 cm diameter at a distance of about 25 cm. as the present ma-xrf apparatus only has a 5 cm z travel range, the future version will be capable of a z translation of up to 30 cm, to avoid handling the artwork between xrf and dr measurements. for x-ray imaging, a shad-o-box hs detector (teledyne, model 6k) was selected.
the detector has a large active area (11.4 cm × 14.6 cm) that is fully covered by the x-ray beam at 25 cm from the source; the pixel size is 49.5 µm and the maximum integration time is 65 s. the video signal is digitised to 14 bits, reassembled within the camera’s fpga, and then transferred to a computer via a high-speed gigabit ethernet interface. the cmos sensor inside the detector contains a direct-contact csi scintillator, which converts x-ray photons into visible light that is sensed by the cmos photodiodes. a thin graphite cover protects the sensor from accidental damage as well as from ambient light. the shad-o-box hs camera also contains lead and steel shielding to protect its electronics from x-ray radiation. the camera is sensitive to x-ray energies as low as 15 kev and may be used with generators up to 225 kvp. the detector, which has already been used for x-ray imaging with conventional tubes [26], is part of the nexto project, whose aim is to integrate ma-xrf, dr and x-ray luminescence (xrl) [27] in a single portable instrument. figure 1. infn-chnet ma-xrf scanner placed in front of the panel painting madonna e i santi by cristoforo roncalli, known as il pomarancio. 3. applications at the ccr “la venaria reale” in this section, different applications of the instrumentation on paintings are presented. the works of art are case studies from central italian regions and different periods (from the beginning of the 16th to the beginning of the 17th century). they were analysed during conservation processes carried out at the ccr “la venaria reale” [28]. for the ma-xrf measurements, a collimator of 800 µm diameter was used. the vertical step was set to 1 mm and the scanning speed to 3 mm/s. furthermore, the keyence ia-100 telemeter was switched on to maintain the sample distance during the scanning process. 3.1.
madonna di san rocco by francesco sparapane the first painting presented is the oil on panel madonna di san rocco by francesco sparapane (preci, umbria region, 1530 ca.), depicting the virgin with the child, saint antonio of padua and saint rocco. the importance of the work of art is related to the lack of documented paintings by the author; its study therefore represents a key step for understanding the painting technique of the artist [29]. the ma-xrf measurements were conducted on two areas, as shown in figure 2, from which a number of maps were created. the source voltage was set to 30 kv and its anode current to 20 µa. the maps around saint rocco (figure 3) show the use of lead white, most likely due to the imprimatur layer and as a proper pigment in the flesh tones. from the map of copper, the green part of the hat was realised with copper-based compounds [30]. the presence of tin is also detected in this same region, although in moderate amounts, and might be due to the use of lead-tin yellow mixed with the copper-based pigment. a more precise identification of the material cannot be made with the xrf technique: for instance, it is not possible to distinguish a mixture of tin-based yellow with malachite from one with azurite [31]. the shadows of the flesh tones were realised with ochre-earths, as can be inferred from the match between the iron and manganese maps [32]. corresponding to the red tone of the cheeks, a high signal of mercury is present, most likely due to the use of vermilion-cinnabar (hgs) [33]. furthermore, calcium is present in the strings of the hat, the dark strips and the eyes, which may indicate the use of bone black for darkening [34], while manganese in the same areas may indicate the use of manganese black [35]. the halo was made with gold (figure 3), while the corresponding presence of calcium and iron is probably due to a calcium/iron-based preparation, as discussed in [36].
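the element-to-pigment reasoning used above (mercury suggesting vermilion-cinnabar, iron with manganese suggesting ochre-earths, and so on) can be sketched as a simple lookup. the table and helper below are illustrative only, drawn from the associations stated in the text; they are not part of the infn-chnet software, and, as the text stresses, xrf alone cannot confirm any of these hypotheses.

```python
# candidate pigments keyed by the set of detected elements that would
# support them, following the associations discussed in the paper.
CANDIDATES = {
    frozenset({"Pb"}): "lead white (imprimatur / flesh tones)",
    frozenset({"Pb", "Sn"}): "lead-tin yellow",
    frozenset({"Hg"}): "vermilion-cinnabar (HgS)",
    frozenset({"Cu"}): "copper-based green/blue (azurite or malachite)",
    frozenset({"Fe", "Mn"}): "ochre-earths (shadows)",
    frozenset({"Co", "K", "Bi"}): "smalt (Bi impurities if produced after 1520)",
    frozenset({"Ca"}): "calcium-based: bone black or lime white",
}

def pigment_hypotheses(elements):
    """return the candidate pigments whose key elements were all detected."""
    found = set(elements)
    return [name for key, name in CANDIDATES.items() if key <= found]

# e.g. a flesh-tone pixel group with mercury, iron and manganese signals
hyps = pigment_hypotheses({"Fe", "Mn", "Hg"})
```

a real identification would additionally weigh signal intensities, line overlaps and layer stratigraphy, which is why the paper defers firm conclusions to complementary techniques.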
the second area, around the upper part of the head of s. antonio (figure 4), presents similar results. however, a marked difference is related to the sky, which is made with a copper-based compound (most likely azurite [31]) with a glaze realised with smalt, a material rarely used as a pigment in the 15th century and which became widespread from the 17th century onwards. briefly, smalt is a blue potash glass (thus characterised also by the presence of potassium and aluminium) in which the chromophore is cobalt; it usually contains impurities, among others of bismuth, when produced after 1520 [37]. its presence is thus hypothesised from the maps of the corresponding elements. a similar palette was probably used for the sky in the first area; however, due to the poor conservation condition, only traces of the characteristic elements are present in the maps of copper and cobalt. 3.2. madonna con bambino e santi by pomarancio the oil on canvas madonna con bambino e santi by cristoforo roncalli, known as il pomarancio, was made in the first decade of the 17th century and is located in the santa maria argentea church in norcia (umbria region, italy). the virgin and the child are depicted with the saints eutizio, fiorenzo, santolo, and spes. figure 2. painting madonna con bambino e s. antonio e s. rocco by francesco sparapane. the scanned areas are indicated in the white boxes. figure 3. ma-xrf maps of area 1 of the madonna di san rocco by francesco sparapane (size 280 mm × 70 mm). figure 4. ma-xrf maps of area 2 of the madonna di san rocco by francesco sparapane (size 120 mm × 70 mm). the focus of the analysis was the painting palette used by the author in the flesh tones [38], of which two representative areas were scanned, as shown in figure 5. the source voltage was set to 28 kv and its anode current to 20 µa.
the maps realised in the first area (figure 6) show the use of lead white for the flesh tones and the book, while a high signal of iron is present corresponding to the shadows. the blue cope of saint fiorenzo shows an intense signal of copper, due to a copper-based compound, leading to the hypothesis of azurite [31]. the darkest colour is related to a high signal of calcium, which, by means of the xrf technique alone, cannot lead to a precise hypothesis on the material used. furthermore, the presence of tin was detected in the squiggles, as well as a higher intensity of lead, most likely due to the use of lead-tin yellow [32]. the second area (figure 7) shows a different composition: the map of mercury matches the hand, leading to the hypothesis of vermilion-cinnabar for the glove. as opposed to the previous area, the map of iron does not show an intense signal in the hand of saint spes. the main signal of iron comes from the stick and from the cope, in correspondence with the yellow colour. by comparing the maps of iron, manganese, mercury, and tin, it can be noted that all of them are present in the crosier, with tin and mercury in the highlights, whereas manganese and iron are in the shadows. this result can be explained by the use of vermilion-cinnabar mixed with lead-tin yellow [32] in the highlights, and the use of ochre-earths [32] in the shading. iron, manganese, and tin are also present in the yellow cope. furthermore, iron and manganese are present in the green medallion, which presents a strong signal of copper, related to copper-based pigments. from the map of copper, it may be seen that all the green colours in the area are related to its presence. however, as for the hat of saint rocco seen in the previous section, it is not possible to draw a conclusion on the material used. 3.3. adorazione dei magi by sante peranda the oil on canvas adorazione dei magi by sante peranda (figure 8) is dated to around the first decade of the 17th century.
in this case the interest was focused on the blue colours. the measurements were conducted in three areas: one on the robe of the virgin, one behind the magus on the far left wearing the white dress, and the last one behind the kneeling magus. the composition detected is different for each area. the source voltage was set to 28 kv and its anode current to 30 µa. figure 5. painting madonna con bambino e i santi by il pomarancio. the scanned areas are indicated in white boxes. the saints are, from left, s. eutizio, s. fiorenzo, s. santolo, and s. spes. figure 6. maps of area 1 of madonna con bambino e i santi by il pomarancio (size 130 mm × 110 mm). figure 7. ma-xrf maps of area 2 of madonna con bambino e i santi by il pomarancio (size 150 mm × 137 mm). in the first area, the virgin’s robe (figure 9), cobalt is present. as in the previous sections, this may suggest the use of smalt as blue pigment [37]. the match between the cobalt and the silicon maps most likely indicates the use of such blue glass in this area. furthermore, the localised lack of these two elements in the area is related to a conservation intervention with a titanium-based material [32]. in the second area, shown in figure 10, the composition of the blue is similar to the previous one, despite the presence of a significant iron signal, probably due to the use of ochre-earths for the shading [34]. however, with this technique alone, a later retouch with prussian blue cannot be excluded [39]. besides the map of cobalt, the map of bismuth is also reported, to confirm the hypothesis of smalt and, consequently, the dating of the painting [34]. moreover, it can be seen that the lα-line of lead (10.55 kev) is detected in the whole area, whereas the m-lines (2.34 kev) are present only in the robe.
this is due to the different absorption at different x-ray energies (the lower the energy, the higher the absorption); the comparison of the two maps therefore suggests that lead white was used for the imprimatur, as well as for the white robe of the magus on the far left. a different composition is detected in the last blue area (figure 11). in this case a strong signal of copper is present, whereas no cobalt was detected. for this reason, conversely to the previous cases, the use of azurite can be hypothesised for the blue tone in this area. the map of copper also clearly shows its presence beneath the yellow robe, probably due to a pentimento in the back of the kneeling magus. this hypothesis is also supported by the maps of lead, in which its use on top of the copper can be hypothesised from the detection of the m-line only in the region of the robe, whereas the rest of the area shows only the l-lines of lead. the yellow robe shows the presence of iron and lead, which may suggest a combined use of lead white and yellow ochre-earths [31]. the hair of the servant present in the area shows a signal of iron and manganese, probably due to the employment of ochre-earths. 3.4. madonna con bambino ed i santi crescentino e donnino by timoteo viti the last painting presented is the madonna con bambino ed i santi crescentino e donnino by timoteo viti (figure 12), dated between 1500 and 1510. the work is a tempera on canvas, and its size is 168 cm × 165 cm. the painting presented poor conservation conditions in the areas around the faces of the virgin and the child. the painting technique is tempera magra [40], in which the binder tends to be absorbed by the preparatory layer. moreover, the application of a protective varnish was not envisaged, leaving the paint in direct contact with the external environment. figure 8. adorazione dei magi by sante peranda. the scanned areas are indicated in white boxes. figure 9.
maps of area 1 in the robe of the virgin in figure 8 (size 100 mm × 100 mm). figure 10. maps of area 2 in the robe of the magus wearing the white dress in figure 8 (size 85 mm × 40 mm). figure 11. maps of area 3 in the blue robe behind the kneeling magus in figure 8 (size 70 mm × 65 mm). the source voltage was set to 28 kv and the anode current to 30 µa. the maps of the area around the child’s head are presented in figure 14. as can be seen, the flesh tones are characterised by a strong signal of calcium, whereas no evidence of the use of lead was detected. moreover, the shading was most likely made with ochre-earths, according to the map of iron. in addition, the highlights of the mouth and the cheeks show a signal of mercury, most likely due to the use of cinnabar-vermilion. the high presence of calcium can be explained by the use of the white of san giovanni (lime white) pigment or other calcium-based compounds [41]. furthermore, by building a spectrum over the area of the face and comparing it with a spectrum obtained outside it (figure 13), a higher intensity of the 2.0 kev line relative to the kα line of calcium can be noted, which can be explained by the presence of phosphorus in the flesh tone. this may be due to bone black, a pigment used for shading [34]. the signal of lead is present in the hair of the child. furthermore, the match of the spatial distribution of tin with that of lead may indicate the use of lead-tin yellow there. the landscape of the background was realised with copper-based compounds mixed with ochre-earths, while the halo was realised with gold. in addition to the ma-xrf measurements, a radiographic investigation was carried out in the same area using the set-up described in section 2.2. the voltage was set to 20 kv, the anode current to 0.6 ma and the integration time to 2 seconds.
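the stratigraphic argument used for the lead maps of the adorazione dei magi (section 3.3) rests on the strong energy dependence of absorption: a covering paint layer transmits the 10.55 kev lα-line of lead but strongly absorbs the 2.34 kev m-lines. a minimal beer-lambert sketch makes this quantitative; the attenuation coefficients and layer mass thickness below are illustrative assumed values, not tabulated or measured data.

```python
import math

def transmission(mu_rho_cm2_per_g, areal_density_g_per_cm2):
    """beer-lambert transmission i/i0 = exp(-(mu/rho) * rho * t)
    through a layer of given mass thickness."""
    return math.exp(-mu_rho_cm2_per_g * areal_density_g_per_cm2)

# illustrative (assumed) mass attenuation coefficients of an overlying
# paint layer at the two lead lines discussed in the text
mu_L = 130.0    # cm^2/g at the pb l-alpha line, 10.55 kev (assumed)
mu_M = 1500.0   # cm^2/g at the pb m-lines, 2.34 kev (assumed)
layer = 0.004   # g/cm^2, e.g. ~20 um of paint at ~2 g/cm^3 (assumed)

t_L = transmission(mu_L, layer)  # high-energy l-line largely escapes
t_M = transmission(mu_M, layer)  # low-energy m-line is almost fully absorbed
```

with these numbers the l-line transmission is of order 60 % while the m-line transmission drops below 1 %, so a buried lead layer still shows its l-lines but not its m-lines, which is exactly how the paper infers lead white beneath the surface paint.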
as shown in figure 15, for example in jesus christ’s hair, the image is more detailed than the ma-xrf map: this allows the visualisation of the warp and weft threads of the canvas. moreover, the radiography can be observed to match the ma-xrf map distributions of the heavy metals (pb, sn, au, cu), and only partially the distribution of calcium. this result is due to the very small thickness of the painting layer, typical of the tempera magra painting technique. figure 12. madonna con bambino e i santi crescentino e donnino by timoteo viti. the results presented are from the area in the white box. figure 13. comparison between two spectra, one obtained selecting an area inside the face (black) and one outside (red). figure 14. ma-xrf maps of the area around the face of jesus christ in the madonna ed i santi crescentini e donnino painting (size 140 mm × 110 mm). figure 15. radiography of the area around jesus christ’s face. 4. conclusions the infn-chnet ma-xrf scanner was applied to four italian paintings at the ccr “la venaria reale”. for each application, different questions were raised during the conservation processes, and the described analyses provided important information on the painting layers. in the madonna di san rocco by francesco sparapane, the composition of the flesh tones of s. antonio and s. rocco was identified, even though a definite composition cannot be measured, due to the inability of the xrf technique to detect elements lighter than sodium. a similar conclusion was reached for other parts of the painting (the sky and s. rocco’s clothes). in the madonna con bambino e i santi by il pomarancio, the wide painting palette (lead white, lead-tin yellow, cinnabar-vermilion, copper-based compounds) was measured, confirming the skills of the author. for the adorazione dei magi by sante peranda, the blue colours in the areas under study have shown different compositions.
it is worth noting that a more precise identification of the materials employed in the painting layers is not possible with the xrf technique alone; further investigation with other techniques, such as fibre optics reflectance spectroscopy (fors) or raman spectroscopy, is needed. in the last painting, the presence of a calcium-based white in the face of the child was detected. however, no signal of lead is present in that area, whereas it is present in the background. in addition, dr was conducted in the same area using a new set-up that was proven suitable to be combined with xrf in a single instrument. the test carried out at the ccr “la venaria reale” is the first step towards the development of this multi-technique device. its complete realisation will rely on the expertise of the infn-chnet group, which has already achieved several important technological results in heritage science applications [42]-[46]. acknowledgements this project has received funding from compagnia di san paolo (nexto project, progetto di ateneo 2017) and the infn-chnet project. this project has received funding from the european union’s horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 754511 (phd technologies driven sciences: technologies for cultural heritage – t4c). the authors wish to warmly thank marco manetti of infn-fi for his invaluable technical support, and the students giulia corrada, francesca erbetta, daniele dutto, giulia dilecce, their supervisors, prof.ssa gianna ferraris di celle, prof. alessandro gatti, prof. bernadette ventura, prof. antonio iaccarino idelson, and the staff of the ccr “la venaria reale” for their support. references [1] v. gonzalez, m. cotte, f. vanmeert, w. de nolf, k. janssens, x-ray diffraction mapping for cultural heritage science: a review of experimental configurations and applications, chem. eur. j., 2020, 26, 1703. doi: 10.1002/chem.201903284 [2] m. p. morigi, f.
casali, radiography and computed tomography for works of art, in handbook of x-ray imaging: physics and technology, editor p. russo, boca raton: crc press, 2018, pp. 1185-1210. [3] f. p. romano, c. caliri, p. nicotra, s. di martino, l. pappalardo, f. rizzo, h. c. santos, real-time elemental imaging of large dimension paintings with a novel mobile macro x-ray fluorescence (ma-xrf) scanning technique, j. anal. atomic spectrom., 32 (4) (2017), pp. 773-781. doi: 10.1039/c6ja00439c [4] p. ricciardi, s. legrand, g. bertolotti, k. janssens, macro x-ray fluorescence (ma-xrf) scanning of illuminated manuscript fragments: potentialities and challenges, microchemical journal, issn 0026-265x, volume 124, 2016, pp. 785-791. doi: 10.1016/j.microc.2015.10.020 [5] m. alfeld, k. janssens, j. dik, w. de nolf, g. van der snickt, optimization of mobile scanning macro-xrf systems for the in situ investigation of historical paintings, j. anal. at. spectrom., 26 (2011), 899-909. doi: 10.1039/c0ja00257g [6] m. alfeld, j. vaz pedroso, m. van eikema hommes, g. van der snickt, g. tauber, j. blass, m. haschke, k. erler, j. dik, k. janssens, a mobile instrument for in situ scanning macro-xrf investigation of historical paintings, journal of analytical atomic spectrometry, 28, 2013, 760-767. doi: 10.1039/c3ja30341a [7] e. pouyet, n. barbi, h. chopp, o. healy, a. katsaggelos, s. moak, r. mott, m. vermeulen, m. walton, development of a highly mobile and versatile large ma‐xrf scanner for in situ analyses of painted work of arts, x‐ray spectrom. 2020; 1–9. doi: 10.1002/xrs.3173 [8] s. a. lins, e. di francia, s. grassini, g. gigante, s. ridolfi, maxrf measurement for corrosion assessment on bronze artefacts, 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage, metroarchaeo 2019, 2019, pp. 538-542. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo2019/imeko-tc4-metroarchaeo-2019-105.pdf [9] e. ravaud, l. pichon, e. laval, v. 
gonzalez, development of a versatile xrf scanner for the elemental imaging of paintworks, appl. phys. a 122, 17 (2016). doi: 10.1007/s00339-015-9522-4
[10] j. lang, a. middleton, radiography of cultural material, 2nd edition, elsevier, oxford, 2005, isbn 978-0-08-045560-0
[11] d. graham, t. eddie, x-ray techniques in art galleries and museums, a. hilger (editor), bristol, 1985, isbn 10: 0852747829
[12] m. alfeld, j. a. c. broekaert, mobile depth profiling and sub-surface imaging techniques for historical paintings – a review, spectrochimica acta part b 88, 2013, pp. 211-230. doi: 10.1016/j.sab.2013.07.009
[13] m. alfeld, l. de viguerie, recent developments in spectroscopic imaging techniques for historical paintings – a review, spectrochimica acta part b, 2017, pp. 81-105. doi: 10.1016/j.sab.2017.08.003
[14] a. shugar, j. j. chen, a. jehle, x-radiography of cultural heritage using handheld xrf spectrometers, x-ray spectrom. 21 (2017), pp. 311-318. doi: 10.1002/xrs.2947
[15] f. taccetti, l. castelli, c. czelusniak, n. gelli, a. mazzinghi, l. palla, c. ruberto, c. censori, a. lo giudice, a. re, d. zafiropulos, f. arneodo, v. conicella, a. di giovanni, r. torres, f. castella, n. mastrangelo, d. gallegos, m. tascon, f. marte, l. giuntini, a multi-purpose x-ray fluorescence scanner developed for in situ analysis, rendiconti lincei, scienze fisiche e naturali, 2019, 30:307-322. doi: 10.1007/s12210-018-0756-x
[16] centro conservazione e restauro la venaria reale, online [accessed 11 march 2022] www.centrorestaurovenaria.it/en
[17] cultural heritage network of the italian national institute for nuclear physics, online [accessed 11 march 2022] http://chnet.infn.it/en/who-we-are-2/
[18] c. ruberto, a. mazzinghi, m. massi, l. castelli, c. czelusniak, l. palla, n. gelli, m. bettuzzi, a. impallaria, r. brancaccio, e. peccenini, m. raffaelli, imaging study of raffaello's “la muta” by a portable xrf spectrometer, microchemical journal, 2016, volume 126, pp. 63-69.
doi: 10.1016/j.microc.2015.11.037
acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 8
[19] m. vadrucci, a. mazzinghi, b. sorrentino, s. falzone, characterisation of ancient roman wall-painting fragments using non-destructive iba and ma-xrf techniques, x-ray spectrom. 2020; 49: pp. 668-678. doi: 10.1002/xrs.3178
[20] a. dal fovo, a. mazzinghi, s. omarini, e. pampaloni, c. ruberto, j. striova, r. fontana, non-invasive mapping methods for pigments analysis of roman mural paintings, journal of cultural heritage, volume 43, 2020, pp. 311-318. doi: 10.1016/j.culher.2019.12.002
[21] a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, p. a. mandò, f. taccetti, ma-xrf for the characterisation of the painting materials and technique of the entombment of christ by rogier van der weyden, applied sciences, 2021; 11(13):6151. doi: 10.3390/app11136151
[22] a. mazzinghi, c. ruberto, l. castelli, p. ricciardi, c. czelusniak, l. giuntini, p. a. mandò, m. manetti, l. palla, f. taccetti, the importance of being little: ma-xrf on manuscripts on a venetian island, x-ray spectrom, 2020; pp. 1-7. doi: 10.1002/xrs.3181
[23] v. lazic, m.
vadrucci, f. fantoni, m. chiari, a. mazzinghi, a. gorghinian, applications of laser-induced breakdown spectroscopy for cultural heritage: a comparison with x-ray fluorescence and particle induced x-ray emission techniques, spectrochimica acta part b: atomic spectroscopy, volume 149, 2018, pp. 1-14. doi: 10.1016/j.sab.2018.07.012
[24] s. m. e. mangani, a. mazzinghi, p. a. mandò, s. legnaioli, m. chiari, characterisation of decoration and glazing materials of late 19th-early 20th century french porcelain and fine earthenware enamels: a preliminary non-invasive study, eur. phys. j. plus, 2021, 136 (10). doi: 10.1140/epjp/s13360-021-02055-x
[25] l. sottili, l. guidorzi, a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, m. massi, f. taccetti, m. nervo, s. de blasi, r. torres, f. arneodo, a. re, a. lo giudice, the importance of being versatile: infn-chnet ma-xrf scanner on furniture at the ccr “la venaria reale”, applied sciences 2021;11(3):1197. doi: 10.3390/app11031197
[26] l. vigorelli, a. lo giudice, t. cavaleri, p. buscaglia, m. nervo, p. del vesco, m. borla, s. grassini, a. re, upgrade of the x-ray imaging set-up at ccr “la venaria reale”: the case study of an egyptian wooden statuette, proceedings of 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, october 22-24, 2020. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-118.pdf
[27] a. re, m. zangirolami, d. angelici, a. borghi, e. costa, r. giustetto, l. m. gallo, l. castelli, a. mazzinghi, c. ruberto, f. taccetti, a. lo giudice, towards a portable x-ray luminescence instrument for applications in the cultural heritage field, eur. phys. j. plus, 133 (2018), 362. doi: 10.1140/epjp/i2018-12222-8
[28] l. sottili, l. guidorzi, a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, m. massi, f. taccetti, m. nervo, a. re, a.
lo giudice, infn-chnet meets ccr la venaria reale: first results, 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage, 2020, pp. 507-511. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-096.pdf
[29] d. dutto, la madonna di san rocco di francesco sparapane: problemi conservativi e intervento di restauro di un dipinto su tavola del xvi secolo proveniente dalla valnerina, msc thesis, master’s degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2018
[30] e. nicholas, pigment compendium: a dictionary and optical microscopy of historical pigments, amsterdam: butterworth-heinemann (editor), 2008.
[31] artists’ pigments: a handbook of their history and characteristics, vol. 2, editor ashok roy, national gallery of art, washington, archetype publications, london, 1993
[32] c. seccaroni, p. moioli, fluorescenza x: prontuario per l’analisi xrf portatile applicata a superfici policrome, nardini editore, firenze, 2002.
[33] r. j. gettens, r. l. feller, w. t. chase, vermilion and cinnabar, studies in conservation 17, no. 2, 1972, 45-69. doi: 10.2307/1505572
[34] artists’ pigments: a handbook of their history and characteristics, vol. 4, editor b. h. berrie, national gallery of art, washington, archetype publications, london, 2007
[35] m. spring, r. grout, r. white, 'black earths': a study of unusual black and dark grey pigments used by artists in the sixteenth century, national gallery technical bulletin, 2003, 24, pp. 96-114. online [accessed 11 march 2022] https://www.jstor.org/stable/42616306
[36] i. c. a. sandu, l. u. afonso, e. murta, m. h. de sa, gilding techniques in religious art between east and west, 14th-18th centuries, int. j. of conserv. sci., 1 (2010), pp. 47-62.
[37] b. h. berrie, mining for color: new blues, yellows, and translucent paint, early science and medicine, volume 20, issue 4-6, 2015, 308-334.
doi: 10.1163/15733823-02046p02
[38] f. erbetta, restaurare dopo il terremoto: il dipinto olio su tela madonna con bambino e santi del pomarancio dalla chiesa di santa maria argentea di norcia, msc thesis, master’s degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2018
[39] j. kirby, d. saunders, fading and colour change of prussian blue: methods of manufacture and the influence of extenders, natl gallery tech bull. 2004, 25: 73-99.
[40] g. corrada, studio interdisciplinare del dipinto a tempera magra su tela madonna con bambino e i santi crescentino e donnino, timoteo viti, msc thesis, master’s degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2013
[41] s. rinaldi, la fabbrica dei colori: pigmenti e coloranti nella pittura e nella tintoria, roma, il bagatto, 1986
[42] l. guidorzi, f. fantino, e. durisi, m. ferrero, a. re, l. vigorelli, l. visca, m. gulmini, g. dughera, g. giraudo, d. angelici, e. panero, a. lo giudice, age determination and authentication of ceramics: advancements in the thermoluminescence dating laboratory in torino (italy), acta imeko, vol 10 (2021), no 1, pp. 32-36. doi: 10.21014/acta_imeko.v10i1.813
[43] e. di francia, s. grassini, g. ettore gigante, s. ridolfi, s. a. barcellos lins, characterisation of corrosion products on copper-based artefacts: potential of ma-xrf measurement, acta imeko, vol 10 (2021), no 1, pp. 136-141. doi: 10.21014/acta_imeko.v10i1.859
[44] a. impallaria, f. evangelisti, f. petrucci, f. tisato, l. castelli, f. taccetti, a new scanner for in situ digital radiography of paintings, applied physics a, 122, 12, 2016. doi: 10.1007/s00339-016-0579-5
[45] c. czelusniak, l. palla, m. massi, l. carraresi, l. giuntini, a. re, a. lo giudice, g. pratesi, a. mazzinghi, c. ruberto, l. castelli, m. e. fedi, l. liccioli, a. gueli, p. a. mandò, f.
taccetti, preliminary results on time-resolved ion beam induced luminescence applied to the provenance study of lapis lazuli, nucl. instrum. methods phys. res. b 2016, 371, 336-339. doi: 10.1016/j.nimb.2015.10.053
[46] l. palla, c. czelusniak, f. taccetti, l. carraresi, l. castelli, m. e. fedi, l. giuntini, p. r. maurenzig, l. sottili, n. taccetti, accurate on line measurements of low fluences of charged particles, eur. phys. j. plus 2015, 130. doi: 10.1140/epjp/i2015-15039-y

comparison of machine learning techniques for soc and soh evaluation from impedance data of an aged lithium ion battery

acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, pp. 80-87

davide aloisio1, giuseppe campobello2, salvatore gianluca leonardi1, francesco sergi1, giovanni brunaccini1, marco ferraro1, vincenzo
antonucci1, antonino segreto2, nicola donato2

1 institute of advanced energy technologies “nicola giordano”, national research council of italy, salita s. lucia sopra contesse 5, 98126 messina, italy
2 university of messina, department of engineering, c.da di dio, vill. s. agata, 98166 messina, italy

section: research paper
keywords: machine learning; electrochemical impedance spectroscopy (eis); lithium-ion battery; state of charge; state of health
citation: davide aloisio, giuseppe campobello, salvatore gianluca leonardi, francesco sergi, giovanni brunaccini, marco ferraro, vincenzo antonucci, antonino segreto, nicola donato, comparison of machine learning techniques for soc and soh evaluation from impedance data of an aged lithium ion battery, acta imeko, vol. 10, no. 2, article 12, june 2021, identifier: imeko-acta-10 (2021)-02-12
section editor: ciro spataro, university of palermo, italy
received january 18, 2021; in final form april 29, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was funded by the italian ministry of economic development under the programme “ricerca di sistema”, project electrochemical storage.
corresponding author: davide aloisio, e-mail: aloisio@itae.cnr.it

1. introduction

as is well known, machine learning (ml) is a subfield of computer science and an artificial intelligence (ai) technique that provides machines with the ability to learn from field data without explicit programming [1]. in particular, ml can be really useful in applications that aim to extract information or unknown properties (‘features’) from a dataset (usually called the ‘training set’) coming from data warehouses or data lakes.
information extracted from this kind of data analysis can be used to develop prediction models of system behaviour (subject to certain operative conditions and under some constraints). in particular, battery behaviour is quite complex to describe through analytical models, mainly because many parameters act in determining the ageing evolution (e.g. charge and discharge current rates, operative temperature, depth of discharge (dod) reached, state of charge (soc) during rest periods and so on). the combination of the aforementioned parameters therefore makes these systems hard to model via analytical equations. this is particularly evident for li-ion batteries, for which it is more difficult to describe the electrochemical processes with analytical equations, due to the nonlinearities present in their behaviour. analytical models require, in addition to input data on the actual working conditions (current, temperature, etc.), the knowledge of many parameters (geometry, density and porosity of materials, etc.). these data are not always available or easily measurable and can vary over time (e.g. due to ageing). therefore, analytical models can be affected by inaccuracy.

abstract

state of charge estimation and ageing evolution of lithium ion (li-ion) batteries are key points for their massive application in the market. however, battery behaviour is very complex to understand because many parameters act in determining the ageing evolution. therefore, traditional analytical models employed for this purpose are often affected by inaccuracy. in this context, machine learning techniques can provide a viable alternative to traditional models and a useful tool to characterise battery behaviour. in this work, different machine learning techniques were applied to model the impedance evolution over time of an aged cobalt-based li-ion battery, cycled under a stationary frequency regulation profile for grid application.
the different ml techniques were compared in terms of accuracy in determining the state of charge and the state of health over the battery ageing process. experimental results showed that ml based on the random forest algorithm can be profitably used for this purpose.

in this context, ml techniques represent a viable alternative and a useful tool for modelling battery behaviour. ml algorithms learn directly from experimental data, reducing the complexity of modelling, which is usually due to the high number of parameters and empirical adjustments needed. in addition, according to the recent literature, the application of ml techniques to the prediction of the ageing of li-ion batteries shows errors in the range between 0.5% and 5.5% [2]-[4]. this range of accuracy is considered a good compromise among algorithm complexity, effort spent on model development and reliability of results. many physical and electrical parameters are characteristic of the chemical reactions inside li-ion batteries; therefore, these relations could be used as tools for battery state modelling [5], [6]. typically, these features are derived from charging and discharging curves, since typical battery management systems (bms) are able to collect current-voltage data. hence, these are the most commonly used parameters for real-time battery monitoring [7], [8]. various approaches based on the use of different parameters have been proposed in the literature to train machine learning models. among battery parameters, single-point terminal voltage, current, temperature, charge/discharge profiles [2], [9], [10] or their geometrical characteristics [11] have been employed for this purpose. however, much more information about the status of the battery can be extracted from the impedance spectra recorded by means of electrochemical impedance spectroscopy (eis) [12].
indeed, the impedance spectrum of a lithium cell contains rich information on all material properties, interfacial phenomena and electrochemical reactions. from a practical point of view, much of this information can be extracted from the nyquist diagram, in which the negative imaginary part of the impedance is plotted against the real part for each investigated frequency. in the case of li-ion batteries, the nyquist diagram consists of four distinct regions, typically belonging to the frequency range between 10 mhz and 10 khz [13]. in the low frequency region, an almost linear trend in the nyquist plot is representative of the solid-state diffusion of lithium ions through the electrode material. in the medium-high frequency range, one or more semicircles usually represent the impedance of either charge transfer phenomena or passivation layers on the electrode surface (solid electrolyte interphase, sei). the intersection of the impedance spectrum with the real axis (pure ohmic impedance) represents the cell internal resistance. finally, the high frequency region is representative of inductive phenomena. since each of these phenomena is closely related to the temperature, the soc and the state of health (soh) of the cell, the analysis of the impedance data can be used to monitor the status of the battery [14]. however, due to the large number of data involved in a single eis spectrum and the amount of information it can contain, the use of conventional data analysis methods may be difficult. also, because of the difficulties in measuring the impedance while the cells are active, eis is not widely used [15]. to overcome this drawback, increasing attention is being paid to the implementation of ml approaches, either to aid the fitting of the parameters of equivalent circuits able to describe the battery impedance [16], or to directly model the entire impedance spectrum [17].
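as a rough illustration of the four nyquist regions described above, a simplified randles-type equivalent circuit can be evaluated over the same frequency span used in this work. this is a sketch only: the helper name `randles_impedance` and all component values are ours and are not fitted to the cell studied here.

```python
import math

def randles_impedance(f, r0=0.05, rct=0.02, cdl=1.0, sigma=0.002):
    """simplified randles model: ohmic resistance r0 in series with a
    charge-transfer resistance rct in parallel with a double-layer
    capacitance cdl, plus a warburg term (sigma) for low-frequency
    diffusion. component values are illustrative, not fitted data."""
    w = 2 * math.pi * f
    z_ct = rct / (1 + 1j * w * rct * cdl)      # mid-frequency semicircle
    z_w = sigma * (1 - 1j) / math.sqrt(w)      # low-frequency diffusion tail
    return r0 + z_ct + z_w

# frequency sweep as in the paper: 10 mhz to 10 khz, ten points per decade
freqs = [10 ** (e / 10) for e in range(-20, 41)]   # 61 values
spectrum = [(f, randles_impedance(f)) for f in freqs]

# nyquist convention: -im(z) is plotted against re(z)
for f, z in spectrum[::20]:
    print(f"{f:10.3g} hz  re = {z.real:.4f} ohm   -im = {-z.imag:.4f} ohm")
```

at the high-frequency end the spectrum collapses onto the ohmic resistance r0, while towards low frequency the diffusion contribution grows, mirroring the regions listed above.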
in this paper, ml is employed to identify possible methodologies to estimate the soc and soh of a li-ion battery from eis data, mainly aiming at developing a feasible model that is easy to integrate in a battery management system (bms). the implementation in bmss of techniques able to extend the useful life of batteries, estimating the possible replacement time (estimation of remaining useful life, or rul), is considered a key research activity in the field [18]. in section 2, some state-of-the-art ml techniques applied to soh and rul estimation are reviewed. section 3 describes the experimental procedures employed to age the li-ion cell, the main parameters extrapolated to create the dataset for the algorithm, and the methods for their collection. section 4 describes the methodology used to carry out the first selection of ml algorithms and the validation of the model. section 5 presents the main results related to the use of different classifiers to model both the soc and the capacity loss of the li-ion cell. finally, in section 6, the main observations are summarised.

2. ml algorithms for state of health (soh) and remaining useful life (rul) evaluation: a brief review

thanks to the remarkable computational capabilities of today’s systems, learning algorithms applied to large quantities of data have often become the preferred approach in the search for and identification of complex system behaviour, and therefore represent a valid tool for the soh estimation of batteries. in these techniques, a large amount of data, constituted by the main battery parameters, is collected continuously up to the end of the battery’s life. the analysis of this battery-life dataset, performed by learning algorithms, allows non-linear relationships among the various parameters to be extracted.
the knowledge derived from this kind of information can allow a careful management of the battery, helping to extend its useful life and giving reliable predictions on possible replacement times, with an obvious positive impact on costs and investments. ml techniques such as fuzzy logic (fl), support vector machines (svm) and artificial neural networks (ann) have been extensively applied to the estimation of the health of batteries, and a brief review can be found in [3]. in most cases, soh is estimated by determining the battery capacity and internal resistance, parameters strictly related to soh, from the analysis of the behaviour of the input variables (current, temperature, voltage, etc.). an application of fuzzy logic with a potential use in portable devices is reported in [19], where the electrochemical impedance spectroscopy (eis) technique was used for the dataset creation. however, improper hypotheses in the fuzzy rules [3] and a reduced set of observations can lead to substantial errors. the support vector machine is a regression algorithm which maps nonlinearities in the original, lower-dimensional space to a linear model developed in a higher-dimensional one [20]. examples of the application of this technique to soh are reported in [21]-[25]. in particular, in [25] an online method for soh estimation was developed, determining the support vectors by means of pieces of charging curves; soh estimation with less than 2% error in 80% of all cases was achieved for commercial nmc li-ion batteries. the accuracy of the results is strongly dependent on noise and operational conditions; hence, other data manipulation techniques (particle filter, bayesian technique) have to be used in conjunction with svm to increase the robustness of the estimation [26], thus increasing the complexity of implementation. the relevance vector machine (rvm) is suggested as a possible improvement of this approach in [20].
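the svm mapping idea just described can be sketched with scikit-learn's `SVR`. everything below is a synthetic stand-in: the three features and the formula generating the "soh" values are invented purely to exercise the workflow, and the numbers do not reproduce the results of [21]-[25].

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# synthetic stand-in: three made-up features loosely playing the role of
# internal resistance and charging-curve statistics; soh in percent
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (400, 3))
soh = 100 - 20 * X[:, 0] - 5 * X[:, 1] ** 2 + rng.normal(0.0, 0.5, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, soh, random_state=0)

# the rbf kernel implicitly maps the nonlinear problem into a
# higher-dimensional space, where a linear epsilon-insensitive fit is made
model = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"soh mae = {mae:.2f} percentage points")
```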
artificial neural networks (anns), inspired by the biological functioning of the human brain, are probably the most widely used approach for modelling nonlinear systems. soh estimation using an independently recurrent neural network (indrnn) was realised in [3]; there, soh was predicted accurately, with a root mean square error (rmse) of 1.33% and a mean absolute error (mae) of 1.14%. the main limitation is the need for a detailed analysis of the experimental dataset: different chemistries can require a precise identification and understanding of the input parameters. in [27], an improved neural network method based on the combination of an lstm (long short-term memory) network and particle swarm optimisation (pso) was developed. the methodology proposed there uses additional techniques in each part of the learning process, such as pso for the optimisation of the weights, dynamic incremental learning for soh model updating, and the ceemdan method to denoise raw data, with the aim of increasing the accuracy of the model [27]. another hybrid approach can be found in [28], where the false nearest neighbour method was used in conjunction with a mixed lstm and convolutional neural network (cnn) as a solution for unreliable sliding window sizes, a problem commonly present in data-driven rul evaluation approaches. the complexity and topology of the anns used in these works are actually classified as deep learning, an evolution of the machine learning concept coined for neural networks which exploits the concept of the multilayer perceptron (mlp). a comparison of deep learning with other common techniques, showing the potential and advantages of data-driven approaches, was presented in [4]. the outcomes showed the effectiveness of deep neural networks (dnn), which are suitable when high accuracy is needed. however, this technique is not easy to implement, due to the higher computational complexity and resources needed [4].
many other techniques and approaches can be found in the literature. although a full survey is out of the scope of the present work, the goal of a possible implementation in a bms suggests the choice of low-complexity approaches, to reduce the computational resources needed and thus lead to lower energy consumption [29]. a possible alternative is given by random forest algorithms. they generally use reduced computational resources, and thus can be preferable in comparison to the other techniques analysed, based on svm and nn. in general, linear regressors and random forests respond faster than complex models and are easily interpreted. however, it has to be underlined that the accuracy of random forest models is related to the number and size of the trees, and therefore to the availability of memory [1], [30].

3. dataset collection and creation

the present work was aimed at the development of a method to identify the degradation level induced by the use of li-ion batteries in a primary frequency regulation (fr) service. more precisely, the activity was focused on the identification of the main parameters indicating the state of battery degradation. for this purpose, cylindrical-type 18650 li-ion cells (table 1) were cycle aged according to a test profile extrapolated from the standard iec 61427-2 [31]. the standard profile requires that the storage system is able to provide symmetrical charging and discharging phases at constant power of 500 kw and 1000 kw, respectively, within a voltage range of 400-600 v. the profile was therefore adapted to the single-cell characteristics. moreover, in order to enhance the degradation of the cell (thus limiting the overall duration required for data collection), the fr ageing tests were accelerated by operating at an ambient temperature of 45 °c; in fact, the degradation processes of li-ion batteries are accelerated by temperature increase [32]. the ageing tests were performed by a dual-channel bitrode ftv1 battery cycler.
in addition, the cell was tested under a temperature-controlled atmosphere in an angelantoni discovery dm 340 bt climatic chamber. the fr ageing profile with the actual power steps imposed on the cell is shown in figure 1. the full ageing protocol consisted of a first charge of the cell up to 100% soc, followed by the execution of the fr profile. once the cell reached the lower voltage cut-off threshold (discharged), a recharge up to 100% soc was performed and the cycle was then restarted. the ageing level was defined in terms of the residual capacity retained by the cell. this information was obtained from periodic check-ups carried out on the cell, approximately every 10 days of operation. the parametric check-ups of the cell comprised the extraction of the residual capacity and impedance evaluations by means of the eis technique. both analyses were carried out with a high-reliability autolab 302n potentiostat/galvanostat (whose potential accuracy and current accuracy are both ±0.2% of the full-scale value). it is worth noting that, due to the instrument calibration and performance, the measurements were considered reliable, with no impact of uncertainty on the model; the robustness of the model will be investigated in a future work. the capacity tests, constituted by a galvanostatic discharge at nominal c-rate and room temperature, allowed the extraction of the characteristic parameters indicative of the soh of the cell. the recorded discharge curves at the beginning of life (bol) and at different soh levels are reported in figure 2a. in particular, the residual capacity (cd) and residual energy (ed) were collected and used as output variables of the database. the value of cd was obtained by integrating the actual current (id) between the beginning of discharge (t0) and the end of discharge (tf), within the upper and lower voltage cut-off limits:

𝐶𝑑 = ∫ 𝐼𝑑(𝑡) d𝑡 , with the integral taken from t0 to tf . (1)

table 1. main characteristics of the tested li-ion cell.
nominal voltage: 3.7 v
nominal capacity: 1.1 ah
max charge current: 4 a
max discharge current: 10 a
maximum voltage: 4.2 v
minimum voltage: 2.5 v
discharge temperature: -30 to 60 °c
charge temperature: 0 to 60 °c
chemistry: licoo2-linicomno2/graphite

figure 1. power profile used to age the battery according to a frequency regulation profile extrapolated from the international standard iec 61427-2.

the quantity ed was obtained by integrating the actual power (pd) between the beginning of discharge (t0) and the end of discharge (tf), within the upper and lower voltage cut-off limits:

𝐸𝑑 = ∫ 𝑃𝑑(𝑡) d𝑡 , with the integral taken from t0 to tf , (2)

where 𝑃𝑑(𝑡) = 𝑉(𝑡) ∙ 𝐼𝑑(𝑡), with 𝑉(𝑡) and 𝐼𝑑(𝑡) representing the instantaneous values of voltage and current, respectively. the soh levels were defined in terms of the capacity loss of the cell identified during each parametric check-up. as input variables of the algorithm, the complex impedance values were collected at different frequencies and soh levels of the cell. this information came from the eis analyses carried out during the parametric check-ups. to create the database, the impedance of the cell was recorded at different socs (100%, 75%, 50%, 25%, 0%) at bol and every ten days of operation under the fr cycle, until a loss of capacity (closs) of about 8% was reached. the loss of capacity (closs, as an effect of ageing) was used as the parameter indicative of the cell soh. nyquist plots of the impedance recorded for different socs at bol and five different soh levels are reported in figure 2b. the impedance was recorded in the frequency range between 10 mhz and 10 khz with ten points per decade, which leads to 61 values for each soc. finally, the data set used for the case study consists of 1830 impedance measurements. table 2 contains some statistical information on the dataset used.
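for sampled data, equations (1) and (2) reduce to a numerical integration of the logged current and power. a minimal sketch with trapezoidal integration follows; the helper names are ours, and the synthetic constant-current discharge is chosen only to match the 1.1 ah nominal capacity in table 1, not taken from the measured logs.

```python
def trapz(y, t):
    """trapezoidal integration of samples y over abscissae t."""
    return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2.0
               for i in range(len(t) - 1))

def capacity_and_energy(t_s, i_a, v_v):
    """residual capacity c_d (ah) and residual energy e_d (wh) from a
    discharge log, per equations (1) and (2): c_d = integral of i_d dt
    and e_d = integral of v * i_d dt, both taken between t0 and tf."""
    t_h = [t / 3600.0 for t in t_s]              # seconds -> hours
    p_w = [v * i for v, i in zip(v_v, i_a)]      # p_d(t) = v(t) * i_d(t)
    return trapz(i_a, t_h), trapz(p_w, t_h)

# synthetic 1c discharge: 1.1 a held for one hour at a constant 3.7 v
t = [k * 36.0 for k in range(101)]               # 0 .. 3600 s
i = [1.1] * 101
v = [3.7] * 101
cd, ed = capacity_and_energy(t, i, v)
print(f"c_d = {cd:.3f} ah, e_d = {ed:.3f} wh")   # 1.100 ah, 4.070 wh
```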
4. methodology

the above-mentioned dataset has been used to test various classification and regression techniques through the scikit-learn tool [33], an open-source library for machine learning developed in python. among them, k-nearest neighbors (knn), linear discriminant analysis (lda), gaussian naive bayes (gnb), support vector classification (svc), decision tree (dt), linear regressor, lasso, ridge and random forest were considered for performance comparison. in order to avoid the results being influenced by a particular partitioning of the data, a cross-validation technique was also used. in particular, in this phase the original data set was partitioned into 5 subsets (folds) used for tests and training. in the case of the regressors, the values of the mae and of the determination coefficient (r2) were calculated for each round. similarly, the accuracy (acc) was measured for the classifiers. the models were then compared on the basis of the average values of the aforementioned metrics obtained in the 5 validation rounds. the standard deviation (std) of the same metrics was also determined, which provides information on the robustness of the model (lower values of std generally correspond to more robust models).

5. results

5.1. data analysis

first, correlation coefficients were analysed to investigate the relationships among the impedance measurements and the corresponding soc and closs values. the correlation coefficients of soc and closs, specifically obtained for the fr cycle, are summarised in table 3 for both the rectangular and polar forms of the impedance. the analysis of the correlation coefficients shows that the highest correlation value is between the closs measurements and the real part of the impedance (re(z)), for which a correlation coefficient of 0.471 was obtained. it is also possible to observe that the correlation coefficient obtained between closs and the impedance module (abs(z)) is only slightly smaller (0.456).
The similarity between these two correlation coefficients suggests that, for the purpose of Closs modelling, either the modulus or the real part of the impedance can be used. In the case of SOC, the highest correlation is obtained with the impedance phase (Arg(Z)), for which a correlation coefficient of 0.337 was obtained, while the rectangular-coordinate values are essentially uncorrelated with SOC. It can therefore be assumed that, for SOC modelling, the phase values of the impedance are the most useful, at least for this set of data. As a consequence, machine learning algorithms are expected to perform better with impedance values represented in polar coordinates rather than rectangular ones.

Table 2. Statistical data of the dataset used.

                    f (Hz)       Re{Z} (Ω)     Im{Z} (Ω)      SOC (%)   Closs (%)
Range (min-max)     0.01-10000   0.041-0.207   -0.072-0.067   0-100     0-8.27
Mean                797          0.0762        0.0019         50.00     4.54
STD                 1951.6       0.0212        0.0175         35.36     2.69

Figure 2. a) Discharge curves for extrapolation of the output variables; b) Nyquist plot of the impedance used as input variables of the database.

The above analysis was repeated considering only the EIS impedance data corresponding to frequency values lower than 350 Hz; henceforward, this data set is referred to as the filtered data. Indeed, as also reported in [34], where a similar lithium cell was used, the most important features induced by ageing on the physico-chemical processes were observed only for the negative imaginary part of the impedance, which, in our case, matches the selected filtered frequency range. Moreover, it is well known that EIS at moderate and high frequency is strongly dependent on the experimental setup and cables, thus leading to measurement errors and scattered data [35].
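The selection of the filtered data can be sketched on the paper's measurement grid (10 mHz to 10 kHz, ten points per decade, 61 values per SOC). The impedance vector below is a dummy placeholder:

```python
import numpy as np

# Sketch of the "filtered data" selection: keep only the EIS points whose
# excitation frequency is below 350 Hz.
freqs = np.logspace(-2, 4, 61)           # 10 mHz ... 10 kHz, 61 values
z = np.full(61, 0.076 + 0.002j)          # placeholder impedance spectrum

mask = freqs < 350.0
freqs_filtered, z_filtered = freqs[mask], z[mask]
```

On this grid the largest retained frequency is 10^2.5 ≈ 316 Hz, so 46 of the 61 points per SOC survive the filter.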
Accordingly, the comparison of the correlation coefficients reported in Table 3 and Table 4 (for the original and filtered data, respectively) reveals a marked improvement in the SOC correlation when only the low-frequency measurements are considered. In particular, in the case of the filtered data, a correlation coefficient of 0.706 was obtained between the SOC and the impedance phase, considerably higher than the value of 0.337 obtained for the original dataset. As a consequence, machine learning algorithms are expected to provide higher performance when trained with the filtered dataset.

5.2. Comparison of machine learning algorithms

The performance of several machine learning classifiers and regressors was evaluated and compared. Among them, k-nearest neighbours (kNN), linear discriminant analysis (LDA), Gaussian naive Bayes (GNB), support vector classification (SVC) and decision tree (DT) were considered as representative classification algorithms. These algorithms were compared in terms of the accuracy achieved on both the original and the filtered dataset, using 5-fold cross-validation. Table 5 shows the average values and the standard deviation of the accuracy obtained for the above classifiers in the case of SOC prediction, with the algorithms trained on the original dataset. The use of the polar representation leads to an improvement in accuracy for all classifiers, with an increase between 40% and 270% depending on the classifier. For both representations (rectangular/polar), the decision tree (DT) exhibits the best performance, with an average accuracy of 0.915 for the polar representation. This analysis was repeated considering the filtered dataset, i.e. after removing the high-frequency points from the original dataset. As can be seen from Table 6, filtering improves the accuracy of almost all classifiers (for the sake of clarity, Table 6 reports only the results for polar coordinates).
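The 5-fold cross-validation comparison of the classifiers can be sketched with scikit-learn as follows. Random separable data stands in for the impedance features, so the accuracies will not match Table 5:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the impedance dataset.
X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           n_classes=3, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "GNB": GaussianNB(),
    "SVC": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
}

# 5-fold cross-validation: models are ranked by mean accuracy, with the
# standard deviation across folds as a rough robustness indicator.
summary = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    summary[name] = (scores.mean(), scores.std())
```

All classifiers are used with their scikit-learn defaults, mirroring the unoptimised comparison reported in the paper.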
For a better comparison, Figure 3 shows box plots with the median value (orange line), quartiles and range of the accuracy values (minimum and maximum) for the algorithms trained on the filtered (Figure 3b) and original (Figure 3a) datasets. The comparison between Figure 3a and Figure 3b shows that, for the LDA classifier, the use of the filtered values leads to a marked improvement in performance. Moreover, in the case of DT, in addition to an increase in the average accuracy, there is also a significant reduction in data dispersion, which explains the lower standard deviation obtained for the filtered data in Table 6. Similar considerations hold when the F1 metric [36] is used for the comparison: as shown in Figure 4, which reports the macro-averaged F1 score obtained for the same classifiers on both the filtered and unfiltered datasets, the DT classifier achieves the best results also under this metric. Finally, Figure 5 shows the confusion matrix obtained for the DT classifier on the filtered dataset for an 80/20 split, i.e. with 80% of the data used for training and 20% for testing. Of the 272 SOC values tested, only 16 were wrongly classified, giving an accuracy on this specific test set of 94.31%. The resulting classifier can therefore be effectively used to evaluate the state of charge of the battery starting from the impedance and, in particular, to predict when the state of charge is below 50%. It is worth mentioning that the choice of classifiers instead of regressors is related to the specific application: in some cases, classifiers able to detect discrete SOC values are useful for detecting when specific critical threshold levels have been reached, e.g. the 20% capacity reduction commonly used for automotive applications.
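The 80/20 evaluation with a confusion matrix can be sketched as follows. Synthetic data stands in for the five SOC classes (0/25/50/75/100 %), so the matrix entries will not match Figure 5:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical five-class SOC dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=5, random_state=1)

# 80/20 split: 80% of the data for training, 20% for testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=1)

clf = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
cm = confusion_matrix(y_te, y_pred)   # rows: true class, columns: predicted
acc = accuracy_score(y_te, y_pred)    # trace(cm) / cm.sum()
```

The diagonal of the matrix counts correct classifications per class; its trace divided by the total number of test samples reproduces the accuracy.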
It is worth noting that, in the previous analysis, the scikit-learn default values were used for all classifiers, i.e. all classifiers were applied without any optimisation. This partially explains why most classifiers exhibit poor performance. In addition, it is well known that LDA, like other linear classifiers and regressors such as Ridge and Lasso, adapts well to linear models, whereas the dependence of SOC on the impedance curves is not linear. Nevertheless, it is generally worth testing and comparing such methods because of their lower computational complexity.

Table 3. Correlation matrix for the impedance measures evaluated on the original (unfiltered) data.

         Re{Z} (Ω)   Im{Z} (Ω)   Abs{Z} (Ω)   Arg{Z} (Ω)
Closs    +0.471      +0.044      +0.456       -0.002
SOC      -0.166      -0.103      -0.170       -0.337

Table 4. Correlation matrix for the impedance measures evaluated only on the low-frequency data.

         Re{Z} (Ω)   Im{Z} (Ω)   Abs{Z} (Ω)   Arg{Z} (Ω)
Closs    +0.477      +0.119      +0.458       -0.001
SOC      -0.213      -0.1239     -0.215       -0.706

Table 5. Accuracy of the classifiers used for modelling the SOC from the unfiltered data, in rectangular and polar representation.

Classifier   Representation of Z   Mean    STD
LDA          rectangular           0.234   0.027
LDA          polar                 0.333   0.045
GNB          rectangular           0.196   0.020
GNB          polar                 0.370   0.053
SVC          rectangular           0.192   0.014
SVC          polar                 0.380   0.068
kNN          rectangular           0.222   0.072
kNN          polar                 0.383   0.088
DT           rectangular           0.329   0.020
DT           polar                 0.915   0.047

Table 6. Accuracy of the classifiers used for modelling the SOC in polar representation, for the original and filtered data.

Classifier   Mean (original)   STD (original)   Mean (filtered)   STD (filtered)
LDA          0.333             0.045            0.602             0.192
GNB          0.370             0.053            0.397             0.027
SVC          0.380             0.068            0.374             0.066
kNN          0.383             0.088            0.392             0.084
DT           0.915             0.047            0.938             0.024
Therefore, a similar analysis was carried out for the linear, Lasso, elastic net, Ridge, gradient boosting, AdaBoost and random forest regressors, with the main difference that, in the case of regressors, performance was measured in terms of MAE and the coefficient of determination (R2). The regressor with the best performance in terms of both R2 and MAE was the random forest. The distributions of the values predicted by the random forest regressor, when trained with the filtered data, are reported in Figure 6a and Figure 6b for the modelling of SOC and Closs, respectively. Figure 6 also reports the average values and standard deviations of R2 and MAE. In particular, in the case of SOC, an average R2 of 0.98 was achieved (see Figure 6a). In comparison to [37], which considered unfiltered data, a significant reduction of the MAE was obtained, from 2.65 to 1.87. In addition, as discussed in the following subsection, the use of filtered data leads to models with lower complexity.

5.3. Analysis of the random forest parameters

Different trade-offs between the performance and complexity of machine learning algorithms can be obtained by a proper tuning of the related parameters. In the specific case of the random forest, the most important parameters affecting both performance and overall complexity are the number of trees (n_estimators) and the maximum depth of the trees (max_depth). Generally, increasing one or both of these parameters improves performance at the cost of greater complexity and estimation time. Table 7 shows the R2 and MAE metrics obtained with the random forest for some combinations of n_estimators and max_depth on the original (unfiltered) data. The metrics are most affected by the max_depth parameter: a maximum R2 of 0.97 is achieved by setting max_depth = 30, and higher values increase the computational complexity without significant performance advantages.
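The trade-off study over n_estimators and max_depth can be sketched as a small cross-validated parameter sweep. Synthetic regression data replaces the impedance set, so the metrics will not match Table 7:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

# Hypothetical stand-in for the filtered impedance dataset.
X, y = make_regression(n_samples=400, n_features=6, noise=5.0, random_state=0)

# Cross-validate a random forest regressor for a few parameter pairs and
# record the mean R2 and mean MAE over the 5 folds.
results = {}
for n_estimators, max_depth in [(10, 5), (10, 30), (100, 30)]:
    rf = RandomForestRegressor(n_estimators=n_estimators,
                               max_depth=max_depth, random_state=0)
    cv = cross_validate(rf, X, y, cv=5,
                        scoring=("r2", "neg_mean_absolute_error"))
    results[(n_estimators, max_depth)] = (
        cv["test_r2"].mean(),                          # mean R2
        -cv["test_neg_mean_absolute_error"].mean(),    # mean MAE
    )
```

Scanning the resulting dictionary makes the complexity/performance trade-off explicit: each added tree or depth level costs training and prediction time, so the smallest pair whose metrics plateau is the natural choice.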
As regards the other parameter investigated (n_estimators), there is no substantial difference in the values of R2 and MAE obtained by fixing max_depth = 30 and using n_estimators values higher than 100. This analysis leads to the conclusion that, in the case of unfiltered data, the optimal values of the random forest parameters are max_depth = 30 and n_estimators = 100, which are the parameters used in [37]. The same analysis was conducted for the filtered data, and the related results are summarised in Table 8. In this case, better results are achieved even with lower values of the parameters: for instance, the performance obtained using filtered data with max_depth = 10 and n_estimators = 10 is better than that obtained using unfiltered data with max_depth = 30 and n_estimators = 100. Thus, by training the algorithm with filtered data, models with better performance and lower complexity were obtained.

Figure 3. Accuracy of the machine learning algorithms on the SOC estimation for a) unfiltered and b) filtered data.

Figure 4. F1 metric results for a) unfiltered and b) filtered data.

Figure 5. Confusion matrix of the DT classifier.

6. Conclusions

Starting from impedance measurements, different machine learning techniques were analysed as predictors of the state of charge and the loss of capacity of a lithium battery subjected to a frequency regulation profile for grid applications. According to the results, the following conclusions can be drawn: for the training of machine learning techniques, the use of impedance values expressed in polar form is to be preferred; decision trees and random forest provided superior performance compared to the other machine learning techniques analysed; using low-frequency data for training the random forest regressor improved performance in terms of R2 and MAE for both state-of-charge and capacity-loss prediction and largely reduced the overall complexity.
Acknowledgement

Special thanks to the Italian Ministry of Economic Development for funding this activity.

References

[1] G. Hackeling, Mastering Machine Learning with scikit-learn, Packt Publishing, 2014.
[2] L. Ren, L. Zhao, S. Hong, S. Zhao, H. Wang, L. Zhang, Remaining useful life prediction for lithium-ion battery: a deep learning approach, IEEE Access 6 (2018), pp. 50587-50598. DOI: 10.1109/access.2018.2858856
[3] P. Venugopal, State-of-health estimation of Li-ion batteries in electric vehicle using IndRNN under variable load condition, Energies 12(22) (2019), art. 4338. DOI: 10.3390/en12224338
[4] P. Khumprom, N. Yodo, A data-driven predictive prognostic model for lithium-ion batteries based on a deep learning algorithm, Energies 12(4) (2019), art. 660. DOI: 10.3390/en12040660
[5] J. Meng, G. Luo, M. Ricco, M. Swierczynski, D.-I. Stroe, R. Teodorescu, Overview of lithium-ion battery modeling methods for state-of-charge estimation in electrical vehicles, Applied Sciences 8(5) (2018), art. 659. DOI: 10.3390/app8050659
[6] C. Lin, A. Tang, W. Wang, A review of SOH estimation methods in lithium-ion batteries for electric vehicle applications, Energy Procedia 75 (2015), pp. 1920-1925. DOI: 10.1016/j.egypro.2015.07.199
[7] C. Weng, Y. Cui, J. Sun, H. Peng, On-board state of health monitoring of lithium-ion batteries using incremental capacity analysis with support vector regression, Journal of Power Sources 235 (2013), pp. 36-44. DOI: 10.1016/j.jpowsour.2013.02.012
[8] R. R. Richardson, C. R. Birkl, M. A. Osborne, D. A. Howey, Gaussian process regression for in situ capacity estimation of lithium-ion batteries, IEEE Transactions on Industrial Informatics 15(1) (2019), pp. 127-138. DOI: 10.1109/tii.2018.2794997
[9] X. Xu, N. Chen, A state-space-based prognostics model for lithium-ion battery degradation, Reliability Engineering and System Safety 159 (2017), pp. 47-57. DOI: 10.1016/j.ress.2016.10.026
[10] M. A. Patil, P. Tagade, K. S. Hariharan, S. M. Kolake, T.
Song, T. Yeo, S. Doo, A novel multistage support vector machine based approach for Li-ion battery remaining useful life estimation, Applied Energy 159 (2015), pp. 285-297. DOI: 10.1016/j.apenergy.2015.08.119

Table 7. R2 and MAE values obtained with the random forest technique varying the parameters n_estimators and max_depth (on unfiltered data).

n_estimators   max_depth   R2            MAE
10             5           0.79 (0.01)   10.47 (0.29)
10             10          0.93 (0.01)   4.49 (0.38)
10             30          0.96 (0.01)   3.32 (0.50)
10             50          0.96 (0.01)   3.34 (0.26)
25             30          0.97 (0.01)   3.11 (0.24)
50             50          0.97 (0.01)   3.02 (0.34)
100            5           0.80 (0.01)   10.43 (0.24)
100            10          0.93 (0.01)   4.43 (0.29)
100            30          0.97 (0.01)   3.02 (0.36)
100            50          0.97 (0.01)   3.03 (0.34)
1000           30          0.97 (0.01)   2.99 (0.30)

Table 8. R2 and MAE values obtained with the random forest technique varying the parameters n_estimators and max_depth (on filtered data).

n_estimators   max_depth   R2            MAE
10             5           0.95 (0.01)   4.62 (0.30)
10             10          0.98 (0.00)   1.94 (0.22)
10             30          0.98 (0.00)   1.94 (0.22)
10             50          0.98 (0.00)   1.84 (0.29)
25             30          0.98 (0.00)   1.89 (0.15)
50             50          0.98 (0.00)   1.91 (0.18)
100            5           0.95 (0.01)   4.50 (0.20)
100            10          0.98 (0.00)   1.89 (0.13)
100            30          0.98 (0.00)   1.8 (0.21)
100            50          0.98 (0.00)   1.87 (0.22)
1000           30          0.98 (0.00)   1.83 (0.18)

Figure 6. Random forest distribution on filtered data for a) SOC and b) capacity loss.

[11] C. Lu, L. Tao, H. Fan, Li-ion battery capacity estimation: a geometrical approach, Journal of Power Sources 261 (2014), pp. 141-147. DOI: 10.1016/j.jpowsour.2014.03.058
[12] D. I. Stroe, M. Swierczynski, A. I. Stan, V. Knap, R. Teodorescu, S. J.
Andreasen, Diagnosis of lithium-ion batteries state-of-health based on electrochemical impedance spectroscopy technique, 2014 IEEE Energy Conversion Congress and Exposition (ECCE), Pittsburgh, PA, USA, 14-18 September 2014, pp. 4576-4582. DOI: 10.1109/ecce.2014.6954027
[13] D. Andre, M. Meiler, K. Steiner, C. Wimmer, T. Soczka-Guth, D. U. Sauer, Characterization of high-power lithium-ion batteries by electrochemical impedance spectroscopy. I. Experimental investigation, Journal of Power Sources 196(12) (2011), pp. 5334-5341. DOI: 10.1016/j.jpowsour.2010.12.102
[14] F. Huet, A review of impedance measurements for determination of the state-of-charge or state-of-health of secondary batteries, Journal of Power Sources 70(1) (1998), pp. 59-69. DOI: 10.1016/s0378-7753(97)02665-7
[15] I. Masmitjà Rusinyol, J. González, G. Masmitjà, S. Gomáriz, J. del-Río-Fernández, Power system of the Guanay II AUV, Acta IMEKO 4(1) (2015), pp. 35-43. DOI: 10.21014/acta_imeko.v4i1.161
[16] S. Buteau, J. R. Dahn, Analysis of thousands of electrochemical impedance spectra of lithium-ion cells through a machine learning inverse model, Journal of the Electrochemical Society 166(8) (2019), art. A1611. DOI: 10.1149/2.1051908jes
[17] Y. Zhang, Q. Tang, Y. Zhang, J. Wang, U. Stimming, A. A. Lee, Identifying degradation patterns of lithium ion batteries from impedance spectroscopy using machine learning, Nature Communications 11 (2020), art. 1706. DOI: 10.1038/s41467-020-15235-7
[18] F. Liu, X. Liu, W. Su, H. Lin, H. Chen, M. He, An online state of health estimation method based on battery management system monitoring data, International Journal of Energy Research 44(8) (2020), pp. 6338-6349. DOI: 10.1002/er.5351
[19] P. Singh, R. Vinjamuri, X. Wang, D. Reisner, Design and implementation of a fuzzy logic-based state-of-charge meter for Li-ion batteries used in portable defibrillators, Journal of Power Sources 162(2) (2006), pp. 829-836. DOI: 10.1016/j.jpowsour.2005.04.039
[20] S. B. Sarmah, P.
Kalita, A. Garg, X.-D. Niu, X.-W. Zhang, X. Peng, D. Bhattacharjee, A review of state of health estimation of energy storage systems: challenges and possible solutions for futuristic applications of Li-ion battery packs in electric vehicles, Journal of Electrochemical Energy Conversion and Storage 16(4) (2019), art. 040801. DOI: 10.1115/1.4042987
[21] A. Nuhic, T. Terzimehic, T. Soczka-Guth, M. Buchholz, K. Dietmayer, Health diagnosis and remaining useful life prognostics of lithium-ion batteries using data-driven methods, Journal of Power Sources 239 (2013), pp. 680-688. DOI: 10.1016/j.jpowsour.2012.11.146
[22] Z. Chen, M. Sun, X. Shu, R. Xiao, J. Shen, Online state of health estimation for lithium-ion batteries based on support vector machine, Applied Sciences 8(6) (2018), art. 925. DOI: 10.3390/app8060925
[23] V. Klass, M. Behm, G. Lindbergh, A support vector machine-based state-of-health estimation method for lithium-ion batteries under electric vehicle operation, Journal of Power Sources 270 (2015), pp. 262-272. DOI: 10.1016/j.jpowsour.2014.07.116
[24] J. Meng, L. Cai, G. Luo, D.-I. Stroe, R. Teodorescu, Lithium-ion battery state of health estimation with short-term current pulse test and support vector machine, Microelectronics Reliability 88-90 (2018), pp. 1216-1220. DOI: 10.1016/j.microrel.2018.07.025
[25] X. Feng, C. Weng, X. He, X. Han, L. Lu, D. Ren, M. Ouyang, Online state-of-health estimation for Li-ion battery using partial charging segment based on support vector machine, IEEE Transactions on Vehicular Technology 68(9) (2019), pp. 8583-8592. DOI: 10.1109/tvt.2019.2927120
[26] M. Berecibar, I. Gandiaga, I. Villarreal, N. Omar, J. Van Mierlo, P. Van den Bossche, Critical review of state of health estimation methods of Li-ion batteries for real applications, Renewable and Sustainable Energy Reviews 56 (2016), pp. 572-587. DOI: 10.1016/j.rser.2015.11.042
[27] J. Qu, F. Liu, Y. Ma, J.
Fan, A neural-network-based method for RUL prediction and SOH monitoring of lithium-ion battery, IEEE Access 7 (2019), pp. 87178-87191. DOI: 10.1109/access.2019.2925468
[28] G. Ma, Y. Zhang, C. Cheng, B. Zhou, P. Hu, Y. Yuan, Remaining useful life prediction of lithium-ion batteries based on false nearest neighbors and a hybrid neural network, Applied Energy 253 (2019), art. 113626. DOI: 10.1016/j.apenergy.2019.113626
[29] R. La Rosa, A. Y. S. Pandiyan, C. Trigona, B. Andò, S. Baglio, An integrated circuit to null standby using energy provided by MEMS sensors, Acta IMEKO 9(4) (2020), pp. 144-150. DOI: 10.21014/acta_imeko.v9i4.741
[30] G. Campobello, D. Dell'Aquila, M. Russo, A. Segreto, Neurogenetic programming for multigenre classification of music content, Applied Soft Computing 94 (2020), art. 106488. DOI: 10.1016/j.asoc.2020.106488
[31] International Standard IEC 61427-2: Secondary cells and batteries for renewable energy storage - General requirements and methods of test - Part 2: On-grid applications, 2015.
[32] S. Ma, M. Jiang, P. Tao, C. Song, J. Wu, J. Wang, T. Deng, W. Shang, Temperature effect and thermal impact in lithium-ion batteries: a review, Progress in Natural Science: Materials International 28(6) (2018), pp. 653-666. DOI: 10.1016/j.pnsc.2018.11.002
[33] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, Scikit-learn: machine learning in Python, The Journal of Machine Learning Research 12 (2011), pp. 2825-2830. Online [accessed 09 June 2021]: http://jmlr.org/papers/v12/pedregosa11a.html
[34] V. J. Ovejas, Impedance characterization of an LCO-NMC/graphite cell: ohmic conduction, SEI transport and charge-transfer phenomenon, Batteries 4(3) (2018), art. 43. DOI: 10.3390/batteries4030043
[35] T. F. Landinger, G. Schwarzberger, A.
Jossen, A novel method for high frequency battery impedance measurements, 2019 IEEE International Symposium on Electromagnetic Compatibility, Signal & Power Integrity (EMC+SIPI), New Orleans, LA, USA, 22-26 July 2019, pp. 106-110. DOI: 10.1109/isemc.2019.8825315
[36] M. L. Zhang, Z. H. Zhou, A review on multi-label learning algorithms, IEEE Transactions on Knowledge and Data Engineering 26(8) (2014), pp. 1819-1837. DOI: 10.1109/tkde.2013.39
[37] D. Aloisio, G. Campobello, S. G. Leonardi, A. Segreto, N. Donato, A machine learning approach for evaluation of battery state of health, 24th IMEKO TC4 International Symposium and 22nd International Workshop on ADC and DAC Modelling and Testing, Palermo, Italy, 14-16 September 2020, pp. 129-134. Online [accessed 09 June 2021]: https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-25.pdf
https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-25.pdf journal contacts acta imeko issn: 2221-870x june 2021, volume 10, number 2 acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 0 journal contacts about the journal acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are mainly based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x. editor‐in‐chief francesco lamonaca, italy founding editor‐in‐chief paul p. l. regtien, netherlands associate editor dirk röske, germany editorial board section editors (vol. 7 10) leopoldo angrisani, italy filippo attivissimo, italy eulalia balestieri, italy eric benoit, france paolo carbone, italy lorenzo ciani, italy catalin damian, romania pasquale daponte, italy luca de vito, italy luigi ferrigno, italy edoardo fiorucci, italy alistair forbes, united kingdom helena geirinhas ramos, portugal sabrina grassini, italy fernando janeiro, portugal konrad jedrzejewski, poland andy knott, united kingdom yasuharu koike, japan francesco lamonaca, italy massimo lazzaroni, italy fabio leccese, italy rosario morello, italy michele norgia, italy pedro miguel pinto ramos, portugal nicola pompeo, italy sergio rapuano, italy gustavo ripper, brazil maik rosenberger, germany dirk röske, germany alexandru salceanu, romania constantin sarmasanu, romania lorenzo scalise, italy emiliano schena, italy enrico silva, italy krzysztof stepien, poland ronald summers, uk marco tarabini, italy yvan baudoin, belgium francesco bonavolonta, italy giuseppe caravello, italy carlo carobbi, italy marcantonio catelani, italy mauro d’arco, italy egidio de benedeto, italy alessandro depari, italy alessandro germak, italy min-seok kim, korea momoko kojima, japan koji ogushi, japan vilmos palfi, 
hungary franco pavese, italy jeerasak pitakarnnop, thailand jan saliga, slovakia emiliano sisinni, italy ciro spataro, italy oscar tamburis, italy jorge c. torres-guzman, mexico ioan tudosa, italy ian veldman, south africa rugkanawan wongpithayadisai, thailand claudia zoani, italy about imeko the international measurement confederation, imeko, is an international federation of actually 42 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry. addresses principal contact prof. francesco lamonaca university of calabria department of computer science, modelling, electronic and system science via p. bucci, 41c, vi floor, arcavacata di rende, 87036 (cs), italy e-mail: f.lamonaca@dimes.unical.it acta imeko giuseppe caravello, e-mail: giuseppe.caravello02@unipa.it ciro spataro, e-mail: ciro.spataro@unipa.it support contact dr. dirk röske physikalisch-technische bundesanstalt (ptb) bundesallee 100, 38116 braunschweig, germany e-mail: dirk.roeske@ptb.de mailto:f.lamonaca@dimes.unical.it mailto:giuseppe.caravello02@unipa.it mailto:ciro.spataro@unipa.it mailto:dirk.roeske@ptb.de announcement of acta imeko second issue 2022 acta imeko issn: 2221-870x may 2022, volume 11, number 2, 1 acta imeko | www.imeko.org may 2022 | volume 11 | number 2 | 1 announcement of acta imeko second issue 2022 francesco lamonaca1 1 department of department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy section: editorial citation: francesco lamonaca, announcement of acta imeko second issue 2022, acta imeko, vol. 11, no. 
2, article 1, may 2022, identifier: imeko-acta11 (2022)-02-01 received may 4, 2022; in final form may 4, 2022; published may 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org dear readers, the second issue of volume 11 is started. for acta imeko, this is the first editorial announcing the start of an issue, since from now the submitted papers will be published as soon as they are ready. this new publication policy is in line with the actions towards the speed-up of the publication time and the attracting of high-value papers. this issue will include the papers of the general issue together with the special issue segment of selected papers from the two events organized by tc17, the imeko technical committee on robotic measurement and managed by prof zafar taqvi, research fellow at the university of houston clear lake in texas. annually tc17 organizes "international symposium on measurements and control in robotics" (ismcr), a full-fledged event, focusing on various aspects of international research, applications, and trends of robotic innovations for benefit of humanity, advanced human-robot systems, and applied technologies, e.g. in the allied fields of telerobotics, telexistance, simulation platforms, and environment, and mobile work machines as well as virtual reality (vr), augmented reality (ar), and 3d modelling and simulation. during the imeko congress years, tc17 organizes only "topical events." in 2021, tc17 organized two virtual topical events, both following the covid-19 restrictions. 
ismcr2021 had a theme "virtual media technologies for the post covid19 era" and the other tc17-vrise was a jointly organized event with the theme "robotics for risky interventions and environmental surveillance.” these symposia are forums for the exchange of recent research results and futuristic ideas in robotics technologies and applications. it is of interest to a wide range of participants from government agencies, relevant international institutions, universities, and research organizations, working with futuristic applications of automated vehicles. the presentation is also of interest to the media as well the general public. the papers in the special issue segment were specially selected from the above two events. we are sure that acta imeko readers will found this special issue a further source of ideas for their specific research fields. furter information about the published papers will be given in june, as usual, in the introductory note. we hope that you will enjoy your readings and that you can confirm acta imeko as your main source to find new solutions and ideas and a valuable resource for spreading your results. zafar taqvi chairperson of imeko tc17 francesco lamonaca editor in chief mailto:editorinchief.actaimeko@hunmeko.org journal contacts acta imeko issn: 2221-870x december 2021, volume 10, number 4 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 journal contacts about the journal acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are mainly based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x. 
about imeko the international measurement confederation, imeko, is an international federation of actually 42 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry. addresses principal contact prof. francesco lamonaca university of calabria department of computer science, modelling, electronic and system science via p. bucci, 41c, vi floor, arcavacata di rende, 87036 (cs), italy e-mail: editorinchief.actaimeko@hunmeko.org support contact dr. dirk röske physikalisch-technische bundesanstalt (ptb) bundesallee 100, 38116 braunschweig, germany e-mail: dirk.roeske@ptb.de editor‐in‐chief francesco lamonaca, italy founding editor‐in‐chief paul p. l. regtien, netherlands associate editor dirk röske, germany copy editors egidio de benedetto, italy silvia sangiovanni, italy layout editors dirk röske, germany leonardo iannucci, italy domenico luca carnì, italy editorial board leopoldo angrisani, italy filippo attivissimo, italy eulalia balestieri, italy eric benoit, france paolo carbone, italy lorenzo ciani, italy catalin damian, romania pasquale daponte, italy luca de vito, italy luigi ferrigno, italy edoardo fiorucci, italy alistair forbes, united kingdom helena geirinhas ramos, portugal sabrina grassini, italy fernando janeiro, portugal konrad jedrzejewski, poland andy knott, united kingdom yasuharu koike, japan francesco lamonaca, italy massimo lazzaroni, italy fabio leccese, italy rosario morello, italy michele norgia, italy franco pavese, italy pedro miguel pinto ramos, portugal nicola pompeo, italy sergio rapuano, italy gustavo ripper, brazil maik rosenberger, germany dirk röske, germany alexandru salceanu, romania constantin sarmasanu, romania lorenzo scalise, italy emiliano 
schena, italy enrico silva, italy krzysztof stepien, poland ronald summers, uk marco tarabini, italy section editors (vols. 7–10) yvan baudoin, belgium piotr bilski, poland francesco bonavolonta, italy giuseppe caravello, italy carlo carobbi, italy marcantonio catelani, italy mauro d’arco, italy egidio de benedetto, italy alessandro depari, italy alessandro germak, italy istván harmati, hungary min-seok kim, korea bálint kiss, hungary momoko kojima, japan koji ogushi, japan vilmos palfi, hungary jeerasak pitakarnnop, thailand jan saliga, slovakia emiliano sisinni, italy ciro spataro, italy oscar tamburis, italy jorge c. torres-guzman, mexico ioan tudosa, italy ian veldman, south africa rugkanawan wongpithayadisai, thailand claudia zoani, italy acta imeko | www.imeko.org december 2021 | volume 10 | number 4 reviewers acta imeko would like to gratefully acknowledge the eminent work done by reviewers during the peer-review process. their contribution is a fundamental service for the benefit of this journal and for the whole scientific community. each of the reviewers listed below provided at least one review for the 2021 issues.
2021 acta imeko reviewers efstathios adamopoulos imran ahmed gregorio andria emma angelini marco arnesano giovanni artale grigor babayan valerio baiocchi eszter szatmáriné bakonyi eric benoit marta berardengo piotr bilski ileana bodini alessandro bosman marius branzila thomas bruns tatyana bubela tamás bubonyi bruno bueno domenico carnì carlo carobbi miguel carrasco andrea cataldo umberto cesaro paolo chiariotti giovanni chiorboli in-mook choi lorenzo ciani alfredo cigada francesco clementi nicola conci valentina cosentino gloria cosoli marija cundeva-blajer livio d’alvia mauro d’arco leonardo d'acquisto stuart davidson egidio de benedetto tilde de caro geert de cubber raffaella de marco silvio del pizzo michail delagrammatikas carolina del-valle-soto alessandro depari yufan ding john peter djungha nicola donato aime lay ekuakille leila es sebar antonio esposito laura fabbiano giuseppe ferro antonio formisano cristian fosalau péter gàbor salvatore gaglione antonella gaspari nicola giaquinto emese gincsainé szádeczky-kardoss sabrina grassini anna maria gueli giulia guidi rishi gupta szilvia gyongyosi istván harmati jan holub leonardo iannucci ilaria ingrosso zsolt kemény yeongdae kim hyeonseok kim bàlint kiss tokihiko kobata yasuharu koike momoko kojima pawel komada naoki kuramoto francesco lamonaca marco laracca massimo lazzaroni fabio leccese fei liu christophe lohr andrea mariscotti eugenio martinelli giancarlo micheli gabriele milani rosario morello antonio moschitta ákos nagy hideaki nozato amman oglat dinko oletic vincenzo paciello imre paniti marco parvis nicola pasquino gabriele patrizi franco pavese francesco picariello enrico picariello giovanni pilato cristina piselli jeerasak pitakarnnop emanuele piuzzi antonino quattrocchi sergio rapuano luis miguel blanes restoy likit sainoo alexandru salceanu jan saliga constantin sarmasanu jurek sasiadek andrea scorza carmelo scuro federico seri enrico silva janko slavic han wook song pedro vieira souza santos 
roberta spallone ronald summers màrton szemenyei kostiantyn torokhtii roman trisch ioan tudosa moise avoci ugwiri marjan urekar alberto vallan zacharias vangelatos ian veldman valentina venuti zsolt viharos jian wang samyong woo bernhard zagar emanuele zappa cristian zet lulu zhang giulio zuccaro introductory notes for the acta imeko special issue on the 23rd international symposium on measurement and control in robotics organized by tc17 acta imeko issn: 2221-870x september 2021, volume 10, number 3, 1 2 acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 1 introductory notes for the acta imeko special issue on the 23rd international symposium on measurement and control in robotics organised by tc17 bálint kiss1, istván harmati1 1 budapest university of technology and economics, műegyetem rkp. 3., 1111 budapest, hungary section: editorial citation: bálint kiss, istván harmati, introductory notes for the acta imeko special issue on the 23rd international symposium on measurement and control in robotics organized by tc17, acta imeko, vol. 10, no. 3, article 1, september 2021, identifier: imeko-acta-10 (2021)-03-01 editor: francesco lamonaca, university of calabria, italy received september 1, 2021; in final form september 27, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: bálint kiss, e-mail: bkiss@iit.bme.hu dear readers, measurement and control techniques are crucial for achieving reliable and safe autonomous features in robotics. recent developments in both fields are key enablers for the constantly widening use of robots in industrial, medical, military and service-oriented applications. 
faithful to its traditions, the 23rd edition of the international symposium on measurement and control in robotics (ismcr), organised by imeko technical committee 17, has provided a forum for the exchange of the latest research results and novel ideas in robotic technologies and applications, this time with a special emphasis on smart mobility. the symposium focused on various aspects of research, applications and trends in relation to robotics, advanced human–robot systems and applied technologies in the fields of robotics, telerobotics, autonomous vehicles and simulator platforms, as well as vr/ar and 3d modelling and simulation. the symposium was hosted by the budapest university of technology and economics in budapest, hungary. due to the covid-19 pandemic, the symposium was held in a hybrid format; authors outside hungary participated remotely, while those in hungary had the choice between online and in-person attendance at the event in accordance with the current regulations. a total of 49 submissions were received from 11 different countries. the review process, involving 106 external reviews, resulted in 40 accepted papers. a special technical session was devoted to the topic of robotised intervention in risky (chemical, biological, radiological and nuclear) environments. in accordance with the symposium’s main topics, three invited plenary lectures were given by specialists from the industry (kuka robotics, thyssenkrupp components technology) and academia. topics included the virtualised stability analysis of mechatronic systems, human–robot collaboration in industrial production and new standardisation trends in the navigation of industrial mobile robots. based on their technical and scientific value and the evaluation of the reviewers, the authors of ten contributions were invited to submit extended versions of their papers for this special issue. 
the paper entitled ‘vision-based reinforcement learning for lane-tracking control’, authored by kalapos et al., applies ai-based techniques to solve the lane-following and obstacle avoidance problem of autonomous vehicles, successfully implementing the results in the onboard computers of reduced-sized testbed vehicles. staying with autonomous vehicles, the paper ‘using coverage path planning methods for car park exploration’ by ádám et al. presents exploration methods for finding the optimal traversal of an unknown parking area in order to identify free parking spaces. reinforcement learning can also be used in the control of multi-agent robotic systems, as suggested by the paper by paczolay entitled ‘a2cm: a new multi-agent algorithm’, which presents an optimised and modified version of the so-called synchronous actor–critic algorithm. de cubber et al. address a similar optimisation problem in their contribution entitled ‘distributed coverage optimisation for a fleet of unmanned maritime systems’. the authors propose a methodology that optimises the coverage of a fleet of unmanned maritime agents, thereby maximising the chances of identifying potential threats. high-level autonomous functions and human–robot collaboration must be reliably and safely supported by platforms (robotic arms, drones, vehicles, etc.); hence, a second group of papers is devoted to the presentation of related results. szabó et al. report an identification method for friction parameters in their paper entitled ‘dynamic parameter identification method for robotic arms with static friction modelling’. the authors considered friction models that are linear in terms of the unknown parameters.
the paper entitled ‘uncertain estimation-based motion planning algorithms for mobile robots’, authored by gyenes et al., proposes the extension of two obstacle avoidance methods, the velocity obstacle technique and the artificial potential field method, to take into consideration the time-varying uncertainty of the measured data in relation to the localisation of static and dynamic obstacles. the autonomous delivery of items by drones requires suitable gripping devices and grasping strategies. in their paper entitled ‘a lightweight magnetic gripper for a delivery aerial vehicle: design and applications’, sutera et al. report the design of a low-power and lightweight magnetic gripper that takes into consideration the size and weight of the transported objects. füchter et al. studied the possibilities of using ar techniques in specific phases of pilot training. their paper entitled ‘aeronautic pilot training and augmented reality’ reports the design experience of a mobile/tablet application prototype that reproduces the flight panel of a cessna 150 aircraft. the paper entitled ‘human–robot collision predictor for flexible assembly’ by paniti et al. presents a prediction-based collision warning system, one that also takes communication delays into consideration, for a cobot scenario in which a robotic arm and human operators share a common workspace. another type of human–robot interaction is the user interface for controlling a robotic arm. the paper entitled ‘a 3d head pointer: a manipulation method that enables spatial position and posture for supernumerary robotic limbs’ by oh et al. addresses the specific problem of controlling a wearable robotic arm using face orientation and head motion. it should be noted that this contribution received the best paper award of the symposium.
we would like to express our gratitude to all the authors for their contributions and their participation at the ismcr2021 symposium despite the unprecedented conditions created by the pandemic. we must also thank prof. francesco lamonaca, editor-in-chief of acta imeko, and his team for their help and support during the editorial process of this special issue. it has been a great honour to serve as guest editors for this special issue, and we hope that the papers will inspire future research in the imeko tc17 area of expertise and beyond. bálint kiss, istván harmati guest editors introduction to the acta imeko special issue on the ‘imeko tc4 international conference on metrology for archaeology and cultural heritage’ acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 2 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1 introduction to the acta imeko special issue on the ‘imeko tc4 international conference on metrology for archaeology and cultural heritage’ fabio santaniello1, michele fedel2, annaluisa pedrotti1 1 labaaf laboratorio bagolini archeologia, archeometria, fotografia; ceasum – centro di alti studi umanistici, dipartimento di lettere e filosofia, università di trento, via tommaso gar n°14, 38122, trento (italy). 2 dipartimento di ingegneria industriale, università di trento, via sommarive n° 9, 38123 trento (italy). section: editorial citation: fabio santaniello, michele fedel, annaluisa pedrotti, introduction to the acta imeko special issue on the ‘imeko tc4 international conference on metrology for archaeology and cultural heritage’, acta imeko, vol. 11, no. 
1, article 2, march 2022, identifier: imeko-acta-11 (2022)-01-02 received march 30, 2022; in final form march 30, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: fabio santaniello, e-mail: fabio.santaniello@unitn.it dear readers, this special issue of acta imeko is the result of the 5th international conference on metrology for archaeology and cultural heritage. the conference was originally planned to be held in trento (italy), hosted by the department of humanities of trento university, on october 22-24, 2020, but due to the health emergency caused by covid-19, the organisers decided to hold the conference online. despite the unexpected situation, the conference was a great success, with 158 initial submissions, 126 accepted papers, 431 authors from 19 countries, 4 invited plenary speakers, 13 special sessions, 3 tutorial sessions and 11 patronages. out of the numerous presented papers, a selection has been made by the scientific committee to produce this issue. particular attention has been paid to the papers that inspire the exchange of knowledge and expertise between “human sciences” and “hard sciences”. after the review process, seventeen papers were accepted for publication, encompassing several research fields and methodological approaches. in more detail, several papers are devoted to material characterisation by means of different analytical techniques. six of them are focused on the analysis of archaeological artefacts. in particular, zerai gebremariam et al. analysed the pottery assemblage from the site of adulis (eritrea): colorimetric measurements provide new insights into the technology and manufacture of the ceramics produced during the roman age.
an egyptian wooden statuette stored in the museo egizio di torino has been analysed by vigorelli et al., who compared the results of a multi-analytical strategy based on both non-invasive and micro-invasive procedures to investigate the original artistic techniques and the ancient restorations of the artefact. the paper by stagno and capuano compares micro-mri, diffusion-nmr and portable nmr data to highlight the diagnostic features of roman archaeological woods. es sebar et al. analysed different metal tools used during the construction of the santa maria del fiore cupola in florence; pca performed on xrf data allowed different alloys to be identified, revealing new details about renaissance technology. tavella et al. used graphic elaboration software to calculate the capacity of several prehistoric vessels from northeastern italy, suggesting possible functions and/or cultural traditions related to the pottery. the article by mazzoccato et al. shows the significance of laser scanning microprofilometry for surface analysis and 3d printing in the study of archaeological pottery. passing from archaeology to artworks, sottili et al. present an interesting contribution based on the combination of ma-xrf and dr to study painting layers and colour composition. a second group of eight papers is more related to the study, management and valorisation of architectural heritage. baiocchi et al. proposed an approach for carrying out geomatic surveys with smartphones, useful for creating digital twins and virtual models. this approach has been tested on the intihuatana stone in machu picchu (peru), providing intriguing results and possibilities. brienza and fornaciari combined gis and photogrammetry to study the masonry of the bagni di eliogabalo (rome). their detailed data offer a wide-ranging reconstructive hypothesis, making it possible to point out roman construction techniques and expedients.
the history and the architectural transformations of the bridge of canosa di puglia (italy) have been analysed through archival documentation and field surveys by germanò, who finally hypothesises the original configuration of the bridge. doria et al. present the results of a multi-step program related to the study and the conservation of the castiglioni chapel in pavia, focusing on the digital survey and the creation of an immersive 3d model with different levels of analysis and visualisation. the paper by antolini focuses on the development of a wide approach for the reconstruction of ephemeral apparatuses, which has been applied to the case study of the funeral apparatus realised in rome for cardinal mazarin. several banded vault systems in turin baroque atria have been analysed by natta. the author proposes an integrated approach involving metric survey by laser scanning and digital drawing in order to investigate the original constructive methodologies and the changes that have occurred over time. moving to more recent structures, the paper by gabellone shows an interesting 3d reconstruction of an underground oil mill in the town of gallipoli, which has been used to develop shared virtual visits during the covid-19 emergency. pirinu et al. present the results of an extended survey activity related to the military architectures built in sardinia during the second world war. the collected data make it possible to analyse the historical construction techniques as well as to recover a distinctive heritage that is part of the contemporary landscape. bertola discusses a methodology that, starting from archival documentation and using bim, makes it possible to reproduce a 3d model of due case a capri by aldo morbelli. finally, the article by weththimuni et al.
deals with the preservation of cultural heritage buildings by using zro2-doped zno-pdms nanocomposites as protective coatings for the stone materials, providing interesting future perspectives. the contributions of this special issue provide an overview of the significant impact achieved by a more intense synergy between metrology and human sciences. moreover, given the constraints imposed by the international situation since 2020, this issue stresses the importance of promoting broad accessibility of cultural heritage through the virtualisation and digitalisation of archaeological artefacts, human landscapes, historical documents and so on. to conclude, we hope that this special issue catches the readers' attention thanks to its interdisciplinarity. indeed, we strongly believe that the intermingling of competencies is the way to look beyond contemporary research and to sketch both the opportunities and the future path of cultural heritage. we hope you will find it an exciting read! fabio santaniello, michele fedel, annaluisa pedrotti guest editors characterization of laser doppler vibrometers using acousto-optic modulators acta imeko | www.imeko.org december 2020 | volume 9 | number 5 | 361 acta imeko issn: 2221-870x december 2020, volume 9, number 5, 361–364 characterization of laser doppler vibrometers using acousto-optic modulators michael gaitan1, jon geist2, benjamin j. reschovsky3, ako chijioke4 1 nist, gaithersburg md, usa, michael.gaitan@nist.gov 2 nist, gaithersburg md, usa, jon.geist@nist.gov 3 nist, gaithersburg md, usa, benjamin.reschovsky@nist.gov 4 nist, gaithersburg md, usa, akobuije.chijioke@nist.gov abstract: we report on a new approach to characterize the performance of a laser doppler vibrometer (ldv). the method uses two acousto-optic modulators (aoms) to frequency shift the light from an ldv by a known quantity to create a synthetic velocity shift that is traceable to a frequency reference.
results are presented for discrete velocity shifts and for sinusoidal velocity shifts that would be equivalent to what would be observed in an ideal accelerometer vibration calibration. the method also enables the user to sweep the synthetic vibration excitation frequency to characterize the bandwidth of an ldv together with its associated electronics. keywords: laser vibrometer; vibration calibration; acousto-optic modulator; 1. introduction following iso standard 16063-41 method for the calibration of vibration and shock transducers – part 41 calibration of laser vibrometers [1], laser doppler vibrometers (ldvs) are calibrated by a comparison-type measurement to a laser homodyne interferometer that is defined as the primary standard. not covered by iso 16063-41, but for the case where all the components of the ldv system and their associated uncertainties are known, methods can be employed for the direct determination of the measurement uncertainty of the ldv [2], [3] or by using a combination of a heterodyne with a homodyne-quadrature configuration [4]. the technology for manufacturing commercial ldv systems has matured, as has their use in commercially available primary vibration calibration systems. these systems require calibration by the manufacturer at a periodic interval, typically one year, and are traceable to the système international d'unités (si) through the manufacturer. from the end user perspective, the commercially manufactured ldv system is like a “black box”, meaning that the design and internal components of the ldv are not known in detail by the user. therefore, following an uncertainty determination approach described in [2] or [3] is not possible for such commercial black-box systems, especially if their internal workings are considered proprietary by the manufacturer.
the only possibility provided by the standard for calibrating such commercial ldv systems is therefore to follow iso 16063-41 and compare them to a primary heterodyne system, resulting in the ldv system being considered a secondary system. a challenge therefore remains in the adoption of cost-effective commercial ldv systems by national measurement institutes (nmis) that are responsible for direct determination of uncertainty. towards this end, we have recently reported on the calibration of laser heterodyne velocimeters using shock excitations and total distance travelled [5], [6]. one advantage of that method is that it characterizes the entire measurement system under the same conditions that would be used in an accelerometer shock calibration and could also be included as part of the accelerometer shock calibration. however, a drawback of the method is that it does not characterize the frequency response and bandwidth of the ldv. the bandwidth of the excitation must not exceed the bandwidth of the ldv in order to produce accurate accelerometer shock calibrations. this drawback motivated us to develop a new method to characterize the bandwidth of the ldv system and resulted in the method that we present in this report. 2. experimental design figure 1 shows a diagram and photograph of the acousto-optic modulator (aom)-based ldv characterization system that we have developed. the laser light is first collimated using 300 mm and 30 mm lenses to create a beam diameter that is compatible with the aperture of the aoms. the beam passes through the 1st aom, where it is downshifted in frequency by f. next it passes through the 2nd aom, where it is upshifted by a frequency f + δ.
the beam is then reflected back along its path with a mirror, doubling the effect of the aoms, and delivering light to the ldv that is shifted in frequency by: 2δ = 2((f + δ) − f). (1) the reason why we use two aoms to produce the frequency shift is that a single aom cannot generate frequency shifts of order 1 mhz or less, as required. the velocity v reported by the ldv is related to the frequency shift 2δ and the wavelength of the laser by the doppler equation: v = ½ λ (2δ) = λ δ, (2) where the laser wavelength λ = 632.81 nm [7]. 3. results the experimental results that we report were obtained using [8] a commercially available ldv system that includes a polytec ofv-503 sensor head, an ofv-5000 vibrometer controller, and a polytec data management system that were interfaced with the design shown in figure 1. the aoms (brimrose tef-110-50-633) were driven by two national instruments pxie-5650 sinusoidal radio frequency (rf) signal generators. an agilent 3458a digital multimeter was also used to measure root mean squared (rms) voltage for sinusoidal excitations. the signals from the rf signal generators were amplified using mini-circuits zhl-2010+ rf amplifiers connected to each of the aoms. the base frequency f for our results was selected to be 110 mhz, corresponding to the center frequency of the aoms. the zero frequency (dc) and transient velocity readings that we report were obtained using the polytec data management system. the rms readings that we report were obtained using the rms multimeter. 3.1. results for fixed frequency shift figure 2 shows the relationship between the frequency shift δ, the reported velocity from the ldv, and the calculated velocity using the doppler equation (2). these results were obtained with the vibrometer controller set to vd-09 with a corresponding amplification factor of 0.5 m/s/v. figure 1: diagram (upper) and photograph (lower) of the acousto-optic modulator (aom)-based ldv characterization system.
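equations (1) and (2) are simple enough to check numerically. the following is our own illustrative sketch, not code from the paper; the function names are hypothetical:

```python
# Synthetic velocity from the two-AOM frequency shift, per equations (1) and (2).
# The beam is downshifted by f, upshifted by f + delta, and reflected back,
# doubling the net optical frequency shift to 2*delta.

LAMBDA_HENE = 632.81e-9  # He-Ne laser wavelength in metres [7]

def net_frequency_shift(f_hz, delta_hz):
    """Net optical frequency shift after the double pass: 2*((f + delta) - f)."""
    return 2.0 * ((f_hz + delta_hz) - f_hz)

def synthetic_velocity(delta_hz, f_hz=110e6):
    """Velocity an ideal LDV reports: v = (lambda/2) * (2*delta) = lambda * delta."""
    return 0.5 * LAMBDA_HENE * net_frequency_shift(f_hz, delta_hz)

# a 1 MHz shift delta corresponds to a synthetic velocity of about 0.633 m/s
print(synthetic_velocity(1e6))
```

note that the base frequency f cancels in equation (1), which is why only the offset δ between the two aom drive frequencies matters.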
figure 2: plot of the velocity reported by the ldv as a function of frequency shift δ in blue and the corresponding calculated values using the doppler equation (2). the resulting dc voltage was sampled 2048 times at 204800 samples/s with no filtering using the polytec data management system. the data were averaged, and the standard deviation was determined. the ldv exhibited a 0.0012 m/s offset when directed onto a non-moving surface without the aoms in the beam path, as well as when the frequency shift δ was set to zero. this offset was subtracted from the measured results depicted in the figure. the data in figure 2 are replotted in figure 3 in terms of the percent difference between the velocity reported by the ldv (with the offset subtracted) and the calculated velocity from the doppler shift equation (2). the data show a maximum percent difference of ±0.04 % over the frequency range that was tested. figure 3: percent difference between the velocity reported by the ldv (with the offset subtracted) and the calculated velocity using the doppler shift equation (2). the manufacturer of the ldv reports a 1 % uncertainty in their instrument specifications, while the frequency shift that we can produce with the signal generators is orders of magnitude smaller in uncertainty. 3.2. results for sinusoidal excitation in this experiment the 2nd aom was excited using the national instruments pxie-5650 rf signal generator with sinusoidal frequency modulation to create a synthesized vibration measurement. the goal of this experiment was to characterize the bandwidth of the ldv system. in this experiment, the root mean squared (rms) voltage from the analog output of the polytec ofv-5000 vibrometer controller was measured using the rms multimeter and converted to rms velocity using the vd-09 gain factor of 0.5 m/s/v.
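the conversion from rms output voltage to rms velocity, and the rms velocity expected for sinusoidal frequency modulation of a given peak deviation, can be sketched as below. this is our own reconstruction of the stated conversion, with hypothetical names; only the vd-09 gain of 0.5 m/s/v and the wavelength come from the paper:

```python
import math

LAMBDA_HENE = 632.81e-9  # He-Ne laser wavelength in metres
VD09_GAIN = 0.5          # m/s per volt, analog output scaling of the controller

def rms_velocity_from_voltage(v_rms_volts):
    """Convert the controller's RMS analog output voltage to RMS velocity."""
    return VD09_GAIN * v_rms_volts

def expected_rms_velocity(delta_peak_hz):
    """Expected RMS velocity for sinusoidal frequency modulation with peak
    deviation delta_peak: v(t) = lambda * delta_peak * sin(2*pi*f_mod*t)."""
    return LAMBDA_HENE * delta_peak_hz / math.sqrt(2.0)

# within the flat band of the controller, the measured RMS velocity should
# track this expected value for the chosen peak deviation
print(expected_rms_velocity(1e6))
```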
the sinusoidal modulation at the 110 mhz base frequency was swept from 100 hz to 3 mhz. figure 4 shows that the ldv vibrometer controller has a uniform response up to 1 mhz and drops off beyond that frequency. figure 4: frequency response of the ldv using sinusoidal frequency modulation of the 2nd aom to create a synthesized vibration measurement condition. 3.3. results for velocity step function excitation in this experiment the 2nd aom was excited using an hp 83650b 10 mhz to 50 ghz rf swept signal generator to provide a capability to frequency modulate an arbitrary analog signal. an agilent 33250a arbitrary waveform generator was used to produce a 1 hz square wave alternating from 0 mv to 300 mv for frequency modulation to simulate a step function for synthesized velocity. the analog velocity signal from the vibrometer controller was digitized using the polytec data management system set at its maximum sampling rate of 204800 samples/s. the resulting response shown in figure 5 includes the effects of the ldv as well as the digital acquisition system, which would be expected to have a maximum bandwidth of 102400 hz. 4. summary our results show that this approach is immediately useful as a tool for characterizing the dc, sinusoidal steady state, and transient response of an ldv as a whole, together with its data acquisition and control electronics and amplifiers. the dc response we reported exhibited a maximum of ±0.04 % difference between the measured value and the calculated value based on the doppler equation, which is in good agreement with what we had reported earlier using shock excitations and total distance travelled [5], and is well within the 1 % accuracy specified by the manufacturer. the ldv system bandwidth of 1 mhz determined by sinusoidal excitations is in good agreement with what the manufacturer specifies.
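the dc comparison summarised above (subtract the zero-velocity offset, then compare with the doppler value) can be reproduced in a few lines. this is an illustrative sketch with hypothetical names; the 0.0012 m/s offset and the ±0.04 % band are the values reported in section 3.1, while the example reading is invented for illustration:

```python
LAMBDA_HENE = 632.81e-9  # He-Ne laser wavelength in metres
LDV_OFFSET = 0.0012      # m/s, zero-velocity offset reported for this LDV

def doppler_velocity(delta_hz):
    """Velocity predicted by the Doppler equation (2): v = lambda * delta."""
    return LAMBDA_HENE * delta_hz

def percent_difference(v_reported, delta_hz):
    """Percent difference between the offset-corrected LDV reading and the
    velocity calculated from the applied frequency shift."""
    v_calc = doppler_velocity(delta_hz)
    return 100.0 * ((v_reported - LDV_OFFSET) - v_calc) / v_calc

# a hypothetical reading of 0.63420 m/s at delta = 1 MHz differs from the
# calculated 0.63281 m/s by about +0.03 % once the offset is subtracted,
# i.e. within the reported +/- 0.04 % band
print(percent_difference(0.63420, 1e6))
```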
lastly, the velocity step function experiment serves as an example that it is possible to create complex velocity profiles to test the response of the ldv together with its data acquisition, control electronics, and amplifiers. our future work is focused on further improvements to the system design and carrying out a full uncertainty analysis. one improvement that we envision in the design is to measure the frequency shift δ with a photodiode rather than reading it from the signal generators driving the aoms. this could capture any offsets or fluctuations between the signal generators and the light entering the ldv, e.g. due to refractive index fluctuation. we anticipate that after further investigation this method can be used as a tool for primary calibration of laser doppler vibrometers. 5. references [1] iso standard 16063-41 method for the calibration of vibration and shock transducers – part 41 calibration of laser vibrometers, iso 16063-41:2011(e). [2] g. siegmund, “sources of measurement error in laser doppler vibrometers and proposal for unified specifications”, proc. of spie vol. 7098, 70980y-1, 2008. [3] n. vlajic, a. chijioke, “traceable dynamic calibration of force transducers by primary means”, metrologia 53, s136–s148, 2016. [4] t. bruns, f. blume, a. täubner, “laser vibrometer calibration at high frequencies using conventional calibration equipment”, xix imeko world congress, lisbon, portugal, september 6-11, 2009. [5] m. gaitan, m. afridi, j. geist, “on the calibration of laser heterodyne velocimeters using shock excitations and total distance travelled”, imeko xxii world congress, belfast, 2018. [6] m. afridi, j. geist, m. gaitan, “primary calibration of the low frequency response of a laser heterodyne velocimeter used in a pendulum-based shock excitation system by si traceable distance measurement”, j res natl inst stan, in press.
[7] iso standard 16063-11 methods for the calibration of vibration and shock transducers – part 11 primary vibration calibration by laser interferometry. [8] certain commercial equipment and instruments are identified in this article in order to describe the experimental procedure adequately. such identification is not intended to imply recommendation or endorsement by the national institute of standards and technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose. figure 5: response of the ldv system to a synthesized velocity step produced by frequency modulation of a 1 hz square wave on the 2nd aom. the time depicted on the x-axis has been magnified to observe the transient of the velocity step. editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage acta imeko issn: 2221-870x march 2021, volume 10, number 1, 1 4 acta imeko | www.imeko.org march 2021 | volume 10 | number 1 | 1 editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage eulalia balestrieri1, carlo carobbi2, ioan tudosa1 1 department of engineering, university of sannio, benevento, italy 2 department of information engineering, university of florence, firenze, italy section: editorial citation: eulalia balestrieri, carlo carobbi, ioan tudosa, editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage, acta imeko, vol. 10, no.
1, article 1, March 2021, identifier: IMEKO-ACTA-10 (2021)-01-01
Editor: Francesco Lamonaca, University of Calabria, Italy
Received March 19, 2021; in final form March 19, 2021; published March 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding authors: Eulalia Balestrieri, e-mail: balestrieri@unisannio.it; Carlo Carobbi, e-mail: carlo.carobbi@unifi.it; Ioan Tudosa, e-mail: itudosa@unisannio.it

Dear Reader,
This issue of ACTA IMEKO is dedicated to papers selected from those presented at the 2019 IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage (MetroArchaeo), held in Florence in December 2019. Measurements are essential to accessing knowledge in every field of investigation, from industry to quality of life, science and innovation. As a consequence, metrology plays a crucial role in archaeology and cultural heritage, addressing issues related to the collection, interpretation and validation of data, to the different physical, chemical, mechanical or electronic methodologies used to collect them, and to the associated instruments. MetroArchaeo brings together researchers and operators in the enhancement, characterisation and preservation of archaeological and cultural heritage with the main objective of discussing the production, interpretation and reliability of measurements and data. The conference is conceived to foster exchanges of ideas and information, create collaborative networks and update innovations on 'measurements' suitable for cultural heritage for archaeologists, conservators and scientists.
Thirty-three selected papers from MetroArchaeo 2019 are presented in three sections (Part 1, Part 2 and Part 3).

Part 1 – Section editor: Eulalia Balestrieri

The first section, edited by Eulalia Balestrieri, includes the following ten scientific contributions. The first contribution, by Valentino Sangiorgio et al., 'Historical masonry churches diagnosis supported by an analytic-hierarchy-process-based decision support system', presents a new procedure based on the analytic hierarchy process (AHP) aimed at carrying out rapid on-site measurements and diagnostics of masonry churches through a series of condition assessment indexes. The proposed procedure has been successfully validated by comparison with a standard diagnostic workflow. In the second paper, by Sveva Longo et al., 'Clinical computed tomography and surface-enhanced Raman scattering characterisation of ancient pigments', a systematic and complete chemical-physical characterisation of painted pigments has been carried out using multi-slice X-ray computed tomography (MSCT) and surface-enhanced Raman scattering (SERS) techniques. Thanks to the proposed approach, the identification and characterisation of both the inorganic and organic materials present on the wooden tablets have been carried out. The third contribution, by Simone Tiberti and Gabriele Milani, 'Creating a finite element mesh of non-periodic masonry from the measurement of its geometrical characteristics: a novel automated procedure', illustrates an automated procedure for the generation of a finite element (FE) mesh directly from the rasterised sketch of a generic masonry element, which is particularly suitable for the complex and irregular (non-periodic) masonry bonds that can be observed in heritage buildings or found in archaeological sites. Two procedures are set out for the creation of 2D and 3D FE meshes.
In the fourth paper, by Laura Guidorzi et al., 'Age determination and authentication of ceramics: advancements in the thermoluminescence dating laboratory in Torino (Italy)', the thermoluminescence (TL) laboratory developed at the Physics Department of the University of Torino is presented. The laboratory was set up in collaboration with Tecnart S.r.l. and is also currently operating within the INFN (National Institute of Nuclear Physics) CHNet network. Some examples of dating and authentication results obtained at the laboratory on archaeological sites and artworks are also discussed and compared, when possible, with radiocarbon dating. The fifth contribution, by Marialuisa Mongelli et al., 'Comparison and integration of techniques for the study and valorisation of the Corsini throne in Corsini Gallery in Roma', presents the investigation of an integrated approach involving non-invasive technologies, photogrammetry and structured light, during the development of the 'WeACT3' project (Acting Together: Technology for Art, Culture, Tourism and Territory), jointly signed by the Civita Association and the national Barberini and Corsini galleries. Such technologies have been used to build the 3D model of the Corsini throne, preserved at the Corsini Gallery in Rome. The sixth work, by Carmelo Scuro et al., 'Study of an ancient earthquake-proof construction technique monitoring via an innovative structural health monitoring system', illustrates an innovative method to monitor and obtain in real time the mechanical properties of an anti-seismic construction widespread in southern Calabria and patented by Pasquale Frezza, while also minimising measurement uncertainty.
The considered type of anti-seismic construction consists of masonry walls built with bricks and fictile tubules, arranged in a staggered and alternating manner, all contained in a timber frame. The seventh contribution, by Renato S. Olivito et al., 'Inventory and monitoring of historical cultural heritage buildings on a territorial scale: a preliminary study of structural health monitoring based on the CARTIS approach', presents a preliminary study aimed at defining an integrated methodology for the inventory, monitoring, transmission and data management of heritage buildings to provide important information about their structural integrity. The eighth paper, by Maria Federica Caso et al., 'Improvement of ENEA laser-induced fluorescence prototypes: an intercalibration between a hyperspectral and a multispectral scanning system', presents the hyperspectral and multispectral laser-induced fluorescence (LIF) scanning systems developed at the ENEA Diagnostics and Metrology Laboratory and, in particular, their intercalibration, along with the data analysis of calibration samples and software to automatically correct imaging data. In the ninth contribution, by Alejandro Roda-Buch et al., 'Fault detection and diagnosis of historical vehicle engines using acoustic emission techniques', the results of the first phase of the ACUME_HV (Acoustic Emission Monitoring of Historical Vehicles) project, focused on the development of a protocol for the use of acoustic emission (AE) during cold tests, are presented. The project represents the first use of AE as a non-invasive technique for the diagnosis of historical vehicles, providing an objective, human-independent method.
The tenth and last contribution of Part 1, by Sandro Parrinello and Raffaella De Marco, 'Digital surveying and 3D modelling structural shape pipelines for instability monitoring in historical buildings: a strategy of versatile mesh models for ruined and endangered heritage', illustrates the application of a fast and reliable structural documentation pipeline to historical built heritage, as in the case study of the Church of the Annunciation in Pokcha (Russia), by reviewing the declination of integrated products of 3D survey into reality-based models.

Part 2 – Section editor: Carlo Carobbi

The second section, edited by Carlo Carobbi, includes the following ten scientific contributions. The first contribution, by Valeria Croce et al., 'From survey to semantic representation for cultural heritage: the 3D modelling of recurring architectural elements', illustrates an approach to be followed in the transition from 3D survey information, derived from laser scanner and photogrammetric techniques, to the creation of semantically enriched 3D models. The proposed approach is based on the recognition – segmentation and classification – of elements on the original raw point cloud, and on the manual mapping of NURBS elements onto it. In the second work, by Francesco Boschin et al., 'Geometric morphometrics reveal relationship between cut-mark morphology and cutting tools', two groups of slicing cut-mark cross-sections were experimentally produced. The resulting sets of striae show different depths and different cross-sectional shapes. It turns out that the difference in shape between the two groups of striations is probably a function of the way in which the tool penetrated the bone.
The third contribution, by Gabriella Caroti et al., 'The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis', elaborates on the methodology used for the integration of a metric 3D model with information present in archive surveys of lost architectural volumes. The presented methodology frames historical plans, representing the survey object, in the reference system of UAV surveys for an open-source GIS environment. In the fourth work, by Yufan Ding et al., 'Provenance study of the limestone used in the construction and restoration of the Batalha Monastery (Portugal)', stone samples were investigated through different methods (energy-dispersive X-ray fluorescence spectroscopy, powder X-ray diffractometry and thermogravimetric analysis), obtaining indications of the source of samples collected from different parts of the monastery. The fifth contribution, by Leila Es Sebar et al., 'Raman investigation of corrosion products on Roman copper-based artefacts', illustrates a case study related to the characterisation of corrosion products present on recently excavated artefacts. Results from the Raman spectroscopy investigation can help in assessing the conservation state of the artefacts and in defining the correct restoration strategy. In the sixth work, by Elisabetta Di Francia et al., 'Characterisation of corrosion products on copper-based artefacts: potential of MA-XRF measurements', a novel portable MA-XRF scanner prototype has been tested on artificially corroded copper samples to assess its analytical capabilities on corroded metals, yielding information on the spatial distribution of the corrosion products grown on the metal's surface.
In the seventh contribution, by Giuseppe Schirripa Spagnolo et al., 'Fringe-projection profilometry for recovering 2.5D shape of ancient coins', a surface profile measurement system for small objects of cultural heritage is illustrated, where it is important not only to detect the shape with good accuracy but also to capture and archive the signs due to ageing. The potential of the proposed scheme for recovering the 2.5D shape of cultural heritage is demonstrated. In the eighth contribution, by Anna Maria Gueli et al., 'Modelling and simulations for signal loss evaluation during sampling phase for thermoluminescence authenticity tests', the percentage of intensity signal loss in thermoluminescence emission, due to the local temperature increase caused by drilling, is investigated. The optimal parameters that should be used during the sampling phase are identified. In the ninth work, by Zacharias Vangelatos et al., 'Finite element analysis of the Parthenon marble block–steel clamp system response under acceleration', finite element analysis is employed to provide a tool for the assessment of the conservation potential of the marble blocks in parts of the monument that require specific attention. Simulation results highlight the importance of intrinsic stresses, the existence of which may lead to fracture of the marble blocks under otherwise harmless loading conditions. In the tenth and last contribution of Part 2, by Andrea Zanobini, 'Metrological characterisation of a textile temperature sensor in archaeology', the study of a new-generation textile temperature sensor in two different heated ovens, evaluating temperature alone as well as temperature and humidity together, is presented. The results show many metrological characteristics proving that the sensor is a resistance temperature detector.
Part 3 – Section editor: Ioan Tudosa

The third section, edited by Ioan Tudosa, includes the following thirteen scientific contributions. The first contribution, by Sebastiano D'Amico et al., 'A combined 3D surveying, XRF and Raman in-situ investigation of The Conversion of St Paul painting (Mdina, Malta) by Mattia Preti', presents the results of three different approaches applied to the newly restored titular painting The Conversion of St Paul, the main altarpiece in the cathedral of Mdina in Malta. The study was aimed at showing the potential of the combined use of 2D/3D photogrammetric surveys and spectroscopy (XRF and Raman techniques) in order to obtain, on the one hand, a reconstruction and, on the other, a characterisation at different spatial domains. In the second work, by Luisa Caneve et al., 'Non-invasive diagnostic investigation at the Bishop's Palace of Frascati: an integrated approach', a novel methodology aimed at detecting and locating materials from previous restoration actions and at remotely monitoring the evolution of degradation processes is proposed. The possibility of preventive monitoring through the application of the presented approach, in order to reduce possible induced damage, is highlighted. In the third contribution, by Sofia Ceccarelli et al., 'Thermographic and reflectographic imaging investigations on Baroque paintings preserved at the Chigi Palace in Ariccia', two different mid-infrared imaging techniques, operating in the 3–5 µm spectral range, are applied to the study of three paintings on canvas, dating back to the XVII century, preserved at the Chigi Palace in Ariccia (Italy). The presented results allow the evaluation of the conservation status of the support and the detection of graphical and pictorial features hidden beneath the surface layer.
In the fourth work, by Daniele Moro et al., 'Mineral diagnostics: SEM-EDS Monte Carlo strategy for optimised measurements of ultrathin fragments in cultural heritage studies', a detailed study of the effects related to the micro- and nanometric sizes of glass and gold alloy fragments on SEM-EDS microanalysis is presented. Monte Carlo simulations of different kinds of elongated glass fragments with square section, from 0.1 to 10 µm thick, and of some gold alloys showed a strong influence of the fragment sizes and operational conditions (beam energy, detector position, etc.). This work can be used to devise an appropriate and optimised measurement strategy. The fifth contribution, by Luisa Spairani, 'Measure by measure, they touched heaven', illustrates a case study in photography of Leavitt's law. In the sixth work, by Giacomo Fiocco et al., 'Chemometrics tools for investigating complex synchrotron radiation FTIR micro-spectra: focus on historical bowed musical instruments', a method is presented describing how synchrotron radiation (SR) micro-FTIR spectroscopy in reflection geometry and chemometrics were combined to investigate six cross-sectioned micro-samples detached from four bowed string instruments. In the seventh contribution, by M. Faifer et al., 'Laboratory measurement system for pre-corroded sensors devoted to metallic artwork monitoring', a measurement system for the development and testing of sensors for atmospheric corrosivity monitoring is presented. The developed system allows the monitoring of metal corrosion greater than 3 nm in the temperature range from 23 °C to 39 °C. The performed analysis shows that the system is an efficient laboratory setup for the development and characterisation of sensors for metal corrosion monitoring.
The eighth contribution, by Maria Legut-Pintal et al., 'Methodological issues of metrological analysis of planned medieval towns and villages', proposes the use of the cosine quantogram, which has rarely been applied to the study of urban layouts, for the identification of units of measurement in medieval regular towns. In the ninth work, by Roberta Spallone et al., 'Digital strategies for the valorisation of archival heritage', a study is presented that aims at creating a sort of digital model-museum in which to insert all the historical information useful for telling the story of the evolution of the artefact over time, allowing users, through their personal devices, to enjoy interactive and immersive experiences via virtual and augmented reality. In the tenth paper, by Tilde de Caro et al., 'Application of µ-Raman spectroscopy to the study of the corrosion products of archaeological coins', a case study of the corrosion products formed on archaeological bronze artefacts excavated in Tharros (Sardinia, Italy) is presented. The experimental findings allow better knowledge to be acquired, through micro-Raman spectroscopy, of the environmental factors that may cause the degradation of archaeological bronzes in soil. The eleventh contribution, by Leila Es Sebar et al., 'In-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors', presents a long-term in-situ monitoring campaign of contemporary bronze statuary exposed outdoors. The authors demonstrate the importance of using portable instruments that offer the possibility of performing in-situ measurements, thus avoiding any sampling and assessing the degradation of the material directly in contact with the environment to which the artwork is constantly exposed.
The twelfth contribution, by Máté Sepsi et al., 'Non-destructive pole-figure measurements on workshop-made silver reference models of archaic objects', reports on the non-destructive pole figure method as an effective way to distinguish between metal objects formed in different ways. The specific forming modes result in specific pole figures, and therefore, by producing and examining a sufficient number of reference materials, the mode of production of archaic objects can also be reconstructed. The authors state that the pole figures obtained by the robot diffractometer are completely identical to those of the previously validated G3R diffractometer. The thirteenth and last contribution of Part 3, by Maria Grazia D'Urso, 'A combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults', presents how geometric and geo-referenced 3D models are obtained by processing laser-scanning measurements. A model built on a coherent geometric basis, which contemplates the methodological complexities of the detected objects, is reported. A semi-automated method that allows one to switch from a point cloud to an advanced three-dimensional model, able to contain all the geometrical and mechanical characteristics of the built object, is proposed in the paper.

It was a great honour for us to act as guest editors for this issue of ACTA IMEKO, a high-profile scientific journal devoted to the enhancement of the academic activities of IMEKO and a wider dissemination of scientific output from IMEKO TC events. We would like to sincerely thank all the authors for their valuable contributions, and we hope the readers will be inspired by the themes and proposals that have been selected and included in this special section related to innovations in metrology for archaeological and cultural heritage.
Eulalia Balestrieri, Carlo Carobbi, Ioan Tudosa
Guest Editors

A lightweight magnetic gripper for an aerial delivery vehicle: design and applications
ACTA IMEKO | www.imeko.org | ISSN: 2221-870X | September 2021 | Volume 10 | Number 3 | 61-65

Giuseppe Sutera1, Dario Calogero Guastella1, Giovanni Muscato1,2
1 Dipartimento di Ingegneria Elettrica Elettronica e Informatica, University of Catania, Catania, Italy
2 Istituto di Calcolo e Reti ad Alte Prestazioni, Consiglio Nazionale delle Ricerche, ICAR-CNR, Palermo, Italy

Section: Research Paper
Keywords: low-power gripper; pick and place; rapid prototyping; permanent magnets; mobile robot application
Citation: Giuseppe Sutera, Dario Calogero Guastella, Giovanni Muscato, A lightweight magnetic gripper for a delivery aerial vehicle: design and applications, ACTA IMEKO, vol. 10, no. 3, article 10, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-10
Section editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary
Received January 15, 2021; in final form September 6, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research was partially funded by the 'Safe and Smart Farming with Artificial Intelligence and Robotics – Programma Ricerca di Ateneo UNICT 2020-22 Linea 2' project.
Corresponding author: Giuseppe Sutera, e-mail: giuseppe.sutera@unict.it

1. Introduction
Unmanned aerial vehicles, or drones, represent the future of many evolving sectors. Autonomous delivery is one of these, and this sector is developing rapidly thanks to new platforms that allow the transportation of weights heavier than that of the platforms themselves.
These vehicles are often used to transport wares to areas that are difficult to reach quickly using standard means of transport. In order to make the entire delivery process autonomous, it is necessary to use an easy coupling system to attach the package to the drone. In the literature, different techniques have been described, which will be analysed in detail in this paper. The snap-fit method requires a high level of accuracy in positioning, as there are a number of fixing pins that must be inserted perfectly into the relevant holes. Adhesion is an excellent solution for picking up and placing objects with metal parts. For this reason, electro-adhesion [1] and electromagnets [2]-[4] have been analysed, which represent a valid choice in terms of ease of coupling, but they require a high operating current to create a magnetic field. In terms of energy consumption, this can cause a reduction in flight time. In the approach of [3], the use of an electro-permanent magnet reduces energy absorption, since it requires a current only in the release phase. However, the typical shape and weight of these devices require the design of bulky housings that do not allow a plate to be used that is suitable for the intended purpose. For the present study, a plate was developed in accordance with the specifications of the 'Mohamed Bin Zayed International Robotics Challenge 2020' (MBZIRC, website: https://www.mbzirc.com/). One of the challenges in this competition was to create a drone capable of lifting different types of bricks off the

Abstract: In recent years, drones have become widely used in many fields. Their vertical flight capability makes these systems suitable for carrying out a variety of tasks. In this paper, the delivery service they provide is analysed. The delivery of goods quickly and to remote areas is a relevant application scenario; however, the systems proposed in the literature use electromagnets, which affect the duration of the flight.
In addition, these devices are heavy and suffer from high energy consumption, which reduces the maximum transportable payload. This study proposes a new lightweight magnetic plate composed of permanent magnets, capable of collecting and positioning any object as long as it has a ferromagnetic surface on top. This plate was developed for the Mohamed Bin Zayed International Robotics Challenge 2020, an international robotics competition for multi-robot systems. Challenge two of this competition required a drone capable of picking up different types of bricks and assembling them to build a wall according to an assigned pattern. The bricks were of different colours and sizes, with weights ranging from 1 to 2 kg. In light of this, it was concluded that weight was the most relevant specification to consider in drone design.

ground and arranging them to build a wall according to an assigned pattern. For this purpose, a magnetic plate was created using passive magnets inserted inside a 3D-printed structure. The release system was incorporated into the plate and consisted of several levers operated by a single servo control. This setup allows the detachment of ferromagnetic objects without high energy consumption and requires a power supply only in the release phase, in order to activate the servomotor. Furthermore, thanks to its design, the developed gripper allows the operators to optimise weight and energy consumption while still guaranteeing a lifting capacity exceeding 2 kg.

2. Mechanical design
The prototype in this study had a flat profile and an attractive force comparable to a commercial device, despite its lightness. The supporting structure was built with rapid prototyping techniques using the Zortrax M200 printer, which is equipped with an extruder with a 0.4 mm nozzle and a layer resolution of 90–390 microns.
Of the materials available, Z-ULTRAT was selected for this project, a blend of Zortrax filaments created to enhance the properties of acrylonitrile butadiene styrene (ABS) in terms of durability. The entire printing process took about 20 hours, to which 30 minutes were added to integrate the servomotor and the other mechanical parts (such as pins, ball bearings and screws). The considerable difference between printing and assembly times prompted a search for a solution to speed up the printing process. It was therefore decided to divide the prototype into several smaller components. Although this solution increased the overall printing time by about 25 % (due to the increased number of prints and, hence, of printer initialisations), each component had a printing time ranging from 30 to 120 minutes. The resulting modular structure of the prototype made it quick to repair. As will be explained later, the bottom part of the plate was the area most exposed to wear, as it was in contact with the ground and the ferrous coupling surfaces. The dimensions of the realised prototype fulfilled the specific requirements of the MBZIRC 2020 competition, where the magnetic plate was tested. The setup used for the competition had a length of 15 cm, a width of 10 cm and a height of 5 cm, and it weighed 195 g. However, the modularity of the prototype made it possible to reduce the contact surface to 9 × 9 cm so that it was suitable for smaller drones while still maintaining the same supporting structure. The plate consisted of four pieces, two for each of the two layers that composed it. The first layer was 0.6 cm high and consisted of two smaller pieces connected with a dovetail joint (Figure 1). During the design phase, a set of beams was designed with a two-fold purpose: 1) increasing the rigidity to compensate for the thinness and 2) creating the slots in which to insert the magnets and the supports for the release levers (Figure 2).
Two commercial magnets were tested with the same width (10 mm) and length (20 mm) but different heights (2 and 5 mm) and, hence, different attraction forces (Table 1). The design choice of creating a covering layer in the lower part made the external surface smooth and free of friction. The magnets were securely fixed in custom housings that prevented them from coming out once the object to be lifted had been hooked (Figure 3). The release system was operated by a servomotor located in a central position in order to guarantee the centring of the weight. Once activated by a digital pin from the flight control unit (FCU), the rotation is driven by a PWM signal, from the rest position up to the release position and back. An MG995 commercial servomotor was used, capable of developing a torque of 10 kg·cm (at 6 V) on the shaft, as required to operate the cascade of L-shaped levers, located along the length of the plate, that move and push the object away from the magnets. The increase in distance produces a decrease in the force of attraction, and the object is released by gravity. The release phase lasts 1 second. During this period the servomotor draws 600 mA, after which the motor returns to the rest position, where the consumption is reduced to 10 mA. In the proposed solution, neodymium magnets are used for their capability to attract ferrous surfaces. The operating principle is based on the force of attraction, which follows the equation below:

F = B² A / (2 μ₀) ,  (1)

where F is the force in N, A is the surface area of the magnetic pole in m², B is the magnetic flux density in tesla and μ₀ is the permeability of air. The second law of dynamics defines the force of gravity as proportional to the mass m, therefore F = m · g. From this it follows that the liftable mass m is given by

m = B² A / (2 μ₀ g) .  (2)

Figure 1. Mechanical dovetail interlocking.
Figure 2. Structure with reinforcement beams, release levers and magnets (in green).
Figure 3. Cross-section view.
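As a quick numerical sanity check of equations (1) and (2), the short script below (ours, for illustration only; the 0.5 T flux density at the pole face is an assumed value, not a figure from the paper) evaluates the theoretical holding force and liftable mass for a single 20 mm × 10 mm pole face:

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space (≈ air), H/m
G = 9.81               # gravitational acceleration, m/s^2

def holding_force(b_tesla, area_m2):
    """Eq. (1): F = B^2 * A / (2 * mu_0), result in newtons."""
    return b_tesla ** 2 * area_m2 / (2 * MU_0)

def liftable_mass(b_tesla, area_m2):
    """Eq. (2): m = B^2 * A / (2 * mu_0 * g), result in kilograms."""
    return holding_force(b_tesla, area_m2) / G

# One 20 mm x 10 mm pole face (as in Table 1) with an ASSUMED
# flux density of 0.5 T at the contact surface:
area = 20e-3 * 10e-3  # 2e-4 m^2
print(holding_force(0.5, area))  # ≈ 19.9 N with these assumed inputs
print(liftable_mass(0.5, area))  # ≈ 2.0 kg with these assumed inputs
```

With these assumed values the result is of the same order as the roughly 2.1 kg quoted for Magnet 1 in Table 1; note, however, that equations (1) and (2) assume ideal contact, and any air gap reduces the force sharply.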
Another relevant factor for this approach is the speed of grip and release. During the development of the drone, there was a focus on the coupling speed because, during the approach phase, the drone had to fly over an object placed on the ground and lift it up. During the preliminary tests, two issues were observed when flying close to the ground: 1) the displacement of the object and 2) the instability of the drone. Both issues were ascribed to the rotors' downwash [5], which is the airflow generated by the propulsion system. In the first case, the object to be grabbed moved when the drone flew a few centimetres above it, so it was necessary for the drone to ascend and repeat the grabbing procedure. This repositioning, in view of the challenge, increased the time required to complete the task. Furthermore, the instability caused by the turbulence of the propulsion system interfered with the final manoeuvres needed for the proper positioning of the object. The use of an electromagnet would have introduced a further delay due to the need to maintain the position of the object near the ferrous surface during activation. Driven by the need to reduce operating times, the choice of permanent magnets, which are constantly 'active', also allowed for a reduction in this additional latency. During the release phase, this problem did not arise, and it was possible to drop the object even at a short distance from the ground. Another relevant aspect that required specific investigation was the coupling system between the drone and the magnetic plate. Two approaches were tested: one consisting of a sliding prismatic-like joint with a cardan system near the plate, and a damped system, which was the final solution adopted. In the first approach, shown in Figure 4, the variable length of the sliding joint allowed the drone to pick up objects up to 50 cm below its flight altitude.
However, the considerable variation in the centre of mass once the object was gripped caused oscillations that increased energy consumption, as the autopilot had to continuously correct the vehicle's position. The final proposed solution consisted of four cables tensioned by as many springs, thus dampening the mechanical shocks and vibrations on the object due to oscillations during transportation. Furthermore, the cables allowed the plate to be attracted to the metal sheet on the object, as the plate could move freely along the vertical axis (within a range of ± 5 cm) and in the horizontal plane (within ± 3 cm).

3. Experiments
The DJI F550 hexacopter was chosen for the final setup, as it offered a good compromise in terms of maximum payload and limited downwash effect. In fact, the adoption of this gripping solution with a larger drone (DJI S900) was abandoned because the excessive downwash produced by the rotors tended to push away the bricks below. The magnets chosen for the final implementation of the plate were the 5 mm ones, with a declared attraction force of approximately 3.8 kg for each single magnet, as shown in Table 1. The nominal value of the force of attraction holds if the object is made of iron. For steel or other alloys, this value can be reduced by more than 30 %. Often, this rated value also decreases due to the coating or surface irregularities of the magnets. Another factor is the thickness of the metal surface to be lifted, which must not be too thin, or the force of attraction cannot be fully exploited. However, in this study, the presence of the Z-ULTRAT covering layer, which acts as an air gap between the magnets and the ferrous surface to be lifted, led to a considerable deterioration in the attraction force, which decreases very rapidly with increasing distance. Ten magnets of the above model were inserted into the plate, as shown in Figure 2, with alternating orientation of the magnetic field.
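As a rough back-of-the-envelope estimate (ours, not from the paper), the figures just quoted bound the aggregate pull of the ten magnets before air-gap losses:

```python
# Figures from the text: ten magnets rated at about 3.8 kg each on
# bare iron, with a reduction of more than 30 % expected on steel
# or other alloys.
n_magnets = 10
nominal_per_magnet_kg = 3.8
steel_derating = 0.30  # lower bound of the quoted ">30 %" reduction

nominal_total_kg = n_magnets * nominal_per_magnet_kg
derated_total_kg = nominal_total_kg * (1 - steel_derating)

print(round(nominal_total_kg, 1))  # 38.0 kg on bare iron, ideal contact
print(round(derated_total_kg, 1))  # 26.6 kg upper bound on steel
```

Both numbers assume direct magnet-to-metal contact; as noted above, the covering layer acts as an air gap, so the usable holding force is far lower, which is why the measured pull-off force is the meaningful figure.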
a tensile test conducted with the aid of a digital dynamometer showed that a force of 4.9 kg was necessary to detach the plate from a thick ferrous surface (0.5 cm). given the above considerations and the degradation in the attraction obtained for each individual magnet, this force was still well within the acceptable limits. the use of passive magnets ensured a strong grip without any detachments during the transport phases despite the thin separation layer between the ferromagnetic surface and the magnets. the final configuration was tested with copies of bricks, as per the challenge rules, with different weights of up to 2.0 kg and with thinner ferromagnetic surfaces (0.1–0.2 cm) on top compared with those used for the preliminary tensile test. during these tests, it was possible to lift the different types of bricks, except the orange bricks, shown in figure 5, which could not be lifted because their ferromagnetic surface was not thick enough to ensure a firm grip. however, since these bricks were 1.80 m long, it would in any case not be possible to lift them with just one drone, and therefore it was decided not to address this issue.
table 1. overview of the properties of the magnets used.
property | magnet 1 | magnet 2
material | ndfeb | ndfeb
weight | 3.04 g | 7.6 g
shape | parallelepiped | parallelepiped
dimensions | 20 x 10 x 2 mm | 20 x 10 x 5 mm
surface of the poles | 20 x 10 mm | 20 x 10 mm
coating | nickel plated | nickel plated
magnetisation | n45 | n45
force of attraction | about 2.1 kg | about 3.8 kg
figure 4. approach with sliding prismatic-like joint with a cardan system near the plate. figure 5. types of bricks according to the challenge rules. the release occurs by activating the servomotor, which produces an increase in the air gap (figure 3) between the magnets and the metal surface of the gripped object.
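The weight dependence of this release mechanism, elaborated in the next paragraph, can be sketched with a toy model. The force-vs-gap curve below is hypothetical (the paper only states that attraction decays rapidly with distance): F0 matches the measured pull-off force, while the inverse-square decay law and the decay length G0 are invented for illustration.

```python
import math

# Toy model of the servo-driven release: the levers open an air gap g, the
# pull force decays with g, and the object drops once the residual force
# falls below its weight. The decay law and G0_MM are hypothetical.

F0_KGF = 4.9   # measured pull-off force at (near) zero gap
G0_MM = 2.0    # hypothetical characteristic decay length of the gap curve

def pull_force_kgf(gap_mm: float) -> float:
    """Residual attraction (kgf) at a given air gap, illustrative model only."""
    return F0_KGF / (1.0 + gap_mm / G0_MM) ** 2

def release_gap_mm(mass_kg: float) -> float:
    """Smallest gap (mm) at which an object of the given mass detaches."""
    return G0_MM * (math.sqrt(F0_KGF / mass_kg) - 1.0)

# As the text notes, lighter objects require a larger lever-induced gap:
print(f"2.0 kg -> {release_gap_mm(2.0):.2f} mm, 1.0 kg -> {release_gap_mm(1.0):.2f} mm")
```

Whatever the true decay law, the qualitative conclusion is the same: the servo stroke must be sized for the lightest object to be released, not the heaviest.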
the detachment occurs naturally thanks to the contribution of gravity, so a small servo can detach objects in the order of 1–2 kg. this means that as the weight of the object decreases, the air gap produced by the movement of the levers must be increased. a short release sequence is shown in figure 6. this magnetic plate was presented in [6]; in the current version it has been improved with the integration of a sheet of magnetic shielding material, an alloy with good magnetic permeability that allows magnetic fields to be screened. this solution shields the upper part of the plate, thus avoiding any interference with the autopilot and other system components. the proposed system, as shown in figure 7, allows excellent pick-and-place missions to be conducted. furthermore, the decision not to use electromagnets avoids the intermittent generation of magnetic fields, which could negatively influence the behaviour of the autopilot and reduce flight times as a result of the power consumption during the gripping phase. the final assembly, with its damping system of cables and springs, guarantees a firm grip between the brick and the magnetic plate, dampening the vibrations caused by transportation. the tension springs chosen for this project allow an extension of up to 5 cm; as soon as the plate is within a range of 5 cm, it is attracted to the ferrous surface and the springs stretch, leading to automatic coupling. when the springs are fully extended, the drone is able to lift the brick gradually. moreover, the damping system compensates for the vertical accelerations and decelerations of the payload during take-off and landing, respectively. the tests performed on the gripper during the challenge showed that it was able to grab, transport and place a large number of bricks during the time allowed for the trials. 4.
conclusions in this article, a drone equipped with a pick-and-place system for objects with ferrous surfaces has been presented, and the different approaches used to carry out the delivery service using drones have been evaluated. based on this analysis, it was decided to proceed with the technique of passive permanent magnets in order to eliminate the power consumption during the transport phase. this technique has been combined with a custom design in order to obtain a flat profile and to guarantee the lightness of the prototype as a result of using 3d printing. from the literature, it appears that this model represents the most compact passive magnetic gripping system developed. in the future, a lighter and more flexible version of the prototype, capable of lifting objects with slightly curved faces, will be developed. moreover, as the current vision system for brick detection is placed under an arm, the field of view is partially hidden by the plate. therefore, as a future development, the camera will be integrated into the plate in order to improve visualisation and ensure alignment up to a few centimetres from the object to be grasped. in addition, distance sensors will be installed to constantly monitor the distance during the gripping phase. based on the positive results from mbzirc and the latest improvements, this drone can be employed in the field in emergency scenarios to transport goods to dangerous or remote areas. acknowledgement this research was partially funded by the ‘safe and smart farming with artificial intelligence and robotics, programma ricerca di ateneo unict 2020-22 linea 2’ project. figure 6. the release phase, in which it is possible to see how the plate, thanks to the proposed solution, returns to its original position immediately after release without affecting the posture of the drone. figure 7. final assembly with the damping system and cable tensioner.
references [1] d. longo, g. muscato, adhesion techniques for climbing robots: state of the art and experimental considerations, advances in mobile robotics, proc. of the eleventh international conference on climbing and walking robots and the support technologies for mobile machines (clawar), coimbra, portugal, 8-10 september 2008, pp. 6-28. doi: 10.1142/9789812835772_0003 [2] a. rodríguez castaño, f. real, p. ramón soria, j. capitán fernández, v. vega, b. c. arrue ullés, a. ollero baturone, al-robotics team: a cooperative multi-unmanned aerial vehicle approach for the mohamed bin zayed international robotic challenge, journal of field robotics 36 (2019) pp. 104-124. doi: 10.1002/rob.21810 [3] a. gawel, m. kamel, t. novkovic, j. widauer, d. schindler, b. p. von altishofen, r. siegwart, j. nieto, aerial picking and delivery of magnetic objects with mavs, proc. of the 2017 ieee international conference on robotics and automation (icra), singapore, 29 may - 3 june 2017, pp. 5746-5752. doi: 10.1109/icra.2017.7989675 [4] k. tai, a. r. el-sayed, m. shahriari, m. biglarbegian, s. mahmud, state of the art robotic grippers and applications, robotics 5 (2016), 20 pp. doi: 10.3390/robotics5020011 [5] c. g. hooi, f. d. lagor, d. a. paley, height estimation and control of rotorcraft in ground effect using spatially distributed pressure sensing, journal of the american helicopter society 61 (2016) pp. 1-14. doi: 10.4050/jahs.61.042004 [6] g. sutera, d. c. guastella, g. muscato, a novel design of a lightweight magnetic plate for a delivery drone, proc. of the 23rd international symposium on measurement and control in robotics (ismcr), budapest, hungary, 15-17 october 2020, pp. 1-4.
doi: 10.1109/ismcr51255.2020.9263730 a metrological approach for multispectral photogrammetry acta imeko issn: 2221-870x december 2021, volume 10, number 4, 111-116 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 111 a metrological approach for multispectral photogrammetry leila es sebar1, luca lombardo2, marco parvis2, emma angelini1, alessandro re3,4, sabrina grassini1 1 dipartimento di scienza applicata e tecnologia, politecnico di torino, corso duca degli abruzzi 24, 10129, turin, italy 2 dipartimento di elettronica e telecomunicazioni, politecnico di torino, corso duca degli abruzzi 24, 10129, turin, italy 3 dipartimento di fisica, università degli studi di torino, via pietro giuria 1, 10125, turin, italy 4 infn, sezione di torino, via pietro giuria 1, 10125, turin, italy section: research paper keywords: photogrammetry; multispectral imaging; reference object; metrology; cultural heritage citation: leila es sebar, luca lombardo, marco parvis, emma angelini, alessandro re, sabrina grassini, a metrological approach for multispectral photogrammetry, acta imeko, vol. 10, no. 4, article 19, december 2021, identifier: imeko-acta-10 (2021)-04-19 section editors: umberto cesaro and pasquale arpaia, university of naples federico ii, italy received november 4, 2021; in final form december 6, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: leila es sebar, e-mail: leila.essebar@polito.it 1.
introduction in the last few years, digitalization techniques and related 3d imaging systems have acquired major importance in several fields, such as industry, medicine, civil engineering, architecture, and cultural heritage. for cultural heritage in particular, such technologies can provide multiple contributions in terms of conservation, data archiving, enhancement, and web sharing [1]-[3]. the existing three-dimensional imaging systems, which acquire measurements through light waves, can be discriminated on the basis of the ranging principle employed [4]. among these techniques, photogrammetry is a remote, image-based technique that has become widely diffused. in particular, this technique allows for the collection of reliable 3d data of an object, regarding its surface (color and texture) and its geometrical features, without requiring any mechanical interaction with the object itself [5]. indeed, a 3d model is constructed starting from digital images of the object, leading to the creation of its virtual replica. with the increasing diffusion of digitalization techniques and the growing number of users aiming to create 3d models, several concerns have been raised about the quality of the results that can be achieved. therefore, even though digitalization practices are widely diffused and can provide realistic replicas of an object, the factors that affect the uncertainty of the final 3d models are numerous and must be further investigated. some authors have summarized the most important factors that affect the uncertainty in 3d imaging and modeling systems [4], [6]. nevertheless, precision and accuracy evaluation of 3d models has not been supported by internationally recognized standards, which are of major importance to avoid archiving and sharing wrong information [7]. some publications have presented test artifacts or new systems that can be used to test the performance of the photogrammetry approach [6],[8].
in some cases, the accuracy of a final model is determined by comparing the results with reference data acquired with active systems such as laser scanners [8]-[10]. otherwise, the results are evaluated on the basis of statistical parameters generated by the reconstruction software employed [11]. abstract this paper presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, for application in the cultural heritage field. the reference object was 3d printed, with a nominal manufacturing uncertainty of the order of 0.01 mm. the object was realized as a dodecahedron, and in each face a different pictorial preparation was inserted. the preparations include several pigments, binders, and varnishes, to be representative of the materials and techniques used historically by artists. since the reference object’s shape, size and uncertainty are known, it is possible to use this object as a reference to evaluate the quality of a 3d model from the metric point of view. in particular, verification of dimensional precision and accuracy is performed using the standard deviation on measurements acquired on the reference object and the final 3d model. in addition, the object can be used as a reference for uv-induced visible luminescence (uvl) acquisition, since the materials employed are uv-fluorescent. results obtained with visible-reflected and uvl images are presented and discussed. nevertheless, there is no unique and recognized way to define the quality of a reconstructed model. this paper presents preliminary but promising results, achieved through the design and realization of a low-cost 3d printed reference object specifically designed for assessing the position accuracy and dimensional uncertainty of photogrammetric reliefs.
moreover, the reference object, realized in collaboration with the “centro conservazione e restauro la venaria reale”, was created with special insets in which different pictorial preparations were inserted in order to be representative of the materials and techniques used historically by artists. the object could therefore also serve as a reference sample for multispectral imaging applications, a widely diffused 2d technique for the characterization and identification of historic-artistic materials [12], [13]. generally, photogrammetry and multispectral imaging are applied as separate techniques, but their combined application is becoming more and more frequent in the cultural heritage field. indeed, this approach exploits the benefit of mapping multispectral imaging data onto 3d models for complete documentation of the conservation state of an object [14], [15]. even though several references can be used in different fields [16]-[18], in this case a specific reference object has to be employed. the reference object was tested using a photogrammetric measuring system that allows the acquisition of both visible-reflected (vis) and uv-induced luminescence (uvl) images. in particular, the experimental setup is composed of an ad-hoc modified digital camera capable of working in a wide spectral range (350-1100 nm), several different lighting sources and filters, and an automatic rotating platform. meshroom [19], an open-source software package, was used to carry out the photogrammetric reconstruction of the reference object. the obtained results were then compared to the physical 3d object in order to estimate the accuracy of the final 3d replica. in addition, a comparison of two different approaches for the realization of the uvl model is presented. 2. 3d reference object the reference object consists of a 3d printed polymeric dodecahedron. figure 1 shows the prototype, which was designed with the wings 3d [20] software.
then, the reference object was realized with a projet 2500 plus (3d systems) printer, employing the visijet® m2r-gry resin. the printer allows objects to be obtained with a nominal uncertainty of the order of 0.01 mm. the reference object was designed to be suitable for a photogrammetry survey; indeed, its shape was specifically created to provide information on the geometrical accuracy of the reconstruction. furthermore, the object was designed to have twelve pentagonal slots, in which different pigment preparations can be inserted. in particular, twelve different pigments were chosen to be representative of the principal artistic materials. all the pigments employed are provided by kremer pigmente gmbh & co. kg [21]. in particular, the pigments employed are lead white, white barium sulfate, bone black, magnetite black, raw sienna italian, lead-tin yellow, minium, lac dye, azurite, malachite, verdigris, and lapis lazuli. the twelve painting preparations were realized in order to reproduce the techniques employed in real historical artifacts. therefore, each one consists of several consecutive layers: support, preparation layer, underdrawings, pictorial layer, and varnishes. the preparation layer is made of stucco, realized by adding gypsum to a saturated solution of water and animal glue (14:1 ratio in weight). then, three different underdrawings were applied directly on top of the stucco layer. in particular, the materials employed are charcoal, sanguine, and iron gall ink. these first two layers, namely the stucco preparation and the underdrawings, are the same for all twelve mock-ups. each of the twelve sections was designed to host one single pigment/dye in nine different combinations. indeed, each section was divided into three subsections, based on the binder employed: arabic gum, egg tempera, and linseed oil. each of these subsections is then further divided into three parts: one with a historical varnish (i.e.
mastic), one with a modern one, and one left unprotected. this choice is of particular interest for the uvl imaging techniques because it allows discriminating between the fluorescence of the different pictorial preparations with and without varnish. figure 2 shows the proposed reference object and a scheme of the pictorial preparation. figure 1. drawing of the reference object realized by means of wings 3d software. figure 2. on the left: top view and scheme of the pictorial preparation. on the right, the proposed reference object. 3. 3d acquisition system the system employed to acquire the images of the reference object is composed of a modified digital camera, a set of suitable light sources, and an automatic rotating platform. 3.1. acquisition setup the images were acquired with a fujifilm xt-30 digital camera coupled with a minolta mc rokkor-pf 50mm f/1.7 lens. the camera was modified to be suitable for ultraviolet-visible-infrared photography, in the range from 350 nm to 1100 nm. for both vis and uvl measurements the camera was equipped with a hoya ir uv-cut filter and a schott bg40 filter. the image acquisition was performed in a room where it was possible to create absolute darkness, in order to avoid any possible interference from unwanted light sources. filtered uv-a 365 nm led sources were employed for the acquisition of uvl images, while standard halogen lamps were used for vis images. table 1 reports the main parameters employed in the image acquisition and figure 3 shows the complete acquisition setup [22]. the acquisition system is completed by the rotating platform, which automatically takes images of the object at specified rotation angles. the platform is composed of a circular rotating plate that hosts the object.
the plate is connected to a stepper motor (type nema 17) through a suitable gearbox (1:18 ratio) in order to increase the torque and the angular resolution and to reduce the rotation speed. a stepper motor driver chip (a4988, allegro microsystems) is used to drive the motor and to move the platform to specified angular positions with a resolution of 0.1°. the platform is controlled by an arduino uno development board connected to a computer, where a dedicated application allows the user to set up all the acquisition parameters (such as image number, angle, speed, etc.) and to carry out some basic pre-processing on the acquired images. furthermore, the platform features a camera shot trigger output which, connected to the camera, allows the platform to automatically trigger the camera shot at each of the specified object positions. this greatly simplifies the image acquisition procedure, dramatically reducing manual intervention by the user. 3.2. data processing the reconstruction of the 3d models was performed by means of meshroom (version 2021.1.0). this software has a particular embedded feature, called live reconstruction, that allows images to be imported directly while they are acquired and the previous structure-from-motion coverage to be augmented with an iterative process. in this study, the images were iteratively added in groups of four at each step. the first block of images was acquired frontally with respect to the artifact with an angular step of 15°. subsequently, two additional sets of images were acquired after flipping the artifact on different sides in order to improve the reconstruction of all the faces and their details. hence, a total of 72 reflected vis images plus 72 uvl images were collected and processed with a standard pipeline in meshroom.
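The 0.1° platform resolution quoted above follows directly from the motor and gearbox figures. A quick check, assuming full-stepping with the usual 1.8°-per-step NEMA 17 (an assumption, since the microstepping mode of the A4988 is not stated in the text):

```python
# Angular resolution of the rotating platform: NEMA 17 full-step angle
# divided by the 1:18 gear reduction. 1.8 deg/step is the typical NEMA 17
# figure; the resulting 0.1 deg matches the resolution quoted in the text.
STEP_ANGLE_DEG = 1.8
GEAR_RATIO = 18

platform_step_deg = STEP_ANGLE_DEG / GEAR_RATIO   # 0.1 deg per motor step
steps_per_turn = round(360 / platform_step_deg)   # 3600 steps per revolution

# The 15 deg angular step used for the acquisition then corresponds to
# 150 full motor steps per camera shot.
steps_per_shot = round(15 / platform_step_deg)
print(platform_step_deg, steps_per_turn, steps_per_shot)
```

Microstepping on the driver would refine the resolution further, at the cost of holding torque per microstep; full steps already suffice for the 15° shot spacing used here.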
in particular, the following steps were carried out: camera initialization, natural feature extraction, image matching, feature matching, structure from motion (sfm) creation, prepare dense scene, depth map estimation, depth map filtering, meshing, mesh filtering, and texturing. the parameters of the reconstruction were all set to their default values. figure 3. photogrammetry system for multispectral image acquisition. figure 4. from left to right: 3d model from reflected vis images; 3d model obtained after re-texturing of the reflected-vis model with uvl images; and 3d model obtained using uvl images directly.
table 1. the employed acquisition parameters.
parameter | value
image size | 6240 × 4160
sensor size/type | 23.5 mm × 15.6 mm (aps-c) / x-trans cmos
effective pixels | 26 megapixels
image format | .raf
iso | 200
focal length | 50 mm
aperture | f/16
shutter speed | 2.0 s
acquired images | 72 (3 revolutions of 24 images)
the aforementioned procedure was applied to reconstruct models from both vis and uvl images. therefore, two different models were obtained. nevertheless, meshroom also allows a 3d model to be textured using a set of images different from the ones used to generate the point cloud and the mesh. indeed, it is possible to duplicate the computed dense scene node and import a new folder of images. in this way, the software will generate a texture for the model with a different set of images. in order to test this feature, the procedure was applied to the vis model to re-texture it with the uvl images. the three textured 3d models were exported in the obj format and properly scaled by employing the open-source software wings 3d. to scale each model, three different dimensions of the object were measured, and the mean scaling factor was computed. figure 4 shows an image of the final models. 4.
experimental validation in order to assess the uncertainty of the 3d models reconstructed from reflected vis images, several distances on the real artifact were taken using a caliper and compared with the same distances on the 3d models. figure 5 shows the measured distances, which are distributed all around the artifact itself. the distances were chosen in correspondence of the edges and between opposite faces of the artifact, since they can be easily measured. the measurements of the artifact were collected with a 1/20 mm caliper, whereas the software wings 3d was employed to measure the corresponding distances on the virtual replica. the model uncertainty was estimated according to the difference δ as reported in (1): δ = |D_r − D_m| , (1) where D_r is the distance on the reference object and D_m is the corresponding one on the 3d model. in addition, the relative uncertainty was calculated as shown in (2): ε% = |D_r − D_m| / D_r × 100 % . (2) finally, the overall standard deviation was evaluated, as in (3): table 2. uncertainty estimation of the 3d model from vis images. the measurements on the reference object, on the vis model, and on the uvl model are reported. δ indicates the difference between the measurements (equation (1)), and ε is the relative uncertainty (equation (2)).
dimension | reference object (mm) | vis-3d model (mm) | δ (mm) | ε (%) | uvl-3d model (mm) | δ (mm) | ε (%)
a | 44.2 | 44.0 | 0.23 | 0.52 | 44.1 | 0.13 | 0.30
b | 44.3 | 44.5 | 0.19 | 0.43 | 44.8 | 0.31 | 0.70
c | 44.1 | 44.2 | 0.11 | 0.25 | 44.7 | 0.491 | 1.11
d | 99.0 | 98.7 | 0.31 | 0.31 | 100.6 | 1.91 | 1.94
e | 71.4 | 71.2 | 0.25 | 0.35 | 71.6 | 0.45 | 0.63
f | 44.3 | 44.0 | 0.35 | 0.79 | 44.8 | 0.85 | 1.93
g | 44.3 | 44.6 | 0.33 | 0.74 | 44.6 | 0.03 | 0.07
h | 44.4 | 44.5 | 0.08 | 0.18 | 44.8 | 0.32 | 0.72
i | 121.6 | 122.2 | 0.55 | 0.45 | 125.3 | 3.15 | 2.58
l | 71.4 | 71.4 | 0.05 | 0.07 | 72 | 0.65 | 0.91
m | 71.2 | 71.3 | 0.14 | 0.20 | 71.4 | 0.06 | 0.08
n | 98.8 | 98.9 | 0.12 | 0.12 | 100.8 | 1.88 | 1.90
o | 115.0 | 114.9 | 0.14 | 0.12 | 116.7 | 1.84 | 1.60
p | 71.2 | 70.5 | 0.67 | 0.94 | 71.2 | 0.67 | 0.95
q | 114.5 | 114.8 | 0.34 | 0.30 | 116.6 | 1.76 | 1.53
r | 70.8 | 70.9 | 0.19 | 0.27 | 71.8 | 0.86 | 1.21
s | 70.8 | 70.5 | 0.27 | 0.38 | 71.9 | 1.37 | 1.94
t | 70.9 | 70.9 | 0.03 | 0.04 | 71.7 | 0.77 | 1.09
u | 44.2 | 44.2 | 0.06 | 0.14 | 44.5 | 0.29 | 0.66
v | 70.9 | 70.8 | 0.02 | 0.03 | 71.9 | 1.07 | 1.51
figure 5. original vis images of the reference object with validation measurement distances. σ = √( (1/(N − 1)) ∑_{i=1}^{N} δ_i² ) (3) regarding the multispectral reconstruction, two different approaches were tested. one uvl model was obtained by re-texturing the mesh already obtained for the vis model. this means that the coordinates of the measured points did not change and that this uvl model is perfectly superimposable on the vis one. therefore, there is no need to perform a metric evaluation of this model. the second uvl model was instead reconstructed starting directly from the uvl images. to estimate the quality of this model, the procedure previously presented was applied. in particular, the differences between the distances measured on the uvl 3d model and the ones measured on the reference object were computed. in table 2, the distance difference δ in mm and the corresponding relative error ε are reported.
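The summary figures quoted in the next paragraph can be reproduced from the vis columns of table 2. A small sketch (note that the quoted ≈170 µm agrees with the sample standard deviation of the δ values about their mean, as computed by `statistics.stdev`):

```python
import statistics

# delta (mm) and eps (%) for the VIS model, copied from Table 2 above.
delta_vis = [0.23, 0.19, 0.11, 0.31, 0.25, 0.35, 0.33, 0.08, 0.55, 0.05,
             0.14, 0.12, 0.14, 0.67, 0.34, 0.19, 0.27, 0.03, 0.06, 0.02]
eps_vis = [0.52, 0.43, 0.25, 0.31, 0.35, 0.79, 0.74, 0.18, 0.45, 0.07,
           0.20, 0.12, 0.12, 0.94, 0.30, 0.27, 0.38, 0.04, 0.14, 0.03]

mean_eps = statistics.mean(eps_vis)        # average relative uncertainty, %
sigma_mm = statistics.stdev(delta_vis)     # sample std of the differences, mm

print(f"mean eps = {mean_eps:.2f} %, sigma = {sigma_mm:.3f} mm")
```

With these values the average relative uncertainty comes out at about 0.33 % and the standard deviation at about 0.17 mm, consistent with the figures given for the visible model.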
on the basis of these results, the reconstructed visible models are quite reliable, with maximum dimensional uncertainties lower than 1 % for the visible model and lower than 2 % for the uv model. nevertheless, the average uncertainties are lower, reaching about 0.33 % and 1.17 % for the visible and the uv models, respectively, while the standard deviations σ of the differences between the real model and the reconstructed one are about 170 µm and 690 µm, respectively. the higher uncertainty obtained for the uv model is probably due to a higher color uniformity of the acquired images, which affected the reconstruction process. the reconstruction accuracy can therefore probably be improved by tuning the image acquisition procedure and the reconstruction parameters. 5. conclusions this paper presented the design and development of an artifact that can be used as a metric reference object for the assessment of the accuracy and dimensional uncertainty of 3d models obtained through photogrammetry. the object is 3d printed and has twelve insets, in which several pictorial preparations were inserted. indeed, the object is also suitable as a reference for multispectral imaging. the reference object was used to test the photogrammetric measurement system from a metric point of view. a comparison between several distances acquired both on the mechanical reference object and on the reconstructed vis 3d model was carried out. the maximum dimensional uncertainty is lower than 1 % and the average uncertainty reached about 0.33 %. moreover, the reconstruction of the model from uvl images was performed using two different approaches, and the obtained results were compared with the real object. from this comparison, it is possible to state that the approach that involves the creation of a vis model and the subsequent re-texturing with uvl images achieved the best results. acknowledgement the authors would like to acknowledge dr.
paola buscaglia from centro conservazione e restauro “la venaria reale” for the support related to the realization of the pictorial preparation. references [1] m. russo, f. remondino, g. guidi, principali tecniche e strumenti per il rilievo tridimensionale in ambito archeologico, archeologia e calcolatori, 22 (2011) (in italian), pp. 169-198, issn 1120-6861. [2] l. es sebar, l. iannucci, c. gori, a. re, m. parvis, e. angelini, s. grassini, in-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors, acta imeko 10(1) (2021) pp. 241-249. doi: 10.21014/acta_imeko.v10i1.894 [3] i. m. e. zaragoza, g. caroti, a. piemonte, the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis, acta imeko 10(1) (2021) pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847 [4] j. a. beraldin, m. rioux, l. cournoyer, f. blais, m. picard, j. pekelsky, traceable 3d imaging metrology, proc. spie videometrics ix 6491 (2007). doi: 10.1117/12.698381 [5] t. schenk, introduction to photogrammetry, the ohio state university, columbus, 2005, 106. online [accessed 3 december 2021] https://www.mat.uc.pt/~gil/downloads/introphoto.pdf [6] j. a. beraldin, f. blais, s. el-hakim, l. cournoyer, m. picard, traceable 3d imaging metrology: evaluation of 3d digitizing techniques in a dedicated metrology laboratory, proceedings of the 8th conference on optical 3-d measurement techniques, july 9-12, 2007, zurich, switzerland, pp. 310-318. [7] i. toschi, a. capra, l. de luca, j. a. beraldin, on the evaluation of photogrammetric methods for dense 3d surface reconstruction in a metrological context, in isprs technical commission v symposium, wg1 2(5) (2014) pp. 371-378. [8] g. j. higinio, b. riveiro, j. armesto, p. arias, verification artifact for photogrammetric measurement systems, optical engineering 50(7) (2011), art. 073603. doi: 10.1117/1.3598868 [9] c. buzi, i. micarelli, a. profico, j. conti, r. grassetti, w. cristiano, f.
di vincenzo, m. a. tafuri, g. manzi, measuring the shape: performance evaluation of a photogrammetry improvement applied to the neanderthal skull saccopastore 1, acta imeko 7(3) (2018). doi: 10.21014/acta_imeko.v7i3.597 [10] a. koutsoudis, b. vidmar, g. ioannakis, f. arnaoutoglou, g. pavlidis, c. chamzas, multi-image 3d reconstruction data evaluation, journal of cultural heritage 15(1) (2014) pp. 73-79. doi: 10.1016/j.culher.2012.12.003 [11] a. calantropio, m. p. deseilligny, f. rinaudo, e. rupnik, evaluation of photogrammetric block orientation using quality descriptors from statistically filtered tie points, international archives of the photogrammetry, remote sensing & spatial information sciences 42(2) (2018). [12] j. dyer, g. verri, j. cupitt, multispectral imaging in reflectance and photo-induced luminescence modes: a user manual, british museum, 2013. [13] a. cosentino, identification of pigments by multispectral imaging; a flowchart method, heritage science 2(8) (2014). doi: 10.1186/2050-7445-2-8 [14] s. b. hedeaard, c. brøns, i. drug, p. saulins, c. bercu, a. jakovlev, l. kjær, multispectral photogrammetry: 3d models highlighting traces of paint on ancient sculptures, in dhn (2019), pp. 181-189. [15] e. nocerino, d. h. rieke-zapp, e. trinkl, r. rosenbauer, e. m. farella, d. morabito, f. remondino, mapping vis and uvl imagery on 3d geometry for non-invasive, non-contact analysis of a vase, international archives of the photogrammetry, remote sensing and spatial information sciences isprs archives, 42(2) (2018) pp. 773-780. doi: 10.5194/isprs-archives-xlii-2-773-2018 [16] m. parvis, s. corbellini, l. lombardo, l. iannucci, s. grassini, e.
angelini, inertial measurement system for swimming rehabilitation, 2017 ieee international symposium on medical measurements and applications (memea), rochester, mn, usa, 8-10 may 2017, pp. 361-366. doi: 10.1109/memea.2017.7985903 [17] a. gullino, m. parvis, l. lombardo, s. grassini, n. donato, k. moulaee, g. neri, employment of nb2o5 thin-films for ethanol sensing, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, may 25-28, 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128457 [18] l. iannucci, l. lombardo, m. parvis, p. cristiani, r. basseguy, e. angelini, s. grassini, an imaging system for microbial corrosion analysis, 2019 ieee international instrumentation and measurement technology conference (i2mtc), auckland, new zealand, may 20-23, 2019, pp. 1-6. doi: 10.1109/i2mtc.2019.8826965 [19] alicevision, meshroom 3d reconstruction software. online [accessed 3 december 2021] https://alicevision.org/#meshroom [20] wings 3d. online [accessed 3 december 2021] http://www.wings3d.com [21] kremer pigmente. online [accessed 3 december 2021] https://www.kremer-pigmente.com/en [22] l. es sebar, s. grassini, m. parvis, l. lombardo, a low-cost automatic acquisition system for photogrammetry, 2021 ieee international instrumentation and measurement technology conference-i2mtc, (2021) pp. 1-6.
doi: 10.1109/i2mtc50364.2021.9459991 https://doi.org/10.1109/memea.2017.7985903 https://doi.org/10.1109/i2mtc43012.2020.9128457 https://doi.org/10.1109/i2mtc.2019.8826965 https://alicevision.org/#meshroom http://www.wings3d.com/ https://www.kremer-pigmente.com/en https://doi.org/10.1109/i2mtc50364.2021.9459991 interlaboratory comparison results of vibration transducers between tubitak ume and roketsan acta imeko | www.imeko.org december 2020 | volume 9 | number 5 | 401 acta imeko issn: 2221-870x december 2020, volume 9, number 5, 401 406 interlaboratory comparison results of vibration transducers between tübi̇tak ume and roketsan s. ön aktan1, e. bilgiç2, i̇.ahmet yüksel, k.berk sönmez, t. kutay veziroğlu, t. torun 1 department of calibration laboratory, roketsan missiles industries inc., ankara, turkey, son@roketsan.com.tr 2 tübi̇tak ulusal metroloji enstitüsü (ume), gebze, kocaeli, turkey, eyup.bilgic@tubitak.gov.tr abstract: this paper presents an interlaboratory comparison on vibration metrology field which can be used as a powerful method of checking the validity of results and measurement capabilities according to iso 17025 [1]. in this standard it is advised to participate in an interlaboratory comparison or a proficiency test in order to prove measurement capabilities of calibration providers. in this study it is aimed to statistically evaluate the measurements results in the scope of sinusoidal acceleration between tübi̇tak ume (national metrology institute of turkey) and roketsan as per related international standards. after statistical evaluation, for unsatisfactory results, root cause analyses and corrections to improve measurement quality are presented and conceptually explained. keywords: vibration transducer; vibration comparison; vibration metrology; vibration uncertainty 1. introduction human began to manufacture machines, and especially motors were used to strengthen them, engineers encountered vibration isolation and reduction techniques [2]. 
Conversely, vibration can also be generated intentionally for testing purposes, to understand the functional and physical response and the resistance of a system to vibration environments. For both protective and testing purposes, acceleration sensors are used to measure acceleration, vibration and shock; such sensors are among the most important components of the navigation systems of missiles, aircraft, ships and submarines. As a result of significant developments in industries such as automotive, defence, aviation and space, the need for accurate measurement has increased, and over the years there have been several improvements and innovations in vibration measurement methods [3], [4], [5], [6].

Accelerometers are used in a wide variety of applications; the most common types on the market are piezoelectric and capacitive accelerometers. Piezoelectric accelerometers are the more widespread thanks to advantages such as a large measuring frequency range, no need for a power supply, reliability, robust design, and long-term stability. A typical response curve of a piezoelectric accelerometer is given in Figure 1 [7].

Figure 1. Response curve of an accelerometer.

The limits of the usable range are both mechanical and electrical, involving frequency (f), acceleration (a), velocity (v) and displacement (d), as well as the force of the vibration generating system. The displacement amplitude for a given acceleration is inversely proportional to the square of the frequency:

d = v / (2πf) = a / (2πf)²  (1)

While displacement measurements require attention at low frequencies, at high frequencies it is the acceleration level that must be watched [8], [9]. When selecting a vibration transducer for a specific application, it is essential to pay attention to parameters such as the number of axes, measurement range, overload and damage limits, mass, sensitivity, impedance and frequency range.
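Equation (1) can be checked numerically. The following sketch (function names are illustrative, not part of any cited standard) shows why displacement dominates at low frequencies for a fixed acceleration amplitude:

```python
import math

def displacement_amplitude(a, f):
    """Displacement amplitude d = a / (2*pi*f)**2 for sinusoidal
    motion of acceleration amplitude a (m/s^2) at frequency f (Hz)."""
    return a / (2 * math.pi * f) ** 2

def velocity_amplitude(a, f):
    """Velocity amplitude v = a / (2*pi*f)."""
    return a / (2 * math.pi * f)

# 100 m/s^2 needs ~25 mm of displacement at 10 Hz,
# but only ~2.5 um at 1 kHz.
print(displacement_amplitude(100, 10))    # ~0.0253 m
print(displacement_amplitude(100, 1000))  # ~2.53e-6 m
```

This makes concrete the remark that the exciter's displacement limit constrains low-frequency calibration points, while the acceleration level is the constraint at high frequencies.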
For reliable use of accelerometers, a calibration plan shall be scheduled periodically, producing correct test results to assure the process and provide metrological traceability. Accurate equipment, metrological traceability, trained personnel, well-defined methods, documentation, uncertainty evaluation and internal verifications all play an important role in producing accurate test results; however, it is also necessary to prove externally that the laboratory can actually produce such results, by taking part in comparison tests [10].

An interlaboratory comparison (ILC) is the organisation, performance and evaluation of tests or measurements by two or more laboratories on the same or similar items under predetermined conditions [11]. Interlaboratory comparison tests are planned according to ISO 17043 [11], and the performances of the participating laboratories, expressed as E_n values or zeta scores ζ, are evaluated according to ISO 13528 [12].

2. Back-to-back calibration method and application

The purpose of a vibration transducer comparison is to compare the sensitivity of an accelerometer using the secondary-level (back-to-back) method of ISO 16063-21, vibration calibration by comparison to a reference transducer [13]. Calibrating an accelerometer means determining its sensitivity at various frequencies. The reference (a double-ended transducer) and the device under test (DUT) are firmly coupled on a shaker so that both are exposed to the same mechanical motion. Back-to-back calibration requires a shaker (vibration exciter), a power amplifier, a signal generator and an FFT analyser. Automated systems are also available on the market, with the advantages of easy, user-friendly operation and short calibration times. The basic configuration of Roketsan's vibration transducer calibration system is illustrated in Figure 2 below.
Figure 2. Roketsan vibration transducer calibration system.

The system used is an automatic vibration transducer calibration system operating between 10 Hz and 5000 Hz; its accuracy values are listed in Table 1. A Brüel & Kjær 8305 S is used as the reference accelerometer.

Table 1. Specifications.

Performance feature | Value
Accuracy, (10 to 2000) Hz | 0.7 %
Accuracy, (>2 to 5) kHz | 1.1 %
Acceleration, max. | 110 m/s²
Max. transducer weight | 60 g
Force | 45 N
Max. displacement | 8 mm

The environmental conditions of the Roketsan vibration laboratory are (23 ± 3) °C for temperature and a maximum of 75 % RH for relative humidity. According to ISO 16063-21, the frequency and acceleration values are as follows:

1. Frequencies (Hz): frequencies are selected from the one-third-octave frequency series. Where exact frequency values are required, they are calculated for the 1/3-octave bands [14] with the formula

f = f_r · 10^(n/10), f_r = 1000 Hz  (2)

where n = -20, -19, ..., 7 for 10 Hz to 5 kHz.

2. Acceleration (m/s²): 1, 2, 5, 10 or their multiples of ten; 100 m/s² is recommended.

The main principle of back-to-back calibration is the direct comparison of the indicated sensitivities of the reference transducer and the DUT. The vibration applied to the two transducers is identical, so if the sensitivity of the reference transducer is known, the sensitivity of the DUT can be obtained from the following equation:

S_DUT = S_REF · V_DUT / V_REF  (3)

S_DUT: sensitivity of the device under test
S_REF: sensitivity of the reference accelerometer
V_DUT: electrical output of the device under test
V_REF: electrical output of the reference accelerometer

Although this approach is suitable for a single frequency, it may take excessive time to repeat the operation at every frequency of interest. Hence, dual-channel FFT analysis is used to obtain frequency response functions in amplitude and phase in a shorter time.
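Equations (2) and (3) can be sketched in a few lines of Python; the voltage readings in the example are illustrative values, not measurements from the system described above:

```python
def third_octave_frequencies(n_min=-20, n_max=7, f_ref=1000.0):
    """Exact 1/3-octave frequencies per eq. (2): f = f_ref * 10**(n/10).
    With n from -20 to 7 this spans 10 Hz to ~5 kHz."""
    return [f_ref * 10 ** (n / 10) for n in range(n_min, n_max + 1)]

def dut_sensitivity(s_ref, v_dut, v_ref):
    """Back-to-back comparison, eq. (3): S_DUT = S_REF * V_DUT / V_REF."""
    return s_ref * v_dut / v_ref

freqs = third_octave_frequencies()
print(round(freqs[0], 1), round(freqs[-1], 1))  # 10.0 5011.9
# hypothetical voltage ratio; S_REF in pC/(m/s^2)
print(dut_sensitivity(s_ref=0.1307, v_dut=2.0, v_ref=1.0))
```

Note that the exact series gives 5011.9 Hz for n = 7, which is reported as the nominal 5 kHz point.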
3. Uncertainty approach

As can be seen from the uncertainty budget given in Table 2, one of the largest uncertainty contributions comes from the reference transducer set. The voltage ratio measurement also affects the results: influences on the voltage ratio from temperature variation, gravitational acceleration, distortion, transverse acceleration and non-linearity effects shall be added to the uncertainty budget. The following contributions are taken into account when calculating the measurement uncertainty budget for the calibration of accelerometers.

Table 2. Uncertainty budget. Columns: quantity X_i; definition; standard uncertainty u(x_i); probability distribution; sensitivity coefficient c_i; uncertainty contribution u_i(y).

u(S_REF) | calibration uncertainty of the reference transducer set | 0.5 | normal | 1 | 0.250
u(S_REF,S) | drift of the reference transducer set and amplifier | 0.08 | rectangular | 1 | 0.046
u(S_A,kal) | calibration uncertainty of the conditioning amplifier | 0.09 | rectangular | -1 | 0.045
u(V_R) | voltage ratio | 0.08 | rectangular | 1 | 0.046
u(V_R,T) | temperature influence on the voltage ratio measurement | 0.2 | rectangular | 1 | 0.173
u(V_R,S) | voltage ratio, maximum difference in the reference level | 0.2 | rectangular | 1 | 0.115
u(V_R,N) | voltage ratio, mounting parameters | 0.3 | rectangular | 1 | 0.173
u(V_R,D) | voltage ratio, acceleration distortion | 0.0024 | rectangular | 1 | 0.001
u(V_R,V) | voltage ratio, transverse acceleration | 1.2 | special | 1 | 0.283
u(V_R,E) | voltage ratio, base strain | 0.05 | rectangular | 1 | 0.029
u(V_R,R) | voltage ratio, relative motion | 0.05 | rectangular | 1 | 0.029
u(V_R,L) | voltage ratio, non-linearity of the transducer | 0.03 | rectangular | 1 | 0.017
u(V_R,I) | voltage ratio, non-linearity of the amplifiers | 0.03 | rectangular | 1 | 0.017
u(V_R,G) | voltage ratio, gravity | 0.03 | rectangular | 1 | 0.017
u(V_R,B) | voltage ratio, magnetic field effect of the vibration exciter | 0.03 | rectangular | 1 | 0.017
u(V_R,E) | voltage ratio, other environmental effects | 0.03 | rectangular | 1 | 0.017
u(V_R,R) | voltage ratio, residual effects | 0.03 | rectangular | 1 | 0.017
u(V_R,RE) | repeatability | 0.17 | normal | 1 | 0.098
Combined uncertainty of measurement u_t | 0.48
Expanded uncertainty of measurement U, k = 2 | 0.97

The reference transducer and, if present, its conditioner should be calibrated as a set by a primary-level laboratory. The measurement uncertainty u(S_REF) stated in the calibration certificate of the reference transducer shall be added to the uncertainty budget divided by its coverage factor (95 % confidence level, k = 2).

u(V_R,V) is the uncertainty contribution due to transverse accelerations. The transverse vibration a_T is at most 10 % for the vibration exciter. The transverse sensitivity is at most 2 % for the reference transducer (S_v,REF) and at most 5 % for the device under test (S_v,DUT). Using the formula below, this uncertainty is evaluated as 1.2 %:

σ = √((S_v,DUT² + S_v,REF²) · a_T²)  (4)

The repeatability u(V_R,RE) contributes the experimental standard deviation of the arithmetic mean to the uncertainty; it is an unavoidable contribution in any uncertainty budget. The model function is

S_DUT = S_REF · (S_A1 / S_A2) · (V_DUT / V_REF) · I_1 · I_2 · ... · I_M  (5)

I_i = (1 − e_2,i) / (1 − e_1,i)  (6)

where e_i denotes the i-th error contribution. The uncertainty budget for the vibration transducer in Table 2 applies from 10 Hz to 1000 Hz. The combined uncertainty u_t and the expanded uncertainty U (k = 2, 95 % confidence level) are calculated with formulas (7) and (8) according to EA-4/02 [15]:

u_t = √(u²(S_REF) + u²(S_REF,S) + u²(V_R) + ...)  (7)

U = 2 · u_t  (8)
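The root-sum-of-squares combination of eqs. (7) and (8) can be reproduced directly from the u_i(y) column of Table 2; a minimal sketch:

```python
import math

# uncertainty contributions u_i(y) in % from Table 2 (10 Hz to 1000 Hz)
contributions = [0.250, 0.046, 0.045, 0.046, 0.173, 0.115, 0.173, 0.001,
                 0.283, 0.029, 0.029, 0.017, 0.017, 0.017, 0.017, 0.017,
                 0.017, 0.098]

def combined_uncertainty(u_list):
    """Root-sum-of-squares combination, eq. (7)."""
    return math.sqrt(sum(u ** 2 for u in u_list))

u_t = combined_uncertainty(contributions)
U = 2 * u_t  # expanded uncertainty, eq. (8), k = 2
print(round(u_t, 2), round(U, 2))  # 0.48 0.97
```

This recovers exactly the combined (0.48 %) and expanded (0.97 %) values stated in Table 2.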
4. Comparison results

The technical protocol [16] specifies in detail the aim of the comparison, the transfer standard used, the time schedule, the measurement conditions and other subjects. The measurements over the frequency range stated in the technical protocol were carried out by TÜBİTAK UME and Roketsan. The transfer standard used is a B&K 4371. The pilot laboratory is TÜBİTAK UME, the primary laboratory in Turkey. Since Roketsan performs the related measurements with lower uncertainty than other secondary-level calibration providers, an interlaboratory comparison with TÜBİTAK UME as the primary-level laboratory was necessary to assess the reliability of Roketsan's accuracy level. Figure 3 presents the calibration results (sensitivity) obtained for the transfer standard; the numerical results are listed in Table 3.

Figure 3. Measurement results from TÜBİTAK UME and Roketsan.

Table 3. Measurement results from TÜBİTAK UME and Roketsan.

Frequency (Hz) | UME, reference value (pC/(m/s²)) | Roketsan (pC/(m/s²)) | E_n
10 | 1.0046 | 1.0063 | 0.13
12.5 | 1.0042 | 1.0110 | 0.51
16 | 1.0043 | 1.0087 | 0.33
20 | 1.0029 | 1.0080 | 0.38
25 | 1.0016 | 1.0063 | 0.35
31.5 | 1.0007 | 1.0070 | 0.47
40 | 0.9989 | 1.0060 | 0.54
50 | 0.9981 | 1.0060 | 0.60
63 | 0.9956 | 1.0040 | 0.63
80 | 0.9935 | 1.0010 | 0.57
100 | 0.9919 | 1.0010 | 0.69
125 | 0.9905 | 0.9993 | 0.67
160 | 0.9893 | 1.0010 | 0.89
200 | 0.9872 | 0.9957 | 0.65
250 | 0.9847 | 1.0010 | 1.24
315 | 0.9818 | 0.9923 | 0.80
400 | 0.9813 | 0.9918 | 0.80
500 | 0.9801 | 0.9905 | 0.80
630 | 0.9800 | 0.9883 | 0.64
800 | 0.9776 | 0.9873 | 0.75
1000 | 0.9771 | 0.9854 | 0.53
1250 | 0.9746 | 0.9834 | 0.53
1600 | 0.9759 | 0.9826 | 0.40
2000 | 0.9757 | 0.9802 | 0.27
2500 | 0.9729 | 0.9814 | 0.51
3150 | 0.9762 | 0.9849 | 0.52
4000 | 0.9800 | 0.9842 | 0.25
5000 | 0.9800 | 0.9814 | 0.08

The uncertainty values of the two laboratories are
given in Table 4.

Table 4. Uncertainty values.

UME (reference laboratory):
Frequency (Hz) | f ≤ 1250 | 1250 < f ≤ 5000
Uncertainty | 0.9 % | 1.1 %

Roketsan:
Frequency (Hz) | f ≤ 1000 | 1000 < f ≤ 5000
Uncertainty | 0.97 % | 1.3 %

The performance of the measurements is evaluated as described in ISO 13528 with equation (9). The most common statistical approach to assessing the capability of a laboratory is to calculate E_n values, which shall be below or equal to one:

E_n = (X_rok − X_ume) / √(U_rok² + U_ume²)  (9)

X_rok: the mean value of Roketsan
X_ume: the mean value of the reference laboratory (UME)
U_ume: the measurement uncertainty of the reference laboratory (UME)
U_rok: the measurement uncertainty of Roketsan

The calculated E_n value at 250 Hz is 1.24; at all other frequencies the results in Table 3 are satisfactory, with |E_n| < 1. Following the unsatisfactory result at 250 Hz, root cause analysis and corrective actions were carried out to improve Roketsan's measurement system.

5. Corrective action

A nonconformity in ISO 9001 is defined as the failure to meet one or more requirements [17]. AS9131, "Nonconformance data definition and documentation", classifies nonconformities by process codes (shipping and transportation, manufacturing, document preparation), cause codes (machine, management, people, material, method, environment, measurement) and corrective action codes (machine, management, people, material, method, environment, measurement) [18]. Root cause analysis is the process of identifying causal factors using a structured approach, with techniques designed to provide a focus for identifying and resolving problems [19]. It is essential to determine the root cause and create a corrective action plan in order to eliminate the causes of a nonconformity before it occurs again or in another field. The principles of continuous improvement and monitoring of efficiency are important for the continuity of management systems.
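The E_n check that flagged the 250 Hz point can be reproduced from the values in Tables 3 and 4 (converting the relative expanded uncertainties to absolute units); a minimal sketch:

```python
import math

def en_score(x_lab, x_ref, U_lab, U_ref):
    """E_n number per ISO 13528, eq. (9); U are expanded (k = 2)
    uncertainties in the same (absolute) units as x."""
    return (x_lab - x_ref) / math.sqrt(U_lab ** 2 + U_ref ** 2)

# 250 Hz point: 0.97 % (Roketsan, f <= 1000 Hz) and 0.9 % (UME, f <= 1250 Hz)
x_rok, x_ume = 1.0010, 0.9847  # pC/(m/s^2)
en = en_score(x_rok, x_ume, 0.0097 * x_rok, 0.009 * x_ume)
print(round(en, 2))  # 1.24 -> unsatisfactory, |E_n| > 1
```

The same function reproduces the satisfactory values in Table 3, e.g. 0.13 at 10 Hz.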
When comparison results are not satisfactory, a nonconformance record shall be issued and the action process shown in Figure 4 shall be started in order to find a solution and keep the system reliable. A method such as a Pareto chart, 5 Whys, a fishbone diagram, a scatter plot, or failure mode and effects analysis (FMEA) should be applied to determine the root cause of the problem and gain the insight needed to detect and remove it.

Figure 4. Nonconformance process.

Among all possible error sources, the sensitivity value of the reference transducer set had top priority to check, since its calibration status was close to the calibration due date. The equipment was forwarded to the primary laboratory for recalibration. Although only one year had passed between the two calibrations, the sensitivity value at 250 Hz was found to have changed from 0.1312 pC/(m/s²) to 0.1307 pC/(m/s²). The difference between the last two calibration certificates was larger than expected: the reference value had shifted, contrary to the drift assumed to occur over one year. This condition was considered the main reason for the detected nonconformity.

Verification of the vibration system plays a vital role in obtaining accurate measurements: the reference accelerometer used in calibration (the working standard) is connected back-to-back with the reference accelerometer, subsequent verifications compare the first results to the new results, and the results are accepted if they deviate by less than 0.8 %. The controller checks that the standard deviation of the measurements is less than 0.2 %. The root causes identified by applying the fishbone diagram are shown in Figure 5. After extensive training for all operators, gage R&R indicators showed that the competency of the appraisers was satisfactory. Next, the temperature gradient of the measurement room was examined, and no action on temperature was found to be necessary.
Regarding mechanical effects, the mounting torque was precisely adjusted to 2 N·m as required, and it was confirmed that the requirement of the standard was met. (The nonconformance process of Figure 4 comprises: definition of nonconformance, determining the root cause, identification of corrective action, and effectiveness of the corrective action.)

Figure 5. Fishbone diagram.

As a result of the system review detailed above, the verification and calibration issue was estimated to be the only root cause of the unsatisfactory E_n value at the 250 Hz measurement. With the new calibration results, it was confirmed that there is a drift in the value of the reference. As a result of this evaluation, it was decided to perform a detailed investigation into the root cause of the drift of the reference sensor. Since the reason is not yet fully understood, it was decided to organise a further interlaboratory comparison in order to obtain satisfactory E_n values.

6. Summary

Further aspects could be considered to understand the unsatisfactory result at 250 Hz. Further work may cover participation in a new comparison test; in case of another insufficient result, shortening the calibration interval or increasing the measurement uncertainty attributed to drift of the reference transducer set can be taken as further actions. The results produced by the laboratory are validated by comparison tests, together with the measurement method, the competency of the appraisers, the calculated measurement uncertainty, the suitability of the equipment used, and calibration and traceability. Since ISO 17025 also requires a risk- and opportunity-based approach, proficiency testing can be used both as a training tool and as a risk tool.

7. References

[1] ISO/IEC 17025:2017, General requirements for the competence of testing and calibration laboratories.
[2] Measuring vibration, Brüel & Kjær, www.bk.com
[3] X. Bai, "Absolute calibration device of the vibration sensor", The Journal of Engineering, 2018.
[4] V. Mohanan, B. K. Roy, V. T.
Chitnis, "Calibration of accelerometer by using optic fiber vibration sensor", Applied Acoustics, vol. 28, pp. 95-103, 1989.
[5] R. R. Bouche, "Calibration of vibration and shock measuring transducers", The Shock and Vibration Information Center, 1979.
[6] K. Havewasam, H. H. E. Jayaweera, C. L. Ranatunga, T. R. Ariaratne, "Development and evaluation of a calibration procedure for a 2D accelerometer as a tilt and vibration sensor", Proceedings of the Technical Sessions, vol. 25, pp. 53-62, 2009.
[7] C. Vogler, "Calibration of accelerometer vibration sensitivity by reference", College of Engineering, 2015.
[8] W. Ohm, L. Wu, P. Henes, G. Wonk, "Generation of low-frequency vibration using a cantilever beam for calibration of accelerometers", Journal of Sound and Vibration, vol. 289, pp. 192-209, 2006.
[9] N. Garg, M. I. Schiefer, "Low frequency accelerometer calibration using an optical encoder sensor", Measurement, vol. 111, pp. 226-233, 2017.
[10] K. B. Sönmez, T. O. Kılınç, İ. A. Yüksel, S. Ö. Aktan, "Inter-laboratory comparison on the calibration of photometric and radiometric sensors", International Congress of Metrology, 2019.
[11] ISO/IEC 17043:2010, Conformity assessment — General requirements for proficiency testing.
[12] ISO 13528:2015, Statistical methods for use in proficiency testing by interlaboratory comparisons.
[13] ISO 16063-21, Vibration calibration by comparison to a reference transducer.
[14] ISO 266, Acoustics — Preferred frequencies.
[15] EA-4/02, Evaluation of the uncertainty of measurement in calibration.
[16] UME-G2TI-2018-01, Technical protocol of the interlaboratory comparison on acceleration, 2018.
[17] ISO 9001:2015, Quality management systems.
[18] AS9131:2012, Nonconformance data definition and documentation.
[19] M. A. M. Doggett, "A statistical comparison of three root cause analysis tools", Journal of Industrial Technology, vol. 20, no. 2, 2004.

(Figure 5 branches: personnel — inexperienced personnel; environment — temperature change during calibration, power and other utility variations; calibration — drift of reference value, verification; setup — torque value, mounting; measurement; effect: E_n > 1 at 250 Hz.)
A low-cost table-top robot platform for measurement science education in robotics and artificial intelligence

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-5

Hubert Zangl1,2, Narendiran Anandan1, Ahmed Kafrana1

1 Institute of Smart Systems Technologies, Sensors and Actuators, University of Klagenfurt, Klagenfurt, Austria
2 Silicon Austria Labs, AAU SAL USE Lab, Klagenfurt, Austria

Section: Research paper

Keywords: education; robot perception; introductory lab experiments

Citation: Hubert Zangl, Narendiran Anandan, Ahmed Kafrana, A low-cost table-top robot platform for measurement science education in robotics and artificial intelligence, Acta IMEKO, vol. 12, no. 2, article 23, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-23

Section Editor: Eric Benoit, Université Savoie Mont Blanc, France

Received August 9, 2022; in final form May 2, 2023; published June 2023

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work has received funding from the "European Regional Development Fund" (EFRE) and "REACT-EU" (as reaction of the EU to the COVID-19 pandemic) by the "Kärntner Wirtschaftsförderungs Fonds" (KWF) within the project Pattern-Skin 3520/34263749706.

Corresponding author: Hubert Zangl, e-mail: hubert.zangl@aau.at

1. Introduction

Robots are entering more and more into our daily lives and are attractive to young people interested in science and technology.
Consequently, education at the bachelor and master level must adapt to the needs of this interdisciplinary subject. The importance of the link between different domains is also emphasised in a survey [1] stating that 57 % of faculty believe that students in electrical engineering have deficiencies in kinematics and dynamics. Experiments for measurement science education that address kinematics and dynamics can therefore help to overcome such issues and to cope with the rise of interdisciplinarity in engineering education [2].

Electrical engineering laboratory courses at the bachelor level typically consist of experiments involving basic electronic circuits and the measurement of circuit parameters such as currents, voltages, power and waveforms. Advanced laboratory courses for higher-semester students utilise analog and digital semiconductor devices such as transistors, op-amps and microcontrollers. Experiments involving sensors typically rely on dedicated interface circuitry, and the experimental activity consists in measuring the circuit output to determine a physical quantity. While these experiments are necessary for students to develop their basic knowledge of electrical engineering, they do not provide sufficient exposure to topics such as kinematics and sensor fusion algorithms.

Many low-cost, simple robot platforms are available off the shelf. A drawback of such systems is that the mechanical integration of various sensors requires substantial modifications, often making the whole system fragile and delicate. Customised 3D printing allows the geometry to be adjusted to ideally host all sensor equipment and to obtain a rather robust setup. 3D printing has become an essential tool in robotics; therefore, low-cost printed platforms have been suggested for use in education [3].
Abstract: Robotics and artificial intelligence represent highly interdisciplinary fields, which, in particular at the bachelor level, makes providing a strong fundamental background in the education challenging. With respect to lab exercises in measurement, one approach to provide interdisciplinary hands-on experience is to embed experiments for measurement science in a robotic context. We present a low-cost robot platform that can be used to address several measurement science and sensor topics, as well as other aspects such as machine learning, actuators and mechanics. The 3D printed chassis can be equipped with different sensors for environment perception and can also be adapted to different embedded PC platforms. In order to also introduce concepts of robot simulation and realisation approaches in a hands-on fashion, the table-top robot is also available as a digital twin in the simulation environments Gazebo and CoppeliaSim®, where, e.g., limitations of simulations and the required adaptation of models to consider non-ideal effects of sensors can be studied.

However, most of the related educational programs focus on higher levels of robotics or mechanics; more is needed in measurement science education. Aspects such as sensors and data acquisition, and their practical use, play an important role for engineers and should be addressed in the education of students in the field of robotics and artificial intelligence. Furthermore, it is also important to consider uncertainty and raise awareness of the non-ideal effects present in real systems. Influences due to differences in various acquisition setups and measurement systems can be studied using simulations of the system [4]. Additionally, the relevance of uncertainty for so-called "sim2real" approaches (e.g.
[5]), which use simulations as a basis for the generation of training data for learning algorithms, can thus be emphasised.

The course in which the proposed robot platform is introduced is a laboratory course entitled "Measurement Science, Sensors and Actuators". It will be part of a bachelor study program in Robotics and Artificial Intelligence, starting in fall 2022, and is one of more than seven laboratory courses from which the students can choose freely. These laboratory courses are taken rather early in the study program and should give the students a practical context for their further studies. Consequently, not all of the theory behind the experiments will be fully understood at this early stage of education, and this needs to be considered in the design of the exercises. The learning aims are thus hands-on experience with respect to a wide range of topics:

• torque, angular velocity, and acceleration
• force, linear velocity, and acceleration
• inertia and moment of inertia
• friction
• mechanical (angular, linear) power, electrical power, and conversion efficiency determination
• data acquisition and recording
• sensors for proprioception and exteroception
• multi-body simulation and development of models from the real world
• comparison of results from simulation and real-world experiments
• influence of noise and other non-ideal effects of sensors and measurement systems, uncertainty propagation
• model-based signal processing
• machine learning-based signal processing

The design of the experiments aims to touch on all these aspects in order to give students visualisation and a better understanding of the theory presented in the corresponding lectures. It also covers the often under-represented step of obtaining simplified models from real-world scenarios [6] and understanding the resulting discrepancies.

2. Proposed platform

Figure 1 illustrates a lab scenario.
Many data acquisition systems nowadays are PC based, and we also make use of tools such as LabVIEW® [7] and MATLAB® [8], allowing for fast automation of measurement tasks even when students work with them for the first time. In addition, frameworks such as ROS [9] make it easy to take first steps in robotics, as many modules are readily available. However, since many students work in the same room simultaneously, small robots that can be used on a table are advantageous. Consequently, our design comprises tiny 3D printed wheeled robots. 3D printing not only allows for low-cost realisation of robots, it also makes it easy to adapt the chassis, e.g. to mount sensors such as RGB cameras, depth cameras, ultrasound and time-of-flight sensors, or wheel speed sensors, and to accommodate different actuator concepts. Figure 2 illustrates different realisations.

These robots are controlled by a Raspberry Pi Zero running ROS on Raspberry Pi OS, but other platforms such as BeagleBone®, ODROID, and Jetson Nano™ boards running ROS on Ubuntu can also be used with the hardware. The robot body frame is constructed using 3D printing, depending on the desired mounting configuration of the sensors and actuators. A block diagram of a typical robot with selected hardware components is shown in Figure 3. The robot's single-board computer, which interfaces with the onboard sensors and actuators, is housed within the 3D printed body. The measurement data is published on ROS for further processing. The robot can be connected wirelessly to a PC but can also execute programs for various tasks on the embedded PC itself. It is powered by commonly available low-cost portable power banks that can be easily swapped and recharged. Optionally, the robots can be fitted with optical tracking markers for use in motion tracking facilities. The proposed platform is a fully functional wheeled robot with various measurement and actuation capabilities.
Figure 1. Illustration of the table-top robot as used in a classroom.

Figure 2. Photographs of different 3D printed robots. In contrast to off-the-shelf robots, the geometry can quickly be adapted to ideally host sensors and actuators. While the left robot can only manipulate objects by shifting them, the robot on the right also includes a simple soft-robotics Fin Ray type gripper [10], which can also be equipped with tactile sensors.

It thus provides students complete exposure to a simple robot and all its internal components and working principles; the students are then able to adapt and modify the existing platform for different requirements. The models for 3D printing, the boards, and the software will be provided as open source, so students can continue further experiments by easily realising their own robots at low cost.

3. Initial experiments with the robots

Since one of the ideas is to enhance the understanding of kinematics and dynamics, the first experiments address measurements related to the motors of the robot. On a simple test bench, students measure the current and voltage applied to the mounted DC motor. The DC motor measurement setup incorporates a disk brake and a piezoresistive sensor that measures the force on the brake. By measuring the force on the support of the brake, the torque can be estimated. Additionally, the angular velocity is measured using magnetoresistive sensors that count marks on the shaft. Figure 4 shows a picture of the proposed motor characterisation workbench. Consequently, students become familiar with the measurement of current, voltage, and power in the electrical domain as well as speed, force, torque, and power in the mechanical domain. Furthermore, this allows them to determine the efficiency and, consequently, the thermal losses and heating of the motors.
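The efficiency determination combines both domains: electrical input power P_el = V·I and mechanical output power P_mech = τ·ω. A minimal sketch (the numbers are illustrative, not measured values from the workbench):

```python
import math

def motor_efficiency(voltage, current, torque, rpm):
    """Efficiency of a DC motor from electrical input power
    P_el = V * I and mechanical output power P_mech = tau * omega."""
    p_el = voltage * current
    omega = rpm * 2 * math.pi / 60.0   # shaft speed in rad/s
    p_mech = torque * omega
    return p_mech / p_el

# hypothetical operating point: 6 V, 0.5 A, 0.01 N*m at 1500 rpm
eta = motor_efficiency(6.0, 0.5, 0.01, 1500)
print(round(eta, 3))  # 0.524
```

The difference P_el − P_mech is the thermal loss dissipated in the motor, which the students can relate to the observed heating.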
the magnetoresistive sensors can be used for odometry, and the drift due to inaccuracies when integrating the velocity can be observed in a follow-up experiment.
4. introduction to robot simulation
many different environments for robot simulation are available, including the open source framework gazebo [11] as well as coppeliasim® [12]. as the actual time to work in the lab is rather short, a system that can be used through an easy to learn graphical user interface is preferred. coppeliasim® provides such an interface and was therefore chosen for the introductory course. additionally, it allows selecting different physics simulation engines such as bullet [13], newton dynamics [14], open dynamics engine [15], and vortex [16]. the initial simulation experiments thus aim to provide an understanding of the capabilities of current robot simulation environments, the required parameters, and the limitations of the simulation approaches. this can be illustrated with the simple scenario of a bouncing ball. for this, a sphere can be added using the gui of coppeliasim® and placed at a height of one meter above the ground. when the simulation is started, the ball falls as expected but sticks to the surface without bouncing. in the following, the object parameters that control the simulation of the physical contact are discussed for different simulators. the students should then find appropriate parameters such that the ball shows a realistic bouncing behavior, including the attenuation over time. this should illustrate that simulations are only approximations of reality and that results obtained with different simulation engines can differ considerably for the same scenario. in order to obtain realistic behavior, parameter tuning is required, and the values are not necessarily directly obtainable from the material properties of the objects. this should raise the awareness that simulation results in general need to be validated.
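the expected bouncing behavior can be checked against a simple analytic model before tuning the simulator: with a coefficient of restitution e, each rebound apex is e² times the previous one, which also produces the attenuation over time. a minimal sketch; the restitution value is an assumed illustration, not a coppeliasim® parameter:

```python
import math

g = 9.81   # gravitational acceleration in m/s^2
h0 = 1.0   # drop height in m, as in the exercise
e = 0.8    # coefficient of restitution (assumed material value)

v_impact = math.sqrt(2 * g * h0)  # speed when first hitting the ground

heights = [h0]
for _ in range(5):
    # rebound speed is e*v, so the next apex is (e*v)^2 / (2g) = e^2 * h
    heights.append(e**2 * heights[-1])

print(f"impact speed: {v_impact:.2f} m/s")
print([round(h, 4) for h in heights])
```

students can compare such a geometric decay of apex heights against what the tuned physics engine actually produces for the same nominal parameters.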
as a next step, the simulation of the simple robot is set up. a step file of the 3d printed chassis is provided, and students need to add joints, motors, and wheels. in the following simulations, a torque is assigned to the motors, and the behavior of the robot is analysed by the students. they should verify whether the linear acceleration of the robot correctly corresponds to the assigned torque.
5. considering measurement uncertainty in robot simulation
in order to achieve realistic simulations that allow the development of signal processing algorithms or simulation-based machine learning approaches, realistic sensor models considering the non-ideal characteristics of sensors are also important. typically, robot simulation tools such as coppeliasim® provide means for the simulation of sensors such as accelerometers and gyroscopes, but the non-ideal characteristics of these sensors are usually not included. consequently, they need to be added by the user. figure 5 illustrates such a simple model as used in coppeliasim®. based on datasheets of acceleration sensors, students should include effects such as deviations in offset, sensitivity, noise, crosstalk, and potentially nonlinearity, such that the simulation model generates realistic sensor data.
figure 3. hardware components in the proposed robotic platform. depending on the requirements, additional components can be included.
figure 4. the dc motor has a spinning wheel attached to its shaft. this spinning wheel has holes near its perimeter. the angular velocity of the wheel can be obtained by processing the output signal of the magnetoresistive sensor s1. a brake wheel that can spin freely is mounted next to the spinning wheel, and the applied force can be adjusted and measured. a small extrusion of the brake wheel applies force on the force sensor, whose output can be used to measure the torque.
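the consistency check between assigned motor torque and resulting linear acceleration can be estimated analytically: with no wheel slip and negligible wheel inertia, the traction force per driven wheel is τ/(D/2). a sketch with hypothetical robot parameters; torque, wheel diameter, and mass are assumptions, not values from the paper:

```python
def expected_acceleration(torque, wheel_diameter, robot_mass, n_driven_wheels=2):
    """linear acceleration of a wheeled robot, assuming no wheel slip and
    negligible rotational inertia of the wheels (illustrative model only)."""
    traction_force = n_driven_wheels * torque / (wheel_diameter / 2.0)
    return traction_force / robot_mass

# hypothetical parameters for a small 3d printed robot
a = expected_acceleration(torque=0.01, wheel_diameter=0.06, robot_mass=0.5)
print(f"expected linear acceleration: {a:.2f} m/s^2")
```

students can compare this analytic value against the acceleration reported by the physics engine and discuss where the two diverge (slip, wheel inertia, friction models).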
a linear model could look like

\[
\begin{bmatrix} Y_1(t) \\ Y_2(t) \\ Y_3(t) \end{bmatrix}
=
\begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix}
\begin{bmatrix} X_1(t) \\ X_2(t) \\ X_3(t) \end{bmatrix}
+
\begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix}
+
\begin{bmatrix} N_1(t) \\ N_2(t) \\ N_3(t) \end{bmatrix},
\quad (1)
\]

where \(Y_i\) are the sensor outputs, \(X_i\) the sensor inputs, \(S_{ij}\) the (cross-)sensitivities, \(O_i\) the offset values, and \(N_i\) the noise contributions. for each sensor realization, the offsets and sensitivities will be different, and the noise contributions will vary for each sample. the students should also include an explanation of how the parameters of the random variables are determined from the datasheet of an actual sensor. the model should be developed for one of:
• 3 axis acceleration sensor
• 3 axis gyroscope
• pressure sensor.
the sensor model should be tested on a simulated robot, e.g. in order to assess the feasibility of determining the angle of a robot joint using two acceleration sensors.
6. sensor fusion algorithms
an important part of measurement science, especially in the robotic context, is the application of algorithms that improve the estimate of the measurands in the presence of noise and uncertainty. a popular method used for robotic tracking applications is the kalman filter [17]. the proposed wheeled robot is a suitable platform for students to learn about the estimation of the system state from sensor readings. even though the full theory behind the kalman filter will be addressed later in the curriculum, the students should be able to implement the equations and to perform first analyses in order to get an idea of the benefits of signal processing methods in measurement science. in a simple exercise, the students must estimate and track the velocity and acceleration (in one dimension) of the robot using the kalman filter.
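the linear error model of equation (1) is straightforward to prototype outside the simulator. in the sketch below, the sensitivity matrix, offsets, and noise level are illustrative placeholders for numbers that would be derived from an actual datasheet:

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameters; in the exercise they are taken from a datasheet
S = np.array([[1.02, 0.01, 0.00],    # sensitivities with small cross-axis terms
              [0.00, 0.98, 0.02],
              [0.01, 0.00, 1.01]])
O = np.array([0.05, -0.03, 0.10])    # per-axis offsets in m/s^2
noise_std = 0.02                     # per-sample noise sigma (assumed)

def sensor_output(x):
    """apply the linear error model Y = S X + O + N of equation (1)."""
    return S @ x + O + rng.normal(0.0, noise_std, 3)

# ideal input: gravity on the z axis only
x_true = np.array([0.0, 0.0, 9.81])
print(sensor_output(x_true))
```

drawing S and O themselves from random distributions, once per simulated sensor instance, then reproduces the device-to-device variation described in the text.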
the robot will move in a straight line on a flat, level surface, and the output of the angular velocity sensor (\(\omega_\mathrm{m}\)) and the acceleration (\(a_\mathrm{m}\)) from an inertial measurement unit (imu) are available as measurements at regular time intervals (\(\mathrm{d}t\)). the diameter of the robot's wheel (\(D\)) is a known constant. from the measured angular velocity and using the knowledge of the wheel diameter, the measured linear velocity \(v_\mathrm{m}\) of the robot is obtained as

\[ v_\mathrm{m} = \pi D \omega_\mathrm{m} \,. \quad (2) \]

the state of the system to be estimated and tracked is

\[ x = \begin{bmatrix} v \\ a \end{bmatrix}. \quad (3) \]

in this setup, the system has no control inputs; therefore, the kalman filter state prediction equations are

\[ \hat{x}_{n+1} = F \hat{x}_n \,, \quad \text{where } F = \begin{bmatrix} 1 & \mathrm{d}t \\ 0 & 1 \end{bmatrix} \quad (4) \]

and

\[ P_{n+1} = F P_n F^\mathrm{T} + Q_n \,, \quad (5) \]

where \(P\) is the covariance matrix that represents the uncertainty of the estimate and \(Q\) is the process noise. the measurements are

\[ z = \begin{bmatrix} v_\mathrm{m} \\ a_\mathrm{m} \end{bmatrix}. \quad (6) \]

the state update and the uncertainty update equations are given by

\[ \hat{x}_n = \hat{x}_{n-1} + K_n (z_n - H \hat{x}_{n-1}) \quad (7) \]

and

\[ P_n = (I - K_n H) P_{n-1} \,, \quad (8) \]

where \(H\) is the identity matrix and \(K_n\) is the kalman gain

\[ K_n = P_{n-1} H^\mathrm{T} \left[ H P_{n-1} H^\mathrm{T} + R_n \right]^{-1} \,, \quad (9) \]

where \(R\) is the measurement uncertainty matrix. an advanced version of this experiment tracks the full motion (linear and angular position, velocity, and acceleration) of the robot; additionally, positional measurements from the range sensor can be included, and motor voltage and current measurements can be used as control inputs to the system state. the estimated state can then be compared to the ground truth obtained from optical motion tracking systems.
7. higher level aspects
the same robots can also be used with optical time of flight sensors or time of flight cameras. with these sensors, it is possible to develop obstacle avoidance approaches based on reinforcement learning in simulation. here, the aim is to maneuver the robot towards a certain target position while avoiding collisions with obstacles. again, the focus is on the influence of the quality of the sensor signals on the training result.
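returning to the sensor fusion exercise of section 6, the filter defined by equations (2)-(9) can be implemented in a few lines. the sketch below feeds it synthetic constant-velocity measurements; the sample interval, wheel diameter, and the Q and R matrices are assumed illustrative values:

```python
import numpy as np

dt = 0.1   # measurement interval in s (assumed)
D = 0.06   # wheel diameter in m (assumed)

F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition, eq. (4)
H = np.eye(2)                          # both states are measured directly
Q = 1e-4 * np.eye(2)                   # process noise (assumed)
R = np.diag([0.05**2, 0.2**2])         # measurement uncertainty (assumed)

x = np.zeros(2)   # state [v, a]
P = np.eye(2)     # initial uncertainty

def kalman_step(x, P, omega_m, a_m):
    # predict, eqs. (4)-(5)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # measurement vector, eqs. (2) and (6): v_m from the wheel, a_m from the imu
    z = np.array([np.pi * D * omega_m, a_m])
    # update, eqs. (7)-(9)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
for _ in range(200):  # robot rolling at constant speed, zero acceleration
    omega_m = 1.0 + rng.normal(0.0, 0.05)
    a_m = rng.normal(0.0, 0.2)
    x, P = kalman_step(x, P, omega_m, a_m)

print(f"estimated v = {x[0]:.3f} m/s (true {np.pi * D:.3f}), a = {x[1]:.3f} m/s^2")
```

the sketch uses the conventional predict-then-update ordering; students can then compare the filtered estimates against raw measurements and against ground truth from the motion tracking system.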
we use one or more 8 × 8 multizone time of flight ranging sensors [18]. figure 6 shows a training setup that students can use.
figure 5. simple acceleration sensor model in coppeliasim®. it comprises an ideal force sensor and a seismic mass; the accelerations are determined by dividing the forces obtained from the physics simulation engine by the mass.
figure 6. reinforcement learning with gazebo and time of flight sensors. the aim is that the robot navigates to the red rectangle while avoiding collisions with the obstacles. the simulated sensor signals are illustrated in the top left image, where the distance is encoded in a gray scale image.
8. summary
this paper summarizes an approach to integrating robots into lab exercises on measurement science. the simple, low-cost tabletop robot with a 3d printed chassis can be equipped with various sensors and used for a variety of experiments, starting from current and voltage measurement on a dc motor, through measurement of force, speed, and mechanical power, to conversion efficiency. non-ideal effects of inertial sensors as used in robots can be simulated using simulation frameworks, and the consequences can also be studied in the experiments. the sensors and measurement systems can furthermore be studied in signal processing and sensor fusion approaches as well as in machine learning, allowing students to examine the influence of the signal quality on the training results.
acknowledgement
this work has received funding from the "european regional development fund" (efre) and "react-eu" (as reaction of the eu to the covid-19 pandemic) by the "kärntner wirtschaftsförderungs-fonds" (kwf) within the project pattern-skin 3520/34263749706.
references
[1] j. m. esposito, the state of robotics education: proposed goals for positively transforming robotics education at postsecondary institutions, ieee robot. automat. mag., vol. 24, no. 3, sep. 2017, pp.
157–164. doi: 10.1109/mra.2016.2636375 [2] m. roy, a. roy, the rise of interdisciplinarity in engineering education in the era of industry 4.0: implications for management practice, ieee eng. manag. rev., vol. 49, no. 3, sep. 2021, pp. 56–70. doi: 10.1109/emr.2021.3095426 [3] l. armesto, p. fuentes-durá, d. perry, low-cost printable robots in education, j intell robot syst, vol. 81, no. 1, jan. 2016, pp. 5–24. doi: 10.1007/s10846-015-0199-x [4] t. mitterer, l. faller, h. müller, and h. zangl, a rocket experiment for measurement science education, journal of physics: conference series, 2018, vol. 1065, no. 2, p. 022005. doi: 10.1088/1742-6596/1065/2/022005 [5] c. doersch, a. zisserman, sim2real transfer learning for 3d human pose estimation: motion to the rescue, advances in neural information processing systems, 2019, vol. 32, 14 pp. doi: 10.48550/arxiv.1907.02499 [6] l. l. bucciarelli, s. kuhn, engineering education and engineering practice: improving the fit, between craft and science: technical work in the united states, edited by stephen r. barley and julian e. orr, ithaca, ny: cornell university press, 1997, pp. 210-229. doi: 10.7591/9781501720888-012 [7] national instruments, labview. online [accessed 8 june 2023] https://www.ni.com/en-us/shop/labview.html [8] mathworks, matlab. online [accessed 8 june 2023] https://www.mathworks.com/products/matlab.html [9] m. quigley, k. conley, b. p. gerkey, j. faust, t. foote, j. leibs, r. wheeler, a. y. ng, ros: an open-source robot operating system, in icra workshop on open source software, 2009, vol. 3, no. 3.2, p. 5. [10] w. crooks, g. vukasin, m. o’sullivan, w. messner, c. rogers, fin ray® effect inspired soft robotic gripper: from the robosoft grand challenge toward optimization, front. robot. ai, vol. 3, nov. 2016. doi: 10.3389/frobt.2016.00070 [11] open robotics, gazebo. 
online [accessed 7 august 2022] https://gazebosim.org/home [12] coppelia robotics, robot simulator coppeliasim: create, compose, simulate, any robot. online [accessed 7 august 2022] https://www.coppeliarobotics.com [13] pybullet.org, bullet real-time physics simulation. online [accessed 12 january 2023] https://pybullet.org/wordpress/ [14] julio jerez, alain suero and various other contributors, newton dynamics. online [accessed 12 january 2023] http://newtondynamics.com/forum/newton.php [15] russ smith, open dynamics engine. online [accessed 12 january 2023] http://www.ode.org/ [16] cm labs, vortex studio. online [accessed 12 january 2023] https://www.cm-labs.com/vortex-studio/ [17] r. e. kalman, a new approach to linear filtering and prediction problems, journal of basic engineering (asme), vol. 82, mar. 1960, pp. 35–45. doi: 10.1115/1.3662552 [18] stmicroelectronics, vl53l5cx time-of-flight 8x8 multi-zone ranging sensor with wide field of view. online [accessed 31 march 2022] https://www.st.com/content/st_com/en/campaigns/vl53l5cx-time-of-flight-sensor-multizone.html
a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-5
acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1
vijay souri maddila1, katady sai shirish1, m. v. s. ramprasad2
1 department of computer science engineering, gitam (deemed to be university), visakhapatnam-530045, andhra pradesh, india
2 department of electrical, electronics and communication engineering (eece), gitam (deemed to be university), visakhapatnam-530045, andhra pradesh, india
section: research paper
keywords: volcanic eruption; machine learning; measurement; seismic data; sensing
citation: vijay souri maddila, katady sai shirish, m. v. s. ramprasad, a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data, acta imeko, vol. 11, no.
1, article 24, march 2022, identifier: imeko-acta-11 (2022)-01-24
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form february 19, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: vijay souri maddila, e-mail: vijaysouri.maddila123@gmail.com
1. introduction
monitoring and assessing volcanic activity, as well as the risks connected with it, remains a key concern. according to the strategy offered by the united nations, it is evident that significant advancements in effective methods, inventions, and instruments are necessary for society to anticipate such problems [1]. researchers all over the globe are continually working to improve methods for predicting volcanic eruptions and their effects [2]. the recorded eruption of the volcán de fuego volcano, rated 3 on the volcanic explosivity index (vei 3) scale, killed around 300 people. volcanic eruptions have been a hazard to all living organisms, including humans, since the beginning of time, and owing to their geographical positions, numerous cities and towns are still at high risk from volcanic eruptions [3]. seismic sensors can be used to monitor and measure the seismic activity that occurs when magma interacts with its surroundings. even a small functional change in the seismic measurements allows the likelihood of an eruption to be forecast. long-period, tremor, explosion, volcano-tectonic, and hybrid volcano-seismic patterns are the most common [3]. the existence of seismic activity does not always result in eruption; it just increases its likelihood. like seismic activity, eruptions are inherently probabilistic [4].
abstract: the circumstances and factors which determine volcanic explosive ejection are unknown, and currently there is no effective way to determine the end of a volcanic explosive ejection. at present, the end of an eruption is determined either by generalized standards or by measurements unique to the volcano. we investigate the use of supervised machine learning techniques such as support vector machines (svm), random forests (rf), logistic regression (lr), and gaussian process classifiers (gpc), and create a decisiveness index d to assess the uniformity of the groups provided by these machine learning models. we find that the measured end-date obtained by seismic information categorization is two to four months later than the end-dates determined by the earliest instance of visible eruption for both volcanic systems. likewise, measurement systems and measurement technology become key elements of the seismic data analysis. the findings are consistent across models and correspond to previous, broad definitions of ejection. the obtained classifications demonstrate a more significant relationship between eruptive movement and visual activity than database records of ejection start and completion timings. our research presents a new measurement-based categorization technique for studying volcanic eruptions, which provides a reliable tool for determining whether or not an emission has stopped without the need for visual confirmation.
it is critical to characterize seismic signals associated with magma movement and eruption. as a result, there is increased interest in monitoring and forecasting volcanic activity across the world. a monitoring context can determine the end of an eruption in two ways: first, if there has been no eruptive sign for around three months [5], and second, by an increase or reduction in seismic amplitude [6].
volcanic monitoring systems must make various activity and reaction decisions on varying timescales. finding the threshold value that indicates high volcanic activity is a key question that pertains to the entire volcanic process from beginning to conclusion. creating appropriate models and techniques to process the seismic activity leads to a better understanding of large-scale volcanic processes [7]. machine learning (ml) is a branch of artificial intelligence that focuses on using data and algorithms to imitate the way humans learn. in data mining initiatives, algorithms are trained to build classifications or predictions, revealing significant insights. ml is a fundamental component of the rapidly expanding field of data science, and as big data continues to grow and evolve, so will the market interest in ml [8]. volcanic frameworks have some relevant similarities with these frameworks: they might be described as a "high-trustworthiness" system, in which failure (i.e., eruption) is unusual rather than routine activity, and the number of failure mechanisms is obscure or insufficiently characterized [9]. the use of ml techniques in seismology is a fairly new discipline. supervised classification algorithms were previously applied to volcano-seismic data, with an emphasis on detecting and distinguishing seismographic material available from unprocessed harmonic data [10]. the authors of [11] utilized deep learning to detect ground deformation in sentinel-1 data, and article [12] uses logistic regression to predict volcanic eruptions from so2 measurements obtained with the ozone monitoring instrument. in this study, the authors predict the timing of volcanic eruptions using ml techniques such as random forest, svm, logistic regression, and gaussian process classifiers. two volcano datasets were employed for this work, including the kaggle volcano eruption dataset.
2. models implemented
2.1.
support vector machine (svm)
svms (figure 1) are supervised ml models that analyse data for prediction purposes. the svm algorithm's objective is to find the separating hyperplane with the largest margin, i.e. the maximum separation between the variables of the two categories. increasing the margin provides a safety buffer, allowing future data points to be classified with more confidence [13].
2.2. random forest (rf)
a random forest (figure 2) is an ml method for tackling regression and classification tasks. as the number of trees increases, so does the accuracy of the result. the 'forest' of the rf approach is developed using bagging or boosting sampling. bagging is an ensemble meta-algorithm that enhances the performance of ml techniques. for classification problems, the random forest output is the class picked by the majority of trees; for regression tasks, the mean or average forecast of the individual trees is returned [13].
2.3. logistic regression
logistic regression (figure 3) is a statistical model that, in its most fundamental form, models a binary dependent variable using a logistic function. this may be broadened to describe a number of classes, such as determining whether an image contains a cat, dog, lion, or other animal. each detected object in the image would be assigned a probability ranging from 0 to 1, with the probabilities summing to one [14].
2.4. gaussian process classifier (gpc)
the distribution of a gaussian process (figure 4) is the joint distribution of all those (infinitely many) random variables; every finite linear combination of these random variables has a multivariate normal distribution.
gaussian processes are useful in statistical modelling because they inherit properties from the normal distribution.
figure 1. support vector machine (svm). figure 2. random forest. figure 3. logistic regression.
2.5. implementation
for a day to be classified as eruptive, a rolling arithmetic mean of the categorization is utilized as a quantized screening criterion. every particular day in the time series is categorized separately from the others. we chose a 7-day eruptive categorization, requiring 7 consecutive days of eruption observation, to highlight the large-scale variations in classification. the categorization of data as eruptive is more conservative when this filter is applied to the model output than when the results are left unfiltered [15]. periods of training with both non-eruptive and eruptive data are chosen with care. the classifier is constructed on a fraction of the test dataset and afterwards evaluated on the whole. after the training has been completed, it is validated using substantial amounts of new data (figure 5). we chose time periods that did not intersect with the begin and finish dates of the global volcanism program (gvp), because we intended to independently regulate the timeframe of the transition between active volcanic and quiescent activity. feature extraction is the process of finding variables that will be used as inputs to the ml models [16]-[20]. figure 6 depicts the process of fetching features from raw seismic information. the gathered data sets are fed into the ml algorithms. raw waveform data is used to detect events. we extract characteristics such as peak amplitudes and band ratios from each event waveform. then, from all of the waveforms in a particular day, we compute statistics such as the mean and variance.
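the per-day feature summarization and the 7-day screening filter described above can be sketched as follows; the synthetic event amplitudes and the exact form of the thresholding are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def daily_features(event_features):
    """summarize per-event features (e.g. peak amplitudes, band ratios)
    recorded during one day into a daily feature vector."""
    x = np.asarray(event_features, dtype=float)
    return {"mean": x.mean(), "var": x.var(), "n_events": x.size}

def filter_eruptive_days(daily_labels, window=7):
    """flag a day as eruptive only after 7 consecutive eruptive
    classifications, i.e. the rolling-mean screening described in the text."""
    labels = np.asarray(daily_labels, dtype=float)
    flagged = np.zeros(labels.size, dtype=bool)
    for i in range(window - 1, labels.size):
        flagged[i] = labels[i - window + 1 : i + 1].mean() >= 1.0
    return flagged

# one day with ten synthetic event amplitudes
print(daily_features(rng.lognormal(0.0, 0.5, 10)))

# isolated detections are suppressed; only a sustained 7-day run is flagged
daily = [0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0]
print(filter_eruptive_days(daily).astype(int).tolist())
```

this illustrates why the filtered output is more conservative: short bursts of per-day eruptive classifications never survive the 7-day window.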
the resultant time series are sent into an ml classifier as input.
3. results
we independently constructed four unique categorization models for each lava region, with every modelling approach trained and validated on each lava flow sequentially. training a model on a variety of earthquake recordings can aid the analysis, resulting in a general classification model. in any case, a broadly applicable model would require datasets from a greater variety of volcanic settings to guarantee that the non-eruptive as well as eruptive distributions are well represented by the ml models; the investigation could be extended by training a model on several distinct seismic datasets, which would yield a fairly general classification model. the first row in the dataset screenshot (figure 7) provides the dataset column names, while the subsequent rows contain the values. the authors used the aforementioned dataset to train all ml algorithms before adding test data to the training sample in order to gauge classification performance. the authors of this research used 80 % of the dataset records to train the ml algorithms and 20 % of the dataset records to determine the classification accuracy.
figure 4. gaussian process classifier. figure 5. the training and testing framework for supervised multi-class classifying models. figure 6. the architecture for fetching characteristics from raw seismic data. figure 7. dataset used for model implementations.
the dataset is imported into a
developed application that displays records from the dataset; string values need to be replaced with numeric values and missing values with 0, therefore 'pre-process dataset feature extraction' is used to turn the dataset into a normalized format. once all records have been converted to numeric values, we have a total of 23412 records, with 18729 being used to train the ml algorithms and 4683 being used to test them. now that we have both train and test data, we can run the algorithms independently on the training dataset using the proposed application. after training, the svm model achieves 54 % accuracy, while logistic regression, random forest, and the gaussian process classifier achieve 55 %, 99.74 %, and 55 % accuracy, respectively. the x-axis in the graph (figure 8) indicates the algorithm name, while the y-axis reflects the accuracy of those algorithms. based on the graph, we can infer that random forest produces superior results. we then submit a test file, and the program detects eruption activity based on the time data that was provided; the volcano test data and the expected outcome can be viewed as 'no eruption identified' or 'eruption detected' following the square bracket. we can see that when the classifier sees a magnitude value greater than 6.5 (figure 9), it classifies that record time as 'eruption activity identified.'
4. conclusions
ml computations on seismic time series can precisely categorize general patterns of both eruptive and non-eruptive behaviour. this is the first study to utilize ml techniques to categorize typical seismic situations as eruptive or non-eruptive using seismic data alone. we develop a decisiveness index d to assess eruptive state classification based on grouping consistency that is comparable across datasets.
in terms of eruptive classification, our models demonstrate good agreement with visible evidence of eruption, such as debris discharges. the end date of the eruption is determined to be 60–120 days after the date stated in gvp. in the absence of distinct visual observations, a mix of eruptive and non-eruptive data can be utilized in conjunction with vibration signals to estimate when the emission will stop. feature-importance methods found minimal agreement among the major seismic inputs used as model data sources. more study is needed, utilizing a greater number and diversity of datasets, to determine whether these fundamental traits are consistent across earthquakes, or even across lava flows with roughly identical ejection schemes or structural settings.
figure 8. accuracy chart for trained data with respect to algorithms. figure 9. eruption activity prediction result.
references
[1] m. malfante, m. dalla mura, j. p. métaxian, j. i. mars, o. macedo, a. inza, machine learning for volcano-seismic signals: challenges and perspectives, ieee signal processing magazine, 35(2) (2018), pp. 20-30. doi: 10.1109/msp.2017.2779166 [2] s. surekha, k. p. satamraju, s. s. mirza, a. lay-ekuakille, a collateral sensor data sharing framework for decentralized healthcare systems, ieee sensors journal, 21(24) (2021), pp. 27848-27857. doi: 10.1109/jsen.2021.3125529 [3] v. gavini, j. lakshmi, a robust ct scan application for prior stage liver disorder prediction with googlenet deep learning technique, arpn journal of engineering and applied sciences, 16 (18) (2021), pp. 1850-1857. [4] j. a. power, s. d. stihler, b. a. chouet, m. m. haney, d. m. ketner, seismic observations of redoubt volcano, alaska—1989–2010 and a conceptual model of the redoubt magmatic system, journal of volcanology and geothermal research, 259 (2013), pp. 31-44. doi: 10.1016/j.jvolgeores.2012.09.014 [5] s.
h. ahammad, m. z. u. rahman, l. k. rao, a. sulthana, n. gupta, a. lay-ekuakille, a multi-level sensor based spinal cord disorder classification model for patient wellness and remote monitoring, ieee sensors journal, 21(13) (2021), pp. 14253-14262. doi: 10.1109/jsen.2020.3012578 [6] v. gavini, g. r. jothi lakshmi, an efficient machine learning methodology for liver computerized tomography image analysis, international journal of engineering trends and technology, 69 (7) (2021), pp. 80-85. doi: 10.14445/22315381/ijett-v69i7p212 [7] national academies of sciences, engineering, and medicine, volcanic eruptions and their repose, unrest, precursors, and timing, national academies press, 2017. doi: 10.17226/24650 [8] ibm, what is machine learning?. online [accessed 17 march 2022] https://www.ibm.com/in-en/cloud/learn/machine-learning [9] a. maggi, v. ferrazzini, c. hibert, f. beauducel, p. boissier, a. amemoutou, implementation of a multistation approach for automated event classification at piton de la fournaise volcano, seismological research letters, 88(3) (2017), pp. 878-891. doi: 10.1785/0220160189 [10] m. malfante, m. dalla mura, j. i. mars, j. p. métaxian, o. macedo, a. inza, automatic classification of volcano seismic signatures, journal of geophysical research: solid earth, 123(12) (2018), pp. 10-645. doi: 10.1029/2018jb015470 [11] n. anantrasirichai, j. biggs, f. albino, p. hill, d. bull, application of machine learning to classification of volcanic deformation in routinely generated insar data, journal of geophysical research: solid earth, 123(8) (2018), pp. 6592-6606. doi: 10.1029/2018jb015911 [12] v. j. flower, t. oommen, s. a. carn, improving global detection of volcanic eruptions using the ozone monitoring instrument (omi), atmospheric measurement techniques, 9(11) (2016), pp. 5487-5498. doi: 10.5194/amt-9-5487-2016 [13] a. tarannum, l. k. rao, t. srinivasulu, a.
Lay-Ekuakille, An efficient multi-modal biometric sensing and authentication framework for distributed applications, IEEE Sensors Journal, 20(24) (2020), pp. 15014-15025. doi: 10.1109/jsen.2020.3012536
[14] A. Tarannum, T. Srinivasulu, An efficient multi-mode three phase biometric data security framework for cloud computing-based servers, International Journal of Engineering Trends and Technology, 68(9) (2020), pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[15] G. F. Manley, D. M. Pyle, T. A. Mather, M. Rodgers, D. A. Clifton, B. G. Stokell, G. Thompson, J. M. Londoño, D. C. Roman, Understanding the timing of eruption end using a machine learning approach to classification of seismic time series, Journal of Volcanology and Geothermal Research, 401 (2020), art. 106917. doi: 10.1016/j.jvolgeores.2020.106917
[16] H. Ingerslev, S. Andresen, J. H. Winther, Digital signal processing functions for ultra-low frequency calibrations, Acta IMEKO, 9(5) (2020), pp. 374-378. doi: 10.21014/acta_imeko.v9i5.1004
[17] L. Ciani, A. Bartolini, G. Guidi, G. Patrizi, A hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, Acta IMEKO, 9(1) (2020), pp. 3-9. doi: 10.21014/acta_imeko.v9i1.732
[18] M. Prist, A. Monteriù, E. Pallotta, P. Cicconi, A. Freddi, F. Giuggioloni, E. Caizer, C. Verdini, S. Longhi, Cyber-physical manufacturing systems: an architecture for sensors integration, production line simulation and cloud services, Acta IMEKO, 9(4) (2020), article 6. doi: 10.21014/acta_imeko.v9i4.731
[19] J. Luo, X. Kong, C. Hu, H. Li, Key performance-indicators-related fault subspace extraction for the reconstruction-based fault diagnosis, Measurement, 186 (2021), pp. 1-12.
doi: 10.1016/j.measurement.2021.110119

Experimental study on SAR reduction from cell phones
Acta IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, 147-152
Marius-Vasile Ursachianu1, Ovidiu Bejenaru1, Catalin Lazarescu1, Alexandru Salceanu2
1 Romanian National Authority for Management and Regulation in Communications (ANCOM), Romania
2 "Gheorghe Asachi" Technical University of Iasi, Romania
Section: Research Paper
Keywords: absorbed incident energy; human exposure measurement; near field exposure; CST simulation; electromagnetic field dosimetry; SR EN 62209-1
Citation: Marius-Vasile Ursachianu, Ovidiu Bejenaru, Catalin Lazarescu, Alexandru Salceanu, Experimental study on SAR reduction from cell phones, Acta IMEKO, vol. 10, no.
2, article 21, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-21
Section Editor: Ciro Spataro, University of Palermo, Italy
Received January 24, 2021; in final form April 29, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Alexandru Salceanu, e-mail: asalcean@tuiasi.ro

1. Introduction
The possible effects of ambient electromagnetic fields on human beings are a general source of concern and legitimate questions. For this reason, scientific research in the field has been strongly supported by diverse national and international bodies and organizations, materialized in the adoption of recommendations, the setting of limits and the development of guidelines. This multitude and diversity produced results which, although fundamentally convergent in nature, were quite different in form. This is the main motivation for the current trend towards harmonization, which benefits authorities, industry and consumers. Essentially, the study of human exposure has two important parts: first, setting limits, and second, establishing the correct measurements to verify compliance with the previously accepted limits. For the first part, based on extremely in-depth interdisciplinary research, most countries around the world have given a positive response to the rational scientific underpinning of the ICNIRP limits. These reference levels have also been adopted by the International Telecommunication Union (the United Nations specialized agency) and the World Health Organization. These limits are approximately in line with the North American FCC guidelines developed by the Federal Communications Commission, Office of Engineering & Technology. In terms of measurement techniques and practice there is even greater diversity.
For example, the assessment of the specific absorption rate resulting from human exposure to radio-frequency fields emitted by hand-held wireless communication devices is an important concern of the profile organizations, especially since the approaches are permanently evolving. For example, the well-known IEEE Standard 1528, first issued in 2003, was significantly updated and completed in 2013 and 2020, respectively. Its quasi-universal character has been enhanced through the adoption by the International Electrotechnical Commission of the two measurement standards in place, IEC 62209 (Part 1, also applied here, and Part 2). Concerning the penetration of electromagnetic radiation into the human body at different radio frequencies, the ICNIRP limits for the specific absorption rate (SAR) have been adopted: 0.08 W/kg on average for the whole human body and, respectively,

Abstract
The problem of human exposure to different types of electromagnetic field sources is a challenging one and should be considered an up-to-date issue due to one major trend: the increasingly intensive penetration of various wireless communication technologies into virtually all the places and times that make up our daily lives. This paper presents an experimental study focused on measurements of three types of mobile phones, belonging to different generations, operating at the two characteristic frequencies of the GSM bandwidth. We used a SATIMO-COMOSAR dosimetry evaluation system, provided by the LICETER laboratory of ANCOM Romania. The determined values of the incident energy absorbed by the tissue of a mannequin (phantom) model of the human head, which is part of the dosimetry system, have also been compared with those obtained when the mobile phones are protected by multilayer cases, aiming to study their possible limiting effect.
The influence that "touch" or "tilt" positions might have on the incident energy absorbed by the human head model has also been investigated. The comparative processing of the obtained results allowed the formulation of recommendations on reducing the exposure to electromagnetic radiation associated with the use of mobile phones. This paper is an extended and improved version of the original contribution to the IMEKO TC 4 2020 virtual conference.

2 W/kg for SAR located in the head or trunk area (general public exposure). These SAR values are averaged over a 6-minute exposure time and 10 g of tissue [1], [2], [3], [4]. The main objective of this paper is to determine and compare the SAR values for three mobile phones of different generations, using a SATIMO-COMOSAR system and considering different exposure scenarios. Two of these phones are older models, released on the market in 2009 and 2012, respectively. The third is a newer model, released on the market in 2018. We were interested in observing whether the generation of the mobile phone (design, position of the antenna, housing materials) has a significant impact on the SAR values (determined for different positions of the mobile phone relative to the head phantom, SAM). The specific anthropomorphic mannequin (SAM) has been designed to provide a conservative estimate of the actual peak spatial specific absorption rate (SAR) of the electromagnetic field radiated by mobile phones [5]. Complementarily, for the new smartphone released in 2018, the effect that "touch" or "tilt" positions might have on the SAR values has also been studied. The determined SAR values have also been compared with those obtained when the mobile phones are protected by multilayer cases, aiming to study the possible limiting impact of these cases.
The shape of the SAM physical model has been derived from the 90th percentile of the adult male head shape; its dimensions have been reported in [6]. The shape of the ears has been adapted to represent the flattened ears of a phone user. Various studies propose different kinds of shields for mobile phones to reduce the amount of power absorbed in the head, aiming to minimize health effects [7], [8]. Moreover, research focused on SAR reduction due to the type of antenna (PIFA or helical), placed at the top or at the bottom of the device, has also been carried out [9], [10], [11]. Finally, the SAR values directly determined by us using the calibrated SATIMO-COMOSAR dosimetry system for different exposure scenarios have been compared with each other and related to the limits accepted by the standards. The studies presented here could lead to the development (also including the direct involvement of human health specialists and bodies) of a series of recommendations and informal guidelines for the effective reduction of human exposure to electromagnetic fields generated by wireless communication systems.

2. Material and methods
The dosimetric quantity used for the evaluation of the incident energy absorbed by the tissue is the specific absorption rate (SAR). This parameter has been introduced to measure the rate of energy absorbed by the human body when it is exposed to a radio-frequency electromagnetic field, given by equation (1):

$\mathrm{SAR}_l = \frac{\sigma}{2\rho_m}\,\lvert\vec{E}_i\rvert^2 = \frac{\omega\,\varepsilon_0\,\varepsilon_r''}{2\rho_m}\,\lvert\vec{E}_i\rvert^2$ , (1)

where $\rho_m$ is the material density (in an elementary volume), $\sigma$ is the electric conductivity and $\varepsilon_r''$ is the imaginary part of the relative electric permittivity as a frequency-dependent function, while $\mathrm{SAR}_l$ is a local quantity [12].

2.1. The SATIMO-COMOSAR system for SAR evaluation
The dosimetry evaluation system used for our measurements can determine the distribution of SAR inside a human head phantom, the so-called SAM (specific anthropomorphic mannequin).
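As an illustration of equation (1), the local SAR can be computed either from the conductivity or from the loss term of the permittivity; the sketch below uses purely illustrative values (the field strength and the tissue density of 1000 kg/m^3 are assumptions, not measurements from this study) and shows that the two forms of the equation agree:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def local_sar(e_peak, sigma, rho_m):
    """Equation (1), first form: SAR_l = sigma |E_i|^2 / (2 rho_m), in W/kg.

    e_peak : peak internal electric field magnitude |E_i| (V/m)
    sigma  : tissue conductivity (S/m)
    rho_m  : tissue mass density (kg/m^3)
    """
    return sigma * e_peak ** 2 / (2.0 * rho_m)

def local_sar_loss(e_peak, freq_hz, eps_r_imag, rho_m):
    """Equation (1), second form: sigma replaced by omega * eps0 * eps_r''."""
    omega = 2.0 * math.pi * freq_hz
    return omega * EPS0 * eps_r_imag * e_peak ** 2 / (2.0 * rho_m)

# Hypothetical example: |E_i| = 100 V/m in the GSM-900 head-simulating liquid
# (sigma = 0.967 S/m, eps_r'' = 19.4), assumed density 1000 kg/m^3.
a = local_sar(100.0, 0.967, 1000.0)
b = local_sar_loss(100.0, 897.59e6, 19.4, 1000.0)
print(f"{a:.3f} W/kg vs {b:.3f} W/kg")  # the two forms agree closely
```

The small residual difference between the two forms simply reflects rounding in the quoted liquid parameters.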
The phantom used [13] is in accordance with American and European standards [14], [15]. The dosimetric assessment can be done both when the mobile phone is positioned at the right ear and at the left ear. The main components of the system used (Figure 1) are: a KUKA KR5 robot with its specific controller KUKA KRC2sr [16], carrying an electric field probe (calibrated before the measurement process); the SAM twin phantom; the liquids that simulate human tissue at specific frequencies; a clamping device for the mobile phone under test; a signal generator Rohde & Schwarz CMU 200 (a GSM base-station simulator that can control the output power and frequency); and a desktop PC running the OpenSAR software [17]. The SAM phantom structure is made of a low-loss, low-permittivity material, embedded in wood. The electric field probe (immersed in the liquid simulating the dielectric properties of the head at different frequencies) is of the triple-dipole type (model EP96, SATIMO). The E-field probe provides an omnidirectional response. The clamping device holding the mobile phone is also made of a low-permittivity, low-loss material, so as not to influence the measured SAR values. The clamping device can be moved along three orthogonal axes Ox, Oy, Oz, and it can be rotated around the phantom ear for precise positioning of the phone. The OpenSAR software controls all robot movements and determines local SAR values; as a post-processing application, it calculates SAR values averaged over 10 g or 1 g of tissue.

2.2. The SAR measuring procedure
All the phases of the SAR measurement procedure are described in detail in SR EN 62209-1:2007 [18]. The SAR evaluation at different frequencies has been done for each radio channel: low, middle and high, respectively.
Two positions, illustrated in Figure 2, have been considered: the normal position (when the phone lies in the cheek plane, also called the "touch" or "cheek" position) and the tilt position (when the phone is tilted by 15 degrees away from the cheek plane). Only one measurement location on the SAM phantom has been selected: the right ear. The SAR evaluation has been done at two frequencies (GSM-900 and GSM-1800 bandwidths), 897 MHz and 1747 MHz, respectively. The mobile phone has been used with its internal transmitter, antenna, battery and all accessories supplied by the manufacturer. It is important that the battery is fully charged before each test, for every exposure scenario taken into consideration. Complementarily, Figure 3 presents the front and back sides of the Huawei P20 Pro mobile phone, inserted in a multilayer protective case made from hard plastic material.

Figure 1. The SATIMO-COMOSAR system used for SAR measurements: (a) COMOSAR test bench and KUKA robot, (b) signal generator Rohde & Schwarz CMU 200, (c) computing unit with the OpenSAR software installed for testing system control, (d) the fastening system used to secure the measuring equipment.

For every position of the mobile phone tested for SAR evaluation, the following conditions should be fulfilled: existence of a permanent radio connection between the base-station simulator and the mobile phone at maximum power; SAR measurement in a network of equally spaced points on a surface located at a constant distance from the inner surface of the phantom; SAR measurement in equidistant points in a cube located around the place where the maximum value of the field has been determined (by the probe scanning inside the phantom); calculation of the measured SAR as an average value over 1 g and 10 g of tissue; and avoidance of any other perturbation sources inside the test room or in its immediate vicinity.
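The averaging step above (finding the highest SAR averaged over a cube containing roughly 1 g or 10 g of tissue) can be sketched as follows. This is a simplified illustration of the zoom-scan post-processing, not the OpenSAR implementation; the uniform grid and per-voxel mass are assumptions:

```python
def peak_spatial_sar(local_sar, voxel_mass_g, target_mass_g):
    """Peak spatial-average SAR over a cube of roughly target_mass_g grams.

    local_sar     : 3D nested list of local SAR samples (W/kg) on a uniform grid
    voxel_mass_g  : tissue mass of one voxel (g)
    target_mass_g : averaging mass, e.g. 1 g or 10 g
    """
    # cube edge, in voxels, whose enclosed mass is closest to the target mass
    n = max(1, round((target_mass_g / voxel_mass_g) ** (1.0 / 3.0)))
    nx, ny, nz = len(local_sar), len(local_sar[0]), len(local_sar[0][0])
    best = 0.0
    for i in range(nx - n + 1):          # slide the averaging cube through the grid
        for j in range(ny - n + 1):
            for k in range(nz - n + 1):
                cube = [local_sar[i + a][j + b][k + c]
                        for a in range(n) for b in range(n) for c in range(n)]
                best = max(best, sum(cube) / len(cube))
    return best
```

With a single hot voxel in an otherwise cold grid of 1-g voxels, the 1-g average returns the hot-spot value while the larger-mass average is lower, reproducing the expected ordering in which 1-g SAR values exceed 10-g SAR values.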
Figure 4 shows a human user holding the Huawei P20 Pro mobile phone with the protective case in the cheek (touch) position and in the tilt position, the two selected exposure situations.

3. Results and discussion
The operating procedure used during these measurements was the following: a GSM communication link has been established between the mobile phone under test and the base-station simulator CMU 200 for measuring the specific absorption rate (SAR). The GSM 900 experimental conditions (for cheek or tilt positions, right or left) include: phantom: right head and left head; signal: TDMA (crest factor: 8.0); channel: middle; frequency: 897.59 MHz (uplink); relative permittivity (real part): 41.5; relative permittivity (imaginary part): 19.4; conductivity: 0.967 S/m. The GSM 1800 experimental conditions (for cheek or tilt positions, right or left) include: phantom: right head and left head; signal: TDMA (crest factor: 8.0); channel: middle; frequency: 1747.4 MHz (uplink); relative permittivity (real part): 40.102; relative permittivity (imaginary part): 14.096; conductivity: 1.368 S/m. The SAR values for the Huawei P20 Pro mobile phone subject to dosimetric evaluation are shown in Table 1 for different positions (left and right side of the SAM phantom). The SAR values for the Samsung GT-S6102 mobile phone subjected to dosimetric evaluation are shown in Table 2 for the same placements of the phone: left and right side of the SAM phantom. The corresponding SAR values for the Nokia 2330c-2 mobile phone are synthesized in Table 3. A comparison of the SAR values when the Huawei P20 Pro, Samsung GT-S6102 and Nokia 2330c-2 mobile phones are positioned on the right side of the SAM phantom is presented in Table 4 (test frequency in the GSM-900 band). This comparison has been performed to see whether the SAR values averaged over 10 g of tissue for the Samsung and Nokia phones are higher than the corresponding values for the Huawei phone.
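As a consistency check, the quoted conductivities of the two head-simulating liquids follow from their quoted imaginary permittivities through the loss relation sigma = omega * eps0 * eps_r'' used in equation (1); a minimal sketch:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def conductivity_from_loss(freq_hz, eps_r_imag):
    """Equivalent conductivity sigma = omega * eps0 * eps_r'', in S/m."""
    return 2.0 * math.pi * freq_hz * EPS0 * eps_r_imag

# (frequency, eps_r'', quoted conductivity) from the test conditions above
for freq, eps_im, quoted in [(897.59e6, 19.4, 0.967), (1747.4e6, 14.096, 1.368)]:
    sigma = conductivity_from_loss(freq, eps_im)
    print(f"{freq / 1e6:.2f} MHz: computed {sigma:.3f} S/m, quoted {quoted} S/m")
```

Both computed values agree with the quoted conductivities to better than 0.3 %, confirming that the listed liquid parameters are mutually consistent.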
The maximum SAR 10g value for the Huawei, Samsung and Nokia mobile phones investigated in this dosimetric evaluation study is represented by the OpenSAR software in a 2D graphical representation, as a rectangular surface on the right or left part of the SAM phantom face, according to where the phone has been placed for the SAR evaluation. Around the position where the maximum SAR has been located, marked in red, the software draws a rectangular surface. Figure 5 shows the surface SAR for the Huawei, Samsung and Nokia mobile phones in the left-cheek position, at 897.59 MHz (GSM-900). The OpenSAR software can also associate the surface where the maximum SAR 10g value has been found with a specific volume concentrated around this maximum SAR value, determined by the probe during the scan inside the SAM phantom. Figure 6 shows the volume SAR for the Huawei, Samsung and Nokia mobile phones in the left-cheek position, GSM-900 frequency band. The SAR values for the Huawei P20 Pro mobile phone protected by a multilayer plastic case are shown in Table 5 for the right side of the SAM phantom, at both frequencies. The comparative graphical distribution of the SAR values (averaged over 10 g of tissue) following the dosimetric evaluation of the Huawei P20 Pro (with and without the protective case) is shown in Figure 7.

Figure 2. The cheek (a) and the tilt (b) positions of the Huawei P20 Pro mobile phone on the right side (ear) of the SAM phantom.
Figure 3. The front side (a) and the back side (b) of the Huawei P20 Pro mobile phone protected by a multilayer protective case.
Figure 4. The Huawei P20 Pro phone with a protective case: cheek (a) and tilt (b) positions, the two most common positions relative to the user's cheek.

Table 1. SAR values for different positions of the Huawei P20 Pro mobile phone relative to the SAM phantom, two frequencies (GSM-900 and GSM-1800 bandwidths).
Phone: Huawei
Bandwidth | Channel | Position | SAR peak (W/kg) | SAR 10g (W/kg) | SAR 1g (W/kg)
GSM 900 | middle | right-cheek | 0.57 | 0.2854 | 0.4149
GSM 900 | middle | right-tilt | 0.28 | 0.1375 | 0.1945
GSM 900 | middle | left-cheek | 0.36 | 0.1735 | 0.2457
GSM 900 | middle | left-tilt | 0.16 | 0.0877 | 0.1196
GSM 1800 | middle | right-cheek | 0.34 | 0.1365 | 0.2282
GSM 1800 | middle | right-tilt | 0.14 | 0.0422 | 0.0713
GSM 1800 | middle | left-cheek | 0.15 | 0.0594 | 0.0974
GSM 1800 | middle | left-tilt | 0.05 | 0.0244 | 0.0358

Table 2. SAR values for different positions of the Samsung GT-S6102 mobile phone relative to the SAM phantom, two frequencies (GSM-900 and GSM-1800 bandwidths).
Phone: Samsung
Bandwidth | Channel | Position | SAR peak (W/kg) | SAR 10g (W/kg) | SAR 1g (W/kg)
GSM 900 | middle | right-cheek | 2.31 | 0.7514 | 1.4469
GSM 900 | middle | right-tilt | 1.01 | 0.4936 | 0.7204
GSM 1800 | middle | right-cheek | 1.40 | 0.5081 | 0.9121
GSM 1800 | middle | right-tilt | 0.61 | 0.2268 | 0.3898
GSM 1800 | middle | left-cheek | 1.14 | 0.4909 | 0.8165
GSM 1800 | middle | left-tilt | 0.45 | 0.1686 | 0.2942

Table 3. SAR values for different positions of the Nokia 2330c-2 mobile phone relative to the SAM phantom, two frequencies (GSM-900 and GSM-1800 bandwidths).
Phone: Nokia
Bandwidth | Channel | Position | SAR peak (W/kg) | SAR 10g (W/kg) | SAR 1g (W/kg)
GSM 900 | middle | right-cheek | 1.92 | 0.8813 | 1.3496
GSM 900 | middle | right-tilt | 1.13 | 0.5301 | 0.7887
GSM 1800 | middle | right-cheek | 1.89 | 0.5611 | 1.0970
GSM 1800 | middle | right-tilt | 0.37 | 0.1386 | 0.2466
GSM 1800 | middle | left-cheek | 1.36 | 0.4709 | 0.8726
GSM 1800 | middle | left-tilt | 0.76 | 0.2356 | 0.4791
GSM 1800 | middle | left-tilt | 0.32 | 0.1056 | 0.1799

Table 4. Maximum SAR (10 g) values for the Huawei vs. Samsung and vs. Nokia mobile phones, bandwidth: GSM-900, positions: right-cheek and right-tilt.
Phone | SAR 10g right-cheek (W/kg) | SAR 10g right-tilt (W/kg) | vs. Huawei (right-cheek) | vs. Huawei (right-tilt)
Huawei | 0.2854 | 0.1375 | - | -
Samsung | 0.7514 | 0.4936 | 2.63 | 3.58
Nokia | 0.8813 | 0.5301 | 3.08 | 3.85

Figure 5.
Surface SAR for the Huawei (a), Samsung (b) and Nokia (c) mobile phones at 897.59 MHz, left-cheek position.

Figure 6. Volume SAR for the Huawei (a), Samsung (b) and Nokia (c) mobile phones at 897.59 MHz, left-cheek position.

Table 5. SAR values for different positions of the Huawei P20 Pro phone with the multilayer protective case, relative to the right part of the SAM phantom.
Bandwidth | Channel | Position | SAR 10g (W/kg) | SAR 1g (W/kg)
GSM 900 | middle | right-cheek | 0.1007 | 0.1525
GSM 900 | middle | right-tilt | 0.0323 | 0.0443
GSM 1800 | middle | right-cheek | 0.0348 | 0.0574
GSM 1800 | middle | right-tilt | 0.0140 | 0.0235

Figure 7. Comparative graphical distribution of the SAR values (10 g of tissue) for the dosimetric evaluation of the Huawei P20 Pro: with and without the multilayer protective case.

Four situations have been considered: the cheek and tilt positions, at both frequencies of interest. The SAR values recorded in the dosimetric evaluation of the Huawei P20 Pro (with and without the case) are synthesized in Table 6 (for the cheek and tilt positions, respectively). Figure 8 shows the surface SAR for the Huawei P20 Pro phone, without the case, in the left-cheek position at the 897.59 MHz frequency. The OpenSAR software can also associate the surface where the maximum SAR 10g value has been found with a specific volume concentrated around this maximum value, determined during the scan inside the SAM phantom. Figure 9 shows the volume SAR for the Huawei P20 Pro phone without the multilayer protective case in the left-cheek position, GSM-900 frequency band.

4. Conclusions
This paper presents a set of SAR measurements performed on three mobile phone devices of different generations: the Huawei P20 Pro, the Samsung GT-S6102 and the Nokia 2330c-2. There is over a decade of difference in market release between these phone models.
The Huawei P20 Pro mobile phone is the newest one, released on the market in 2018, while the two other mobile phones were released on the market in 2012 (Samsung) and as far back as 2009 (Nokia). We have also determined the SAR values when the Huawei mobile phone was fitted with a multilayer plastic protective case. A first examination of the data presented in the tables shows that the results are generally consistent with published tests done by other laboratories. According to our measurements, the cheek SAR values are higher than the tilt SAR values, and the 1-g SAR values are higher than the 10-g SAR values. Also, the 900-MHz SAR values are higher than the 1800-MHz ones. These findings have theoretical support and are in good agreement with most of the other results reported in the literature. We noticed an anomaly in our measurements for the Nokia mobile phone (the orange-marked field in Table 3) for GSM-1800 on the left side of the phantom in the tilt position: the SAR values were higher than those determined on the right side of the phantom for the same position. This anomaly was due to the position of the phone in the clamping device. After we carefully verified and repositioned the phone, the measurement was reset and correct data were taken. The situation illustrates the occasional abnormal, inexplicable values that have also been reported in comparative studies between laboratories. Similar studies developed in other laboratories (so-called intercomparison measurements) support the expectation that, for a given mobile phone, frequency and position, SAR measurements at the left and right ear positions of the SAM phantom should be very close in value. When they are not, one should check for a user error in phone placement or in data recording. As a first recommendation, the tilt position should be preferred by any user.
The designers of the Huawei P20 Pro placed the antenna at the bottom, farther away from the user's brain. This is a major advantage over the older mobile phones, whose antennas are positioned at the top of the device. As expected in principle, the lowest SAR values were recorded when the phone was fitted with the protective case. Regarding the use of a protective case, it is important that the material from which it is made is a good absorber; protective cases with conducting insertions should be avoided, mainly due to unexpected and uncontrolled reflections. Future studies on this topic should involve testing different types of cases, to comparatively track their impact on the SAR values. On the other hand, the transmission efficiency of a phone fitted with a protective case decreases. As a shortcoming, in this situation the battery will run out faster, because the phone will try to deliver more power to ensure coverage and better signal reception. In any case, in the real, daily environment, the SAR values may vary depending on the propagation conditions. A general conclusion could be the following: a combination of factors such as the positioning of the antenna, the size of the device, the relative position to the human head and equipping the phone with a protective case can lead to lower SAR values, regardless of the type of mobile terminal.

Table 6. Comparison between SAR values (10 g) given by the dosimetric evaluation of the Huawei P20 Pro (with and without a protective case).
Bandwidth | Position | SAR 10g, no case (W/kg) | SAR 10g, with case (W/kg) | Ratio (no case / with case)
GSM 900 | right-cheek | 0.2854 | 0.1007 | 2.83
GSM 900 | right-tilt | 0.1375 | 0.0323 | 4.25
GSM 1800 | right-cheek | 0.1365 | 0.0348 | 3.92
GSM 1800 | right-tilt | 0.0422 | 0.0140 | 3.01

Figure 8. Surface SAR, Huawei P20 Pro phone without the protective multilayer case.
Figure 9. Volume SAR for the Huawei P20 Pro mobile phone without the protective multilayer case.
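The comparison columns of Tables 4 and 6 are simple ratios of the measured SAR 10g values; the sketch below recomputes them from the GSM-900 right-side figures (small last-digit differences with respect to the printed tables are rounding effects):

```python
# SAR 10g values (W/kg) from Tables 4 and 6, GSM-900, right side of the phantom
sar_10g = {
    ("huawei", "cheek"): 0.2854, ("huawei", "tilt"): 0.1375,
    ("samsung", "cheek"): 0.7514, ("samsung", "tilt"): 0.4936,
    ("nokia", "cheek"): 0.8813, ("nokia", "tilt"): 0.5301,
}
huawei_with_case = {"cheek": 0.1007, "tilt": 0.0323}  # multilayer case, Table 6

def ratio(a, b):
    """Ratio of two SAR values, rounded to two decimals as in the tables."""
    return round(a / b, 2)

# older handsets versus the 2018 Huawei P20 Pro (Table 4)
for phone in ("samsung", "nokia"):
    for pos in ("cheek", "tilt"):
        r = ratio(sar_10g[(phone, pos)], sar_10g[("huawei", pos)])
        print(f"{phone} vs huawei, right-{pos}: {r}")

# reduction achieved by the protective case (Table 6)
for pos in ("cheek", "tilt"):
    r = ratio(sar_10g[("huawei", pos)], huawei_with_case[pos])
    print(f"case reduction, right-{pos}: {r}")
```

This reproduces, for example, the factor of 2.63 between the Samsung and the Huawei in the right-cheek position and the factor of roughly 2.8 achieved by the multilayer case in the same position.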
The dosimetric evaluation presented here demonstrates that the maximum SAR values over 10 g of tissue determined for all three mobile phones are smaller than the ICNIRP limit of 2 W/kg (head region). Carrying out rigorous measurements, for the most diverse exposure scenarios, with correct processing of the results, is an important resource for removing exaggerated fears, but also for developing recommendations and guidelines for the effective reduction of human exposure to environmental electromagnetic fields. In this frame of such universal interest, any rigorous technical-scientific approach should definitely be welcomed.

Acknowledgment
This paper could be developed thanks to the collaboration agreement settled between the "Gheorghe Asachi" Technical University of Iasi, Faculty of Electrical Engineering, and the LICETER accredited laboratory of ANCOM Romania.

References
[1] International Commission on Non-Ionizing Radiation Protection, Guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 GHz), Health Physics, vol. 74, 2020, pp. 494-521.
[2] IEEE, IEEE standard for safety levels with respect to human exposure to radio frequency electromagnetic fields, 3 kHz to 300 GHz, C95.1-2005, New York: Institute of Electrical and Electronics Engineers, 2005. Online [Accessed 20 June 2021] https://emfguide.itu.int/pdfs/c95.1-2005.pdf
[3] European Committee for Electrotechnical Standardization (CENELEC), Prestandard ENV 50166-2, Human exposure to electromagnetic fields. High frequency (10 kHz to 300 GHz). Online [Accessed 20 June 2021] https://standards.globalspec.com/std/85205/env%2050166-2
[4] Order nr. 1193 from 29 September 2006 for the approval of the norms regarding the limitation of the general population exposure to electromagnetic fields from 0 Hz to 300 GHz.
Online [Accessed 20 June 2021], [in Romanian] https://www.ancom.ro/uploads/links_files/odinul_1193_2006_norme.pdf
[5] W. Kainz, A. Christ, T. Kellom, S. Seidman, N. Nikoloski, B. Beard, N. Kuster, Dosimetric comparison of the specific anthropomorphic mannequin (SAM) to 14 anatomical head models using a novel definition for the mobile phone positioning, Physics in Medicine and Biology, 50(14), August 2005, pp. 3423-3445. doi: 10.1088/0031-9155/50/14/016
[6] C. C. Gordon, T. Churchill, C. E. Clauser, B. Bradtmiller, J. T. McConville, I. Tebbetts, R. A. Walker, 1988 anthropometric survey of U.S. Army personnel: methods and summary statistics, Technical Report NATICK/TR-89/044, U.S. Army Natick Research, Development and Engineering Center, Natick, Massachusetts, Sep. 1989. Online [Accessed 20 June 2021] http://mreed.umtri.umich.edu/mreed/downloads/anthro/ansur/gordon_1989.pdf
[7] S. Aqeel Abdulrazzaq, S. Jabir, J. Aziz, SAR simulation in human head exposed to RF signals and safety precautions, IJCSET, September 2013, vol. 3, issue 9, pp. 334-340. Online [Accessed 20 June 2021] http://www.ijcset.net/docs/volumes/volume3issue9/ijcset2013030908.pdf
[8] P. K. Dutta, P. V. Y. Jayasree, V. S. S. N. S. Baba, SAR reduction in the modelled human head for the mobile phone using different material shields, Human-centric Computing and Information Sciences, 6 (2016), art. 3. doi: 10.1186/s13673-016-0059-0
[9] M. R. Iqbal-Faruque, N. Aisyah-Husni, Md. Ikbal-Hossain, M. Tariqul-Islam, N. Misran, Effects of mobile phone radiation onto human head with variation of holding cheek and tilt positions, Journal of Applied Research and Technology, 12(5), October 2014, pp. 871-876. doi: 10.1016/S1665-6423(14)70593-0
[10] L. Belrhiti, F. Riouch, A. Tribak, J.
Terhzaz, Investigation of dosimetry in four human head models for planar monopole antenna with a coupling feed for LTE/WWAN/WLAN internal mobile phone, Journal of Microwave, Optoelectronics and Electromagnetic Applications, vol. 16, no. 2, June 2017. doi: 10.1590/2179-10742017v16i2748
[11] O. Bejenaru, C. Lazarescu, A. Salceanu, V. David, Study upon specific absorption rate values for different generations of mobile phones by using a SATIMO-COMOSAR evaluation dosimetry system, 12th International Conference and Exhibition on Electromechanical and Energy Systems, SIELMEN 2019, Chisinau, Moldova, 10-11 October 2019, pp. 1-5. doi: 10.1109/sielmen.2019.8905798
[12] M. A. Stuchly, S. S. Stuchly, Experimental radio and microwave dosimetry, in: Ch. Polk, E. Postow (eds.), Handbook of Biological Effects of Electromagnetic Fields (2nd edition), CRC Press, Boca Raton, New York, London, Washington DC, 1996, pp. 295-336.
[13] SAM phantom on the MVG website. Online [Accessed 20 June 2021] https://www.mvg-world.com/en/products/sar/sar-accessories/sam-phantom
[14] EN 50361: Basic standard for the measurement of specific absorption rate related to human exposure to electromagnetic fields from mobile phones (300 MHz - 3 GHz), 2001. Online [Accessed 20 June 2021] https://standards.globalspec.com/std/532912/en%2050361
[15] IEEE Standard 1528-2003: IEEE recommended practice for determining the peak spatial-average specific absorption rate (SAR) in the human head from wireless communications devices: measurement techniques, 19 December 2003, pp. 1-120. doi: 10.1109/ieeestd.2003.94414
[16] Industrial robots on the KUKA website. Online [Accessed 20 June 2021] https://www.kuka.com/en-de/products/robot-systems/industrial-robots
[17] OpenSAR V5 on the MVG website.
online [accessed 20 june 2021] https://www.mvgworld.com/en/products/field_product_family/sar-38/opensarv5 [18] asro, standard sr en 62209-1: human exposure to radio frequency fields from hand-held and body-mounted wireless communication devices human models, instrumentation, and procedures -part 1: procedure to determine the specific absorption rate (sar) for hand-held devices used in close proximity to the ear (frequency range of 300 mhz to 3 ghz), 2007. online [accessed 20 june 2021], [in romanian] https://magazin.asro.ro/ro/standard/117718 https://emfguide.itu.int/pdfs/c95.1-2005.pdf https://standards.globalspec.com/std/85205/env%2050166-2 https://www.ancom.ro/uploads/links_files/odinul_1193_2006_norme.pdf https://www.ancom.ro/uploads/links_files/odinul_1193_2006_norme.pdf https://dx.doi.org/10.1088%2f0031-9155%2f50%2f14%2f016 http://mreed.umtri.umich.edu/mreed/downloads/anthro/ansur/gordon_1989.pdf http://mreed.umtri.umich.edu/mreed/downloads/anthro/ansur/gordon_1989.pdf http://www.ijcset.net/docs/volumes/volume3issue9/ijcset2013030908.pdf http://www.ijcset.net/docs/volumes/volume3issue9/ijcset2013030908.pdf https://doi.org/10.1186/s13673-016-0059-0 http://dx.doi.org/10.1016/s1665-6423(14)70593-0 http://dx.doi.org/10.1590/2179-10742017v16i2748 https://doi.org/10.1109/sielmen.2019.8905798 https://www.mvg-world.com/en/products/sar/sar-accessories/sam-phantom https://www.mvg-world.com/en/products/sar/sar-accessories/sam-phantom https://standards.globalspec.com/std/532912/en%2050361 https://doi.org/10.1109/ieeestd.2003.94414 https://www.kuka.com/en-de/products/robot-systems/industrial-robots https://www.kuka.com/en-de/products/robot-systems/industrial-robots https://www.mvg-world.com/en/products/field_product_family/sar-38/opensar-v5 https://www.mvg-world.com/en/products/field_product_family/sar-38/opensar-v5 https://www.mvg-world.com/en/products/field_product_family/sar-38/opensar-v5 https://magazin.asro.ro/ro/standard/117718 spectrum sensing using energy 
Spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, pp. 1-7

Nagesh Mantravadi1, Md Zia Ur Rahman1, Sala Surekha1, Navarun Gupta2

1 Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, K L University, Vaddeswaram, Guntur, Andhra Pradesh, 522502, India
2 Department of Electrical Engineering, University of Bridgeport, Bridgeport, CT 06604, USA

Section: Research paper

Keywords: adaptive algorithm; cognitive radio; energy measurement; noise uncertainty; threshold point

Citation: Nagesh Mantravadi, Md Zia Ur Rahman, Sala Surekha, Navarun Gupta, Spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning, Acta IMEKO, vol. 11, no. 1, article 34, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-34

Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India

Received December 28, 2021; in final form February 18, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Sala Surekha, e-mail: surekhasala@gmail.com

1. Introduction

Radio frequencies are a limited natural resource controlled by government authorities. In a licensed band, the primary users have exclusive access to the spectrum, and secondary users are barred from using it even when the licensed band is unoccupied. Owing to the enormous growth of wireless communication applications, the use of radio frequencies is increasing rapidly.
To overcome this spectrum under-utilisation, cognitive radio systems have emerged as a new technology [1]-[3] in wireless communications. Considerable effort has been devoted to enhancing the usage efficiency of cognitive radio systems: usage varies greatly across the frequency, time and space domains, and secondary users are admitted only when they create no interference to the primary users. To overcome such interference [4], [5] in wireless communications and to increase spectrum utilisation, the IEEE 802.22 wireless frequency band can be used. With these frequency bands, data can be transmitted without causing interference in health-care monitoring applications using ZigBee, Wi-Fi and ad hoc networks operating at frequencies below 3 GHz. Orthogonal frequency division multiplexing (OFDM) is the pre-eminent method used in wireless communication systems. Channel estimation issues in OFDM-based cognitive radio systems are discussed in [6]-[8] in terms of the mean square error (MSE) of the channel estimates; the unknown noise components affecting spectrum sensing are also studied there. The performance of channel estimation techniques for imperfect, correlated channels of a cognitive radio system is analysed, with the mutual information between input and output used to characterise communication sensing and channel uncertainty. The receiver's minimum mean square error is then computed to obtain the fading coefficients of the fading channel, as well as the anticipated attainable rate for Gaussian signalling and linear modulation schemes, assuming interference and channel estimation errors occur at the primary users [9], [10].

Abstract: To identify primary-user signals in cognitive radios, the spectrum sensing method is used. Owing to statistical variances in the received signal, noise is present in the primary-user signals; the noise power varies because of the random nature of noise signals, leading to a noise-uncertainty problem in the performance of energy detection. Measuring the energy and then detecting the unused frequency spectrum is a key task in cognitive radio applications. To avoid these problems, a least logarithmic absolute difference (LLAD) algorithm is proposed in which the noise power is adjusted at the sensing point of the licensed users; with the proposed method, the estimated noise signals are eliminated. A sign regressor version of the LLAD algorithm is also considered, since it reduces the computational complexity and improves the convergence rate. Further, the probability of detection (PoD) and the probability of false alarm (PoFA) are estimated to determine the threshold value. The results show good performance in terms of PoFA versus PoD in the low signal-to-noise-ratio range for multiple nodes. The proposed energy-measurement-based spectrum sensing method is therefore useful in remote health-care monitoring and medical telemetry applications sharing the unused spectrum.

Spectrum sensing is used to avoid difficulties with spectrum under-utilisation and interference. The most commonly used detection techniques are wavelet detection, cyclostationary detection, energy detection and covariance detection. The first three require prior information about the primary-user signal, its frequency components, the interference and the noise variance, whereas the fourth requires no prior data and is hence the most widely used spectrum sensing method; in addition, the circuit implementation and computational complexity of energy sensing are minimal. Spectrum sensing performance [11]-[17] is assessed in terms of the false alarm and detection probabilities; the primary user is assumed to be either idle or active during the sensing period, and spectrum holes are then detected in the frequency range on this basis.
There is a trade-off between the detection and false alarm probabilities; by tuning these quantities, one can assess whether there is interference between the primary and secondary users. In [18] the sensing trade-off in cognitive radios is studied when numerous primary users arrive randomly. The trade-off also arises because cognitive radio spectrum utilisation and spectrum sensing performance depend largely on primary-user activity; a numerical technique with cooperative spectrum sensing was used to investigate it. The spectrum opportunities for successful communication between a transmitter and receiver of a cognitive network are examined as well. It is also observed that as the secondary user's sensing time increases, the false alarm probability falls: secondary users then have a higher chance of accessing an idle channel, but less opportunity to transmit, because the transmission time is limited. The main objective is to use the spectrum efficiently, with optimal sensing, to improve the spectrum opportunities of cognitive radio networks. In practice, cognitive radios suffer channel sensing errors because of missed detections and false alarms; these affect channel estimation [19] and the quality of the channel estimates, and the receiver operating characteristics are analysed for these parameters. Further, a cooperative cognitive radio is considered for analysing fading scenarios with an improved energy detection method: Nakagami multipath fading with power-2-based energy detection spectrum sensing [20], [21], which improves the detection probability of cognitive radio networks; the corresponding operating curves are also analysed. After spectrum sensing and spectrum allocation, energy detection is one of the most promising methods. In this work a new energy measurement methodology is proposed, based on the least logarithmic absolute difference algorithm.
This methodology of energy detection and measurement is a key task in measurement technology as well as in cognitive-radio-based communication systems. To overcome noise-uncertainty problems, double-threshold-based spectrum sensing has been studied for improving energy detection. The cognitive radio concept is widely used in health-care applications, but interference caused by wireless devices affects the performance. To avoid such interference in health care, a novel cognitive radio method using a modified normalised least mean square (MNLMS) algorithm has been proposed for hospital environments, so that errors and interference affecting medical devices are removed; its performance was evaluated using MATLAB simulations.

2. System model

Spectrum sensing is the most widely used method for detecting the spectrum holes of a cognitive radio network; with it, we can decide whether the primary user is absent or present. The energy detection block diagram is shown in Figure 1. Using hypothesis testing, the detection problem is formulated as

T_0: \; z(t) = w(t), \qquad T_1: \; z(t) = w(t) + s(t), \quad (1)

where z(t) is the received sample signal, w(t) is the noise affecting the transmitted signal, and s(t) is the primary transmitted signal, with t = 1, \dots, T samples used for identifying the spectrum. The energy detection method [22] is used for detecting spectrum holes: the energy level is measured, and the noise variance is estimated by setting a detection threshold. The secondary user then forms the energy detection statistic

D = \sum_{t=1}^{T} [z(t)]^2 . \quad (2)

The decision statistic has a central chi-square distribution with T degrees of freedom when the primary-user signal is absent, and a non-central chi-square distribution when the primary user is present.
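The decision statistic (2) is easy to exercise numerically. The following Python sketch is illustrative only: the window length and the two variances are arbitrary choices, not values from the paper. It draws Gaussian samples under both hypotheses and shows D concentrating near T·σ_wt² when only noise is present and near T·(σ_st² + σ_wt²) when the primary signal is present.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_statistic(z):
    """Decision statistic D = sum over t of z(t)^2, as in Eq. (2)."""
    return float(np.sum(np.square(z)))

T = 1000          # sensing-window length (illustrative value)
sigma_w = 1.0     # noise standard deviation (illustrative)
sigma_s = 0.5     # primary-signal standard deviation (illustrative)

# T0: noise only;  T1: primary signal plus noise
z_T0 = rng.normal(0.0, sigma_w, T)
z_T1 = rng.normal(0.0, sigma_w, T) + rng.normal(0.0, sigma_s, T)

# Expected values: E[D | T0] = T*sigma_w^2 = 1000,
#                  E[D | T1] = T*(sigma_s^2 + sigma_w^2) = 1250
print(energy_statistic(z_T0), energy_statistic(z_T1))
```

With a window of 1000 samples, the standard deviation of D is only a few percent of its mean, so the two hypotheses separate clearly even at this modest SNR.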
The central limit theorem with a Gaussian approximation can be used when the number of detection samples exceeds 250; the mean and variance of the decision statistic for the primary-user signal are then

D \sim \mathcal{N}\left( T \sigma_{wt}^2, \; 2 T \sigma_{wt}^4 \right) \quad \text{under } T_0

D \sim \mathcal{N}\left( T (\sigma_{st}^2 + \sigma_{wt}^2), \; 2 T (\sigma_{st}^2 + \sigma_{wt}^2)^2 \right) \quad \text{under } T_1 . \quad (3)

In testing T_0 and T_1, false alarm and missed-detection errors occur due to false identification of T_0 and T_1, and the energy detector performance is measured by the probabilities of these two errors. A false alarm corresponds to wrongly declaring the spectrum band occupied; a missed detection declares the primary user absent when it is actually present, and its complement is the detection probability. The false alarm and detection probabilities are evaluated as

P_{\mathrm{ofa}} = Q\left( \frac{\delta_d - T \sigma_{wt}^2}{\sqrt{2 T \sigma_{wt}^4}} \right) \quad (4)

P_{\mathrm{od}} = Q\left( \frac{\delta_d - T (\sigma_{st}^2 + \sigma_{wt}^2)}{\sqrt{2 T (\sigma_{st}^2 + \sigma_{wt}^2)^2}} \right) , \quad (5)

where \delta_d = \sigma_{wt}^2 \left( Q^{-1}(P_{\mathrm{ofa}}) \sqrt{2 T} + T \right) is the threshold.

Figure 1. Energy detector block diagram.

2.1. Least logarithmic absolute difference algorithm

The least logarithmic absolute difference (LLAD) technique elegantly and gradually adapts the conventional cost function depending on the amount of error in its implementation. In impulse-free noise environments, the LMS and LLAD algorithms exhibit similar convergence behaviour, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm [23], [24]. The flowchart for the LLAD algorithm is shown in Figure 2.

Mathematical modelling: let T be the filter length (moderate to large) and \mu the step-size parameter, with tap input x(n) and initial condition w(0) = 0. The T-by-1 tap-input vector to the filter at time n is

x(n) = [x(n), x(n-1), \dots, x(n-T+1)]^T , \quad (6)

where w(n) is the tap-weight vector, d(n) is the desired response at time n, \omega_o is an unknown vector, and (\cdot)^T denotes the transpose. To be computed: the tap-weight vector w(n+1) at time n+1.
Computation: the unknown vector \omega_o is represented with the linear model

d(n) = \omega_o^T x(n) + n_t . \quad (7)

The instantaneous estimate of the gradient vector is written as

\hat{\nabla} J(n) = -2 \, x(n) \, d^*(n) + 2 \, x(n) \, x^T(n) \, w(n) , \quad (8)

where x(n) is the input tap vector and w(n) is the tap-weight vector. w(n) is a random vector depending on x(n), with its taps stored in a vector given by

[w(n), w(n-1), \dots, w(n-T+1)]^T . \quad (9)

The output of the filter is

y(n) = w_0(n) x(n) + \dots + w_{T-1}(n) x(n-T+1) = w^T(n) \, x(n) . \quad (10)

The expression for the estimation error is

e(n) = d(n) - w^T(n) \, x(n) , \quad (11)

where the term w^T(n) x(n) is the inner product of w(n) and x(n). The normalised error cost function introduced using the logarithmic function is

J(e(n)) = F(e(n)) - \frac{1}{\alpha} \ln\left( 1 + \alpha F(e(n)) \right) . \quad (12)

Based on the steepest-descent method, the general weight-update recursion is

w(n+1) = w(n) - \mu \, \nabla J(n) . \quad (13)

A recursive relation for \nabla J(n) is written as

\nabla J(n) = E\{ \nabla |e(n)|^2 \} = E\{ e(n) \, \nabla e^*(n) \}, \qquad \nabla e^*(n) = -x^*(n) . \quad (14)

Thus, the resultant expression for the gradient vector is

\nabla J(n) = -E\{ e(n) \, x^*(n) \} . \quad (15)

The first gradient of the relation in (12) is given by \partial F(e(n)) / \partial w. The signum function is

\mathrm{sign}\{x(n)\} = \begin{cases} 1, & x(n) > 0 \\ 0, & x(n) = 0 \\ -1, & x(n) < 0 \end{cases} \quad (16)

The step-size bound for mean-square convergence of the LMS algorithm is

0 < \mu < \frac{2}{x^T(n) \, x(n)} . \quad (17)

By substituting the estimate \hat{\nabla} J(n) into the steepest-descent algorithm, the new recursive relation for updating the tap-weight vector is

w(n+1) = w(n) + \mu \, x(n) \left[ d^*(n) - x^T(n) \, w(n) \right] . \quad (18)

To provide robustness against impulsive interference, a cost function is introduced with the normalised error by use of the logarithmic function, with

F(e(n)) = E[\, |e(n)| \,] . \quad (19)

Figure 2. Spectrum sensing using LLAD.

Thus, the stochastic gradient update is given by

w(n+1) = w(n) + \mu \, x(n) \, \frac{\partial F(e(n))}{\partial e(n)} \left[ \frac{\alpha F(e(n))}{1 + \alpha F(e(n))} \right] , \quad (20)

where \alpha > 0 is a design parameter and F(e(n)) is the conventional cost function for the error signal e(n). For |\alpha F(e(n))| \le 1, applying the Maclaurin series of the natural logarithm to (12) gives

J(e(n)) = F(e(n)) - \frac{1}{\alpha} \left( \alpha F(e(n)) - \frac{\alpha^2}{2} F^2(e(n)) + \dots \right) . \quad (21)

For low values of F(e(n)), this is an infinite combination of conventional cost functions. For smaller values of the error, the cost function J(e(n)) resembles F(e(n)), as

F(e(n)) - \frac{1}{\alpha} \ln\left( 1 + \alpha F(e(n)) \right) \to F(e(n)) . \quad (22)

Thus the general update expression of the stochastic gradient is stated as

w(n+1) = w(n) + \mu \, x(n) \, \frac{\partial F(e(n))}{\partial e(n)} \left[ \frac{\alpha F(e(n))}{1 + \alpha F(e(n))} \right] . \quad (23)

Since the norm is a power of the least probable error at a convex cost function, the sign algorithm delivers a slow rate of convergence. Setting F(e(n)) = E[|e(n)|] in (23), the resultant expression is

w(n+1) = w(n) + \mu \, x(n) \, \mathrm{sign}(e(n)) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] . \quad (24)

Then the weight-update relation of the LLAD algorithm becomes

w(n+1) = w(n) + \mu \left[ \frac{\alpha \, x(n) \, e(n)}{1 + \alpha |e(n)|} \right] . \quad (25)

To reduce the computational complexity of LMS, signed variants are preferable. The sign regressor version offers the lowest computational complexity, with the smallest number of multiplications among all signed variants. The sign regressor LLAD (SRLLAD) algorithm is obtained by applying the sign function to each element of the input tap vector, i.e. as the LMS-type recursion with an altered input tap vector.

2.2. Sign-based least logarithmic absolute difference (LLAD) algorithms

The LLAD algorithm is a generalised version of a higher-order adaptive filter. Combining the LLAD update (25) with the three types of sign variants results in the SRLLAD, SLLAD and SSLLAD algorithms, respectively. Hence the weight-update relations of the signed LLAD variants are given as follows:
w(n+1) = w(n) + \mu \, \mathrm{sign}\{x(n)\} \left[ \frac{\alpha \, e(n)}{1 + \alpha |e(n)|} \right] \quad (26)

w(n+1) = w(n) + \mu \, x(n) \, \mathrm{sign}\left\{ e(n) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] \right\} \quad (27)

w(n+1) = w(n) + \mu \, \mathrm{sign}\{x(n)\} \, \mathrm{sign}\left\{ e(n) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] \right\} . \quad (28)

In the SRLLAD update (26) only the regressor is signed, in the SLLAD update (27) only the error term, and in the SSLLAD update (28) both.

3. Results and discussion

The proposed LLAD method for spectrum sensing is now assessed: spectrum sensing between the primary-user transmitter and the secondary-user receiver is evaluated. The spectrum-sensing simulations are run for 5000 samples, the filter length T is chosen as 10, and distinct signals are acquired for each noise sample processed. For improved results, a received-signal attenuation factor was introduced, and the performance of energy detection was then evaluated under various SNR conditions. Because of the noise levels in the sensed spectrum, the main goal of developing an adaptive filter is based on the spectrum sensing approach. Owing to incorrect detection of the test data, this spectrum sensing may produce detection-probability and missed-detection-probability errors. The proposed technique adjusts the threshold value for each sensing event and then adjusts the noise power accordingly. As the size of the receiving antenna grows, the averaging of eigenvalues rises, resulting in a reduction of the estimation error. The performance of the noise-power estimation is measured in terms of the mean square error (MSE); when the antenna size and the number of samples are increased, the steady-state error reduces. The sensitivity of adaptive-filter-based energy detection is measured in terms of the detection probability, which is a function of the SNR at a constant false alarm probability. The proposed LLAD algorithm performs better in terms of detection probability, and noise uncertainty is not a problem for the proposed strategy.
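The LLAD update (25) and its signed variants can be exercised on a small system-identification task. The sketch below is illustrative only: the unknown plant `w_o`, the white input, the step size and the design parameter α are hypothetical choices, not taken from the paper; the variant names follow (26)-(28).

```python
import numpy as np

rng = np.random.default_rng(1)

def llad_step(w, x_vec, e, mu, alpha, variant="llad"):
    """One weight update for LLAD (Eq. (25)) and its signed variants:
    SR-LLAD signs the regressor (26), S-LLAD the error factor (27),
    SS-LLAD both (28)."""
    g = alpha * e / (1.0 + alpha * abs(e))        # scaled error factor
    if variant == "llad":
        return w + mu * x_vec * g
    if variant == "srllad":
        return w + mu * np.sign(x_vec) * g
    if variant == "sllad":
        return w + mu * x_vec * np.sign(g)
    if variant == "ssllad":
        return w + mu * np.sign(x_vec) * np.sign(g)
    raise ValueError(f"unknown variant: {variant}")

def identify(x, d, T=10, mu=0.05, alpha=2.0, variant="llad"):
    """Adapt a length-T filter so that w^T x(n) tracks d(n)."""
    w = np.zeros(T)
    for n in range(T - 1, len(x)):
        x_vec = x[n - T + 1:n + 1][::-1]          # [x(n), ..., x(n-T+1)], Eq. (6)
        e = d[n] - w @ x_vec                      # estimation error, Eq. (11)
        w = llad_step(w, x_vec, e, mu, alpha, variant)
    return w

# Hypothetical unknown plant (zero-padded to length 10), white input, light noise
w_o = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0])
x = rng.normal(size=5000)
d = np.convolve(x, w_o)[:len(x)] + 0.01 * rng.normal(size=5000)
w_hat = identify(x, d, variant="llad")
print(np.round(w_hat[:5], 2))   # approaches the leading taps of w_o
```

Swapping `variant` between `"llad"`, `"srllad"`, `"sllad"` and `"ssllad"` reproduces the complexity/convergence trade-off the text describes: the signed variants save multiplications at the cost of somewhat slower convergence.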
This detection capability successfully identifies spectrum gaps, enabling spectrum reuse opportunities. To achieve the probability of detection (PoD) and the false alarm probability (PoFA) simultaneously, minimal SNR values are considered and the relationship between the number of samples and the noise uncertainty is adopted. To obtain a high detection probability without an adaptive method for spectrum sensing, the noise uncertainty increases at minimal SNR levels. Even though the number of received-signal samples examined for sensing has no influence, there are restrictions on the SNR for the detection-probability performance; beyond these limits, false alarm values are sacrificed whenever the noise-uncertainty factor is greater than zero. For noise-uncertainty levels greater than zero, every increase in SNR requires a larger number of samples for an improved detection probability. It is evident that raising the SNR improves the detection sensitivity under the noise conditions of the proposed technique. Theoretical values of the detection probability at different SNR levels, for the basic energy detection technique and for the proposed energy detection using LLAD, are determined using (4) and (5). The computational complexity of various LMS-based adaptive algorithms is shown in Table 1. With the proposed strategy, the detection probability produces superior results, as shown in Table 2. Simulation curves of PoFA versus PoD for various SNR values are provided in Figure 3; for low SNR levels the detection-probability performance is better, as shown in Table 2 and Figure 3. Signal correlations create propagation fading in wireless communications, and their influence at the receiving antenna generates correlation losses in the received signal. The performance of the energy detector is initially unaffected by signal correlations, and the noise power is evaluated using eigenvalues.
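The theoretical values obtained from (4) and (5) can be reproduced with a few lines of code. In the sketch below the parameter values are illustrative, and the bisection helper `Qinv` is our own addition, not from the paper: the threshold δ_d is set for a target P_ofa = 0.1 and the resulting P_od is evaluated for an assumed signal variance.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Qinv(p, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is strictly decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def threshold(T, sigma_w2, p_ofa):
    """delta_d = sigma_w^2 * (Qinv(P_ofa) * sqrt(2T) + T), as below Eq. (5)."""
    return sigma_w2 * (Qinv(p_ofa) * math.sqrt(2.0 * T) + T)

def prob_detection(T, sigma_s2, sigma_w2, delta_d):
    """Eq. (5): P_od = Q((delta_d - T(s2+w2)) / (sqrt(2T) * (s2+w2)))."""
    mean = T * (sigma_s2 + sigma_w2)
    std = math.sqrt(2.0 * T) * (sigma_s2 + sigma_w2)
    return Q((delta_d - mean) / std)

T, sigma_w2 = 1000, 1.0
delta_d = threshold(T, sigma_w2, p_ofa=0.1)
# Plugging delta_d back into Eq. (4) recovers the target false alarm rate
p_fa_check = Q((delta_d - T * sigma_w2) / math.sqrt(2.0 * T * sigma_w2 ** 2))
p_d = prob_detection(T, sigma_s2=0.1, sigma_w2=sigma_w2, delta_d=delta_d)
print(round(p_fa_check, 4), round(p_d, 3))
```

This kind of closed-form check is a quick sanity test before running full Monte Carlo sensing simulations: the false alarm rate recovered from (4) must match the target used to set the threshold.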
the antenna correlation effect is avoided while calculating noise power estimate by using one primary user signal when calculating eigen values of antenna correlation effect on bigger eigen values compared to small eigen values. the main goal of the proposed llad is to improve energy detection accuracy when used in low snr and noise-uncertainty situations. the detection probability is then improved with constant false alarm probability and low snr values. threshold values are evaluated for energy detectors to perceive changes in noise power in order to minimize difficulties with noise uncertainties, therefore threshold values are adapted to compute table 1. computational complexities of various sign lms based adaptive. s.no algorithm multiplications additions divisions 1 lms t+1 t+1 nil 2 llad t+4 t+2 1 3 srllad 4 t+2 1 4 sllad t+3 t+2 1 5 sslad t+2 t+2 1 figure 3. pofa versus pod for different snr values. figure 4. convergence curves for sign based llad adaptive algorithms. table 2. performance comparison of eigen value based spectrum sensing, proposed llad and its sign variants. snr (db) 0 -5 -10 -15 -20 pfa pd pfa pd pfa pd pfa pd pfa pd eigen value-based spectrum sensing 0.4 0.6789 0.3 0.6989 0.3 0.7125 0.3 0.7859 0.3 0.8752 llad 0.2520 0.7824 0.3 0.8997 0.3 0.9581 0.3 0.9785 0.3 0.9969 sr-llad 0.26 0.6241 0.2 0.6805 0.2 0.7825 0.2 0.8628 0.2 0.8853 s-llad 0.3821 0.5645 0.4 0.7192 0.4 0.7403 0.4 0.7893 0.4 0.7990 ss-llad 0.42 0.4561 0.5 0.4985 0.5 0.5782 0.5 0.5981 0.5 0.6782 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 6 noise power estimation. when compared to spectrum sensing only by using energy detection, it provides improved results for the llad energy detection. the proposed approach for energy detection overcomes noise uncertainty difficulties, but it is also necessary to assess the signal correlation because it is based on noise power estimations with eigen values. 
If signal correlation occurs, the performance of the energy detection and of the noise-power estimates has an impact on spectrum sensing. Convergence is delayed by applying the signum function; as the illustrations show, the sign-regressor variant lies just below its non-signed counterpart. Figure 4 shows that the LLAD algorithm and its sign variants have a higher convergence rate than LMS; hence the SRLLAD algorithm is preferred over the LLAD and LMS algorithms. When the proposed LLAD algorithm exceeds the lower-bound stability constraint [25], the normalised mean square deviation of the noise variance diverges; as a result, the normalised mean square deviation provides higher stability for the proposed strategy. Normalised mean-square-deviation analysis yields faster convergence for the modified normalised median LMS than for NLMS and LMS with small-step-size noisy inputs, whereas the standard modified normalised LMS method diverges at large step-size values because of instabilities. In health-care monitoring applications for remote patients, spectrum sensing with the proposed LLAD algorithm is used to reduce the noisy inputs and interference caused by wireless networks. The patients' information is transmitted to doctors through cognitive systems by attaching wearable devices to the patient's body; the information is transmitted and received concurrently over both channels. In this case the primary user of the cognitive system takes higher priority over the secondary user: the data is accessible to secondary users, but not to primary users. The proposed LLAD algorithm eliminates the interference and disturbances that arise in this situation. Medical equipment is more sensitive than ordinary electrical devices in the health-care context; here the primary users are telemetry applications and the secondary users are hospital information applications.
By predicting spectrum holes, the cognitive radio controller optimises the performance of the system; it regulates channel access, and the corresponding probabilities are determined. For many data channels the loss and delay probabilities of the cognitive radio system are improved.

4. Conclusions

The main objective of this paper is to reduce noise uncertainties during spectrum sensing using the proposed LLAD algorithm. The detection probability and false alarm probability, the error measures of spectrum sensing at the receiver, were derived. The impact of noise uncertainty on the selection of the threshold parameter was then investigated. The mean and mean-square deviation analysis ensures stability for noisy primary-user inputs. A variable step size was considered to improve the stability of the proposed spectrum sensing technique. The LLAD algorithm's performance benefits from a lower steady-state error rate and better convergence. In medical telemetry, this cognitive radio concept is used to minimise echoes and signal correlations with noisy inputs at the primary user. As a consequence, improved outcomes are achieved in terms of noise-uncertainty stability, convergence rate, and steady-state error rate. The false alarm and detection probabilities were both computed as estimation parameters.

References

[1] T. Düzenli, O. Akay, A new spectrum sensing strategy for dynamic primary users in cognitive radio, IEEE Communications Letters, vol. 20, no. 4, April 2016, pp. 752-755. DOI: 10.1109/LCOMM.2016.2527640
[2] A. Ali, W. Hamouda, Advances on spectrum sensing for cognitive radio networks: theory and applications, IEEE Communications Surveys & Tutorials, vol. 19, no. 2, second quarter 2017, pp. 1277-1304. DOI: 10.1109/COMST.2016.2631080
[3] A. Hajihoseini, S. A. Ghorashi, Distributed spectrum sensing for cognitive radio sensor networks using diffusion adaptation, IEEE Sensors Letters, vol. 1, no. 5, Oct.
2017, pp. 1-4. DOI: 10.1109/LSENS.2017.2734561
[4] L. Gahane, P. K. Sharma, N. Varshney, T. A. Tsiftsis, P. Kumar, An improved energy detector for mobile cognitive users over generalized fading channels, IEEE Transactions on Communications, vol. 66, no. 2, Feb. 2018, pp. 534-545. DOI: 10.1109/TCOMM.2017.2754250
[5] D. Sun, T. Song, B. Gu, X. Li, J. Hu, M. Liu, Spectrum sensing and the utilization of spectrum opportunity tradeoff in cognitive radio network, IEEE Communications Letters, vol. 20, no. 12, Dec. 2016, pp. 2442-2445. DOI: 10.1109/LCOMM.2016.2605674
[6] W. Chin, On the noise uncertainty for the energy detection of OFDM signals, IEEE Transactions on Vehicular Technology, vol. 68, no. 8, Aug. 2019, pp. 7593-7602. DOI: 10.1109/TVT.2019.2920142
[7] S. Yasmin Fathima, M. Zia Ur Rahman, K. M. Krishna, S. Bhanu, M. S. Shahsavar, Side lobe suppression in NC-OFDM systems using variable cancellation basis function, IEEE Access, vol. 5, 2017, pp. 9415-9421. DOI: 10.1109/ACCESS.2017.2705351
[8] J. Yao, M. Jin, Q. Guo, Y. Li, J. Xi, Effective energy detection for IoT systems against noise uncertainty at low SNR, IEEE Internet of Things Journal, vol. 6, no. 4, Aug. 2019, pp. 6165-6176. DOI: 10.1109/JIOT.2018.2877698
[9] S. K. Gottapu, V. Appalaraju, Cognitive radio wireless sensor network localization in an open field, 2018 Conference on Signal Processing and Communication Engineering Systems (SPACES), Vijayawada, 2018, pp. 45-48. DOI: 10.1109/SPACES.2018.8316313
[10] J. Kim, J. P. Choi, Sensing coverage-based cooperative spectrum detection in cognitive radio networks, IEEE Sensors Journal, vol. 19, no. 13, 1 July 2019, pp. 5325-5332. DOI: 10.1109/JSEN.2019.2903408
[11] S. MacDonald, D. C. Popescu, O. Popescu, Analyzing the performance of spectrum sensing in cognitive radio systems with dynamic PU activity, IEEE Communications Letters, vol. 21, no. 9, Sept. 2017, pp. 2037-2040.
DOI: 10.1109/LCOMM.2017.2705126
[12] Lorenzo Ciani, Alessandro Bartolini, Giulia Guidi, Gabriele Patrizi, A hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, Acta IMEKO, vol. 9, no. 1, 2020, pp. 3-9. DOI: 10.21014/acta_imeko.v9i1.732
[13] D. Li, J. Cheng, V. C. M. Leung, Adaptive spectrum sharing for half-duplex and full-duplex cognitive radios: from the energy efficiency perspective, IEEE Transactions on Communications, vol. 66, no. 11, Nov. 2018, pp. 5067-5080. DOI: 10.1109/TCOMM.2018.2843768
[14] M. Tavana, A. Rahmati, V. Shah-Mansouri, B. Maham, Cooperative sensing with joint energy and correlation detection in cognitive radio networks, IEEE Communications Letters, vol. 21, no. 1, Jan. 2017, pp. 132-135. DOI: 10.1109/LCOMM.2016.2613858
[15] G. Yang, Jun Wang, Jun Luo, Oliver Yu Wen, Husheng Li, Qiang Li, Shaoqian Li, Cooperative spectrum sensing in heterogeneous cognitive radio networks based on normalized energy detection, IEEE Transactions on Vehicular Technology, vol. 65, no. 3, March 2016, pp. 1452-1463. DOI: 10.1109/TVT.2015.2413787
[16] Livio D'Alvia, Eduardo Palermo, Stefano Rossi, Zaccaria Del Prete, Validation of a low-cost wireless sensor's node for museum environmental monitoring, Acta IMEKO, vol. 6, no. 3, September 2017, pp. 45-51.
doi: 10.21014/acta_imeko.v6i3.454 [17] s. cheerla, d. venkata ratnam, k. s. teja sri, p. s. sahithi, g. sowdamini, neural network based indoor localization using wi-fi received signal strength, journal of advanced research in dynamical and control systems, 10 (4), 2018, pp. 374379. [18] i. srivani, g. siva vara prasad, d. venkata ratnam, a deep learning-based approach to forecast ionospheric delays for gps signals, ieee geoscience and remote sensing letters, 16(8), 2019, pp. 1180-1184. doi: 10.1109/lgrs.2019.2895112 [19] n. b. gayathri, g. thumbur, p. rajesh kumar, m. z. u. rahman, p. v. reddy, a. lay-ekuakille, efficient and secure pairing-free certificateless aggregate signature scheme for healthcare wireless medical sensor networks, ieee internet of things journal, 6(5), 2019, pp. 9064-9075. doi: 10.1109/jiot.2019.2927089 [20] g. thumbur, n. b. gayathri, p. vasudeva reddy, m. z. u. rahman, a. lay-ekuakille, efficient pairing-free identity-based ads-b authentication scheme with batch verification, ieee transactions on aerospace and electronic systems, 55(5), 2019, pp. 2473-2486 doi: 10.1109/taes.2018.2890354 [21] s. atapattu, c. tellambura, h. jiang, n. rajatheva, unified analysis of low-snr energy detection and threshold selection, ieee transactions on vehicular technology, vol. 64, no. 11, nov. 2015, pp. 5006-5019. doi: 10.1109/tvt.2014.2381648 [22] s. surekha, m. z. ur rahman, a. lay-ekuakille, a. pietrosanto, m. a. ugwiri, energy detection for spectrum sensing in medical telemetry networks using modified nlms algorithm, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-5. doi: 10.1109/i2mtc43012.2020.9129107 [23] a. sulthana, m. z. u. rahman, s. s. mirza, an efficient kalman noise canceller for cardiac signal analysis in modern telecardiology systems, ieee access, vol. 6, 2018, pp. 3461634630. doi: 10.1109/access.2018.2848201 [24] m. n. salman, p. trinatha rao, m. z. u. 
Case studies for the MathMet quality management system at VSL, the Dutch national metrology institute

ACTA IMEKO
ISSN: 2221-870X
June 2023, Volume 12, Number 2, 1-5

Gertjan Kok1

1 VSL, Thijsseweg 11, 2629 JA, Delft, The Netherlands

Section: Research Paper

Keywords: European metrology network; MathMet; quality management system; validation

Citation: Gertjan Kok, Case studies for the MathMet quality management system at VSL, the Dutch national metrology institute, Acta IMEKO, vol. 12, no. 2, article 21, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-21

Section Editor: Eric Benoit, Université Savoie Mont Blanc, France

Received July 11, 2022; in final form April 21, 2023; published June 2023

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.
Funding: The work presented in this article has been performed in the EMPIR project "Support for a European Metrology Network for Mathematics and Statistics" (15NET05 MathMet). This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.

Corresponding author: Gertjan Kok, e-mail: gkok@vsl.nl

1. Introduction

Metrology is the science of measurement, founded on the SI system of units [1]. The metrological traceability of measurement results is an essential part of metrology. It is defined [2] as the property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty. For this chain to work properly, all constituent parts should be carefully assessed and validated, and the results of the validation should be properly documented. Measurement instruments play a dominant role in this chain, but hardware and instrumentation are not the only things that matter. Mathematical calculations can form an essential part of the measurement. These calculations will almost certainly be implemented in software, which may have been validated using reference datasets, and the whole data analysis procedure may be based on a written guideline. To comply with metrological traceability, it is therefore essential that the software, data and guidelines used are also under quality control. This means that their working and content are checked for correctness, and that meta-information such as version control data is properly managed. As there is worldwide cooperation in metrological applications, it is logical to organise this type of quality control at an international level.
The EMN MathMet [3] is therefore developing a lightweight quality management system (QMS) against which the existing procedures at national metrology institutes (NMIs) can be benchmarked, and which can help to complement them so as to achieve more uniformity in how different NMIs assess the quality of software, data and guidelines. A full description of this QMS is presented in [4], which has recently been published in Acta IMEKO. This article accompanies and supports [4]. In this contribution we report on the application of the QMS by VSL to several use cases concerning software, reference data and guidelines. We discuss the usefulness, advantages and disadvantages of the QMS and possible pitfalls in Sections 3 to 5, after briefly introducing the QMS in Section 2. Finally, in Section 6 some overall conclusions are formulated.

Abstract: The European Metrology Network (EMN) MathMet is a network in which a large number of European national metrology institutes combine their forces in the area of mathematics and statistics applied to metrological problems. One underlying principle of such a cooperation is to have a common understanding of the 'quality' of software, data and guidelines. To this purpose a flexible, lightweight quality management system (QMS), also referred to as quality assessment tools (QAT), is under development by the EMN. In this contribution the application of the QMS to several use cases of different nature by VSL is presented. The benefits and usefulness of the current version of the QMS are discussed from the viewpoint of a particular employee of VSL, and an outlook for possible future extensions and usage of the QMS is given.

Note that all these viewpoints and conclusions are
from the perspective of one employee of VSL only, relate to the particular version of the QMS of March 2022, and are not necessarily shared by other NMIs or by the EMN MathMet itself.

2. Short overview of the QMS for software, data and guidelines

A thorough overview of the QMS for software, data and guidelines is given in [4]. In this section a summary is given, starting with some remarks regarding its scope.

2.1. Goal of the QMS

Originally there was the idea that the EMN itself would 'recommend' software, reference data and guidelines. Assessment of these items by means of the QMS would ensure that the EMN MathMet recommendations meet the highest quality levels and achieve wide use and substantial impact. For various reasons this is currently not seen as realistic. One important reason is the fact that an EMN is part of the larger entity EURAMET [5], and that the decision-making authority, responsibility and liability for such recommendations is not entirely clear. The second reason is the scope of the EMN, which is now seen as a platform to interact with stakeholders and to define future research directions, fostering collaboration and preventing duplication of work. Actual technical work should be done inside other forms of cooperation. Linked to this reason is that the budget required for performing an assessment is not available within the EMN itself. The QMS might therefore be seen as a tool to help individual NMIs assess software, data and guidelines, rather than a tool for the EMN itself. Performed reviews will not be published on the MathMet website, but could be put on the website of an individual NMI if an NMI wishes to do so. It therefore seems reasonable to assess the MathMet QMS from the perspective of a single NMI.
Various NMIs in MathMet have announced that they will indeed discuss the benefits of the MathMet QMS with the people directly responsible for quality control at their respective NMIs. This will be done in the near future at VSL as well. This article presents an initial assessment of the QMS by the author.

2.2. QMS for software

The QMS for software consists of an interactive PDF file of 5 pages. Based on the calculated risk level, specific fields are visible and need to be filled out. The QMS for software requires that the project team provides information and evidence of documents covering the following aspects and activities:
• some meta-data
• a risk level analysis resulting in a "software integrity level" that determines the quality interventions needed
• user requirements
• functional requirements
• design
• coding
• verification
• validation
• delivery, use and maintenance.

2.3. QMS for data

The QMS for data consists of an interactive PDF file with 41 questions, which are again only visible if they are deemed relevant for the selected risk level. For data the team should provide information regarding:
• general details and responsibilities
• a risk level analysis, resulting in a "data integrity level" that determines the quality interventions needed
• user requirements documentation and approval
• data life cycle documentation
• quality planning
• quality monitoring, control and improvement
• quality assurance
• understandability
• metrological soundness.

Most questions to be answered are of a general nature. Only the last set of questions explicitly involves some metrological aspects. There are no questions explicitly addressing the mathematical aspects the data may have.

2.4. QMS for guidelines

The QMS for guidelines consists of two different checklists: one checklist for existing guidelines and one for future guidelines.
At the moment of writing of this manuscript these checklists were still out for review by the project partners, but preliminary versions have been assessed by VSL. The checklists are quite similar. They ask for information regarding:
• organization generating the document
• independent review and approval available
• appropriate metadata available
• copyright and IP protection
• language
• mentioning of target audience
• relevance for mathematics and statistics in metrology and for the target audience
• clearly stated conclusions
• appropriate references
• presentation easy to understand.

What is noteworthy is that the QMS checklist does not ask the user of the checklist to perform a thorough review of all mathematics, but rather to assess whether this has been done (and documented!) already, and by whom. For some questions, e.g. regarding the presentation and conclusions, it would of course be beneficial to read through the complete document. However, these questions can also be answered with a reasonable level of confidence by reading only small parts of the document.

3. Use case 1: QMS applied to software

The usefulness of the QMS has been assessed by applying it to two pieces of software.

3.1. Context

The first piece of software was a library of mathematical routines written in Python which can be used to take advantage of redundancy in sensor network data [6]. It was developed in the research project Met4FoF [7]. At a first stage a chi-squared based consistency check is performed to assess the statistical consistency of the sensor data. In the case of consistency, the measurement data is combined into a best estimate of the measurand respecting sensor uncertainties and covariances, whereas otherwise the largest consistent subset of sensors is constructed.
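As an aside, the first stage described above can be sketched in a few lines. This is a minimal illustration of the underlying statistics, assuming uncorrelated sensors observing a single scalar measurand; it is not the actual API of the Met4FoF redundancy library [6]:

```python
import numpy as np

def consistent_best_estimate(x, u):
    """Fuse redundant sensor readings x with standard uncertainties u
    into a weighted mean, and return the chi-squared consistency statistic."""
    w = 1.0 / u**2
    xbar = np.sum(w * x) / np.sum(w)          # best estimate (weighted mean)
    u_xbar = 1.0 / np.sqrt(np.sum(w))         # its standard uncertainty
    chi2_obs = np.sum(((x - xbar) / u) ** 2)  # observed chi-squared, dof = n - 1
    return xbar, u_xbar, chi2_obs

# four sensors reading the same measurand (illustrative numbers)
x = np.array([10.1, 10.3, 9.9, 10.2])
u = np.array([0.2, 0.2, 0.2, 0.2])
xbar, u_xbar, chi2_obs = consistent_best_estimate(x, u)

# 7.815 is the 95 % chi-squared quantile for dof = 3;
# chi2_obs below it means the readings are mutually consistent
print(xbar, u_xbar, chi2_obs, chi2_obs < 7.815)
```

With correlated sensors the weighted mean generalises to a generalised least-squares estimate and the test statistic uses the full covariance matrix; the library handles that case as well.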
This is done not only for the case in which the sensors directly measure values of the measurand, but also if there is a linear relationship between the vector of sensor values x and a vector of values y for the measurand. The vector y reflects the availability of multiple, redundant estimates of the measurand. The relationship takes the form

y = a + B x ,  (1)

in which a is a vector and B a matrix.

The second piece of software that was used to evaluate the QMS consisted of a recently developed calculation module based on the written standard ISO 6142-1 [8]. This software is used at VSL in the production of certified reference materials (i.e. gas mixtures) for customers. The input to the software consists of atomic weights, chemical formulas of the mixture components, amount-of-substance fractions of the components in the parent gases, and the added gas mass from each parent gas mixture to the target gas mixture based on weighing of the cylinder. The outputs of the calculation module are the amount-of-substance fractions of the components in the target gas mixture, including their uncertainties and covariances.

The quality system at VSL requires that software should be version controlled, documented and validated. However, there are no uniform, detailed procedures or templates for this purpose. In practice different groups assure the quality in different ways.

3.2. Benefits and usefulness of the QMS

The QMS for software was applied to these pieces of software with the aim of assessing the QMS, rather than constructing all required information that might not be readily available. The following parts of the QMS for software were especially appreciated:
• The templates help to give a uniform description of the software.
• The templates help to avoid overlooking important aspects of software quality.
• At VSL there is a focus on version control, documentation and validation of software, and on storing these properly.
User and functional requirements as well as the software design may be lost after the software has been released. It would be good to properly control these documents as well. Especially the documentation of the software design could be useful for future improvements of the software, possibly by new personnel.

The following parts of the QMS for software seemed less appropriate for the VSL context:
• The 'review by customer or proxy' may need some flexible interpretation, as VSL does not sell any software to external customers. There could be a 'VSL-internal customer' for the software, and/or the envisaged outputs of the software could be assessed against known requirements of customers. In the case of research projects that are, e.g., funded by the EU, the 'review by customer' is usually difficult to achieve.
• The number of up to three required reviews for some aspects is quite large and can be burdensome, especially for a small NMI like VSL.
• The document asks for requirement and design documents at the moment of filling out the form. At VSL, a gradual, agile-based approach is often used for software development. It is not clear to the author of this paper how the QMS should be used in that context. Should the QMS forms and all implied documentation and reviews be repeated at each 'sprint' (development cycle), or at each new release of the software? Some more guidance and clarity would be beneficial.

As a general observation, it would be helpful if the QMS indicated some examples of ICT tools (preferably open source) that could be used in combination with an agile development process while assuring the traceability (in an administrative sense) of all choices made. In this way, possible requirements of the MathMet QMS that may not be directly accommodated by the quality system and available software systems at an NMI could more readily be implemented at an NMI and used in the context of work related to the EMN MathMet.

4. Use case 2: QMS applied to data

In this section we assess the QMS for data by applying it to some mathematical reference datasets that were generated in an earlier project.

4.1. Context

In the TraCIM project [9] reference datasets were produced for various mathematical problems, e.g. for non-linear least-squares fitting problems. The precise definitions of the computational aims, together with the datasets, were stored in a database [10] accessible from the internet for registered users. An example of such a fit problem is the determination of the best-fit parameters a′, b′ and c′ of a function y = f(x, a, b, c) modelling exponential decay to n data points (x_i, y_i), 1 ≤ i ≤ n, such that

D(a, b, c) = Σ_{i=1}^{n} (y_i − f(x_i, a, b, c))² → min .  (2)

In the database [10] reference datasets are available for this and other fit problems. At VSL there is no specific quality guidance for the generation and documentation of such datasets, other than the requirements mentioned in Section 3 for software.

4.2. Benefits and usefulness of the QMS

The interactive PDF file with 37 pages and at most 42 questions was filled out for the application described in Section 4.1. The following parts of the QMS for data were especially appreciated:
• With the help of the data quality management plan template a uniform plan for all applications can be created.
• The questions cover a large range of quality aspects, which might be forgotten if the QMS tool were not used.

The following parts of the QMS for data seem less appropriate for VSL's needs:
• The mathematical aspect of the data is not particularly addressed. There could be more guidance with respect to how to assess the correctness of numerical data.
• Four responsibilities related to data are mentioned: data manager, data administrator, data steward and data technician.
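To make the computational aim (2) of Section 4.1 concrete: the sketch below generates an exemplary reference dataset from known parameters and checks that the sum of squares D vanishes there and grows under perturbation. The model parametrisation f(x) = a·exp(-b·x) + c and the chosen parameter values are illustrative assumptions, not the exact TraCIM computational aim:

```python
import numpy as np

def f(x, a, b, c):
    # illustrative exponential-decay model (assumed, not the TraCIM definition)
    return a * np.exp(-b * x) + c

def D(params, x, y):
    # sum of squared residuals, the objective of equation (2)
    a, b, c = params
    return np.sum((y - f(x, a, b, c)) ** 2)

# reference dataset generated from known "true" parameters
true = (2.0, 0.5, 1.0)
x = np.linspace(0.0, 5.0, 20)
y = f(x, *true)

# at the reference parameters the sum of squares vanishes ...
assert np.isclose(D(true, x, y), 0.0)
# ... and any perturbation increases it, as required by (2)
assert D((2.1, 0.5, 1.0), x, y) > D(true, x, y)
```

Candidate fitting software can then be validated by checking that it recovers the stored reference parameters, to within a stated numerical tolerance, from the stored dataset.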
These roles do not always seem to exist at VSL, especially not for data generated in research projects. In many cases the situation seems to be much simpler.
• Similar to what was mentioned for the QMS for software, it would be nice if more guidance could be given on how to implement all mentioned quality aspects by means of some ICT tools, preferably open source.

5. Use case 3: QMS applied to guidelines

In this last use case we discuss the application of the QMS to a set of mathematical guidelines which was produced in the EMRP project NEW04 [11], and to which VSL contributed.

5.1. Context

In the NEW04 project three best practice guides (BPGs) were produced [12]:
1. A Guide to Bayesian Inference for Regression Problems (BPG1)
2. Best Practice Guide to Uncertainty Evaluation for Computationally Expensive Models (BPG2)
3. A Guide to Decision-Making and Conformity Assessment (BPG3)

Except for the formatting of the title page, BPG1 and BPG2 are very similar in document structure. BPG3 consists of a set of four loosely connected documents. We applied the QMS checklist for existing guidelines to BPG2. This document of 84 pages provides a summary of current best practice in uncertainty evaluation for computationally expensive models. The first part of the document explains the methods; the second part presents three case studies.

5.2. Benefits and usefulness of the QMS

The QMS checklist mainly asks questions about the existence of specific information like 'version number', 'independently reviewed', 'target audience' and 'appropriate references'. The benefit of this approach is that the assessment can be done fairly quickly, without having to read, study and check the document itself. The QMS checklist verifies that some formal quality criteria are fulfilled; it does not require a tedious scientific review of the content.
These simple checks can give a good indication, in a very quick way, of the overall care with which the document has been prepared, which in our opinion is the main benefit of this QMS for guidelines. If the conclusion is that the document has not been independently reviewed, then this job still remains to be done, but that is not directly in the scope of the QMS.

The application of the QMS checklist to BPG2 yielded some interesting deficits. BPG2 has no version number, it does not say anything about 'copyright' or 'independent review', and there are no 'clearly stated conclusions': the document simply ends with the last use case. This is particularly interesting, because several of the authors of BPG2 are MathMet members and were even involved in the creation of the QMS. The mathematical content of the BPG may be impeccable, but it does not fulfil all quality metrics of the MathMet QMS.

6. Conclusions and outlook

As the scope of the EMN has become clearer over time, the place of the QMS in it has also been reassessed. Initial ideas about a MathMet QMS that ensures that 'EMN MathMet recommendations meet the highest quality levels and achieve wide use and substantial impact' [13] seem to have been replaced in practice by a QMS that can help individual NMIs with their quality assessment, at least as perceived by VSL. In this paper it has been assessed how this worked out for VSL, and which aspects of the QMS proved useful and which less appropriate for the VSL context. The overall conclusion is that the collaboration within the EMN on the QMS gave useful insights with respect to assuring the quality of software, data and guidelines, and with respect to which aspects matter. At the same time, a proper assessment of the different parts and questions of the QMS is needed in order to best align it with VSL's requirements and working field. A discussion with the quality coordinators at VSL is still outstanding.
When more NMIs assess the MathMet QMS and reflect on its implementation in NMI-specific quality procedures, the common ground of the most useful aspects of the QMS will become clearer. This may lead to a next step in the development of the QMS, which should lead to greater uniformity in the quality assessment of software, data and guidelines by NMIs, and to a reduction of the costs to set up such a system. There might also be additional guidance on the usage of modern ICT tools to assure the quality of software, data and guidelines in a more efficient way. As guaranteeing the quality of software, data and guidelines is nowadays getting more and more attention (cf. the attention paid to research papers with open software and data), the creation of a common QMS framework by the EMN MathMet for NMIs seems to come at the right moment. This and similar initiatives will help to maintain and increase the trustworthiness of services provided by NMIs.

Acknowledgement

We thank the NPL team for leading the development of the MathMet QMS, and in particular Peter Harris for fruitful discussions and feedback on an earlier draft of this paper. We also thank the referees for their comments, which helped to improve this paper. The work presented in this article has been performed in the EMPIR project "Support for a European Metrology Network for Mathematics and Statistics" (15NET05 MathMet). This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.

References

[1] BIPM, SI system of units. Online [Accessed 6 June 2023] https://www.bipm.org/en/measurement-units
[2] BIPM, VIM3, definition of traceability. Online [Accessed 6 June 2023] https://jcgm.bipm.org/vim/en/2.41.html
[3] EURAMET, European Association of National Metrology Institutes, European Metrology Network (EMN) MathMet. Online [Accessed 6 June 2023] www.euramet.org/mathmet
[4] Keith Lines, Jean-Laurent Hippolyte, Indhu George, Peter Harris, A MathMet quality management system for data, software, and guidelines, Acta IMEKO, vol. 11, no. 4, article 8, December 2022, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1348
[5] EURAMET, European Association of National Metrology Institutes. Online [Accessed 6 June 2023] www.euramet.org
[6] Metrology for the Factory of the Future, GitHub software repository. Online [Accessed 6 June 2023] https://github.com/met4fof/met4fof-redundancy
[7] EMPIR project 17IND12 Met4FoF, Metrology for the Factory of the Future, 2018-2021. Online [Accessed 6 June 2023] https://www.ptb.de/empir2018/met4fof/home/
[8] ISO 6142-1:2015, Gas analysis - Preparation of calibration gas mixtures - Part 1: Gravimetric method for Class I mixtures, ISO, Geneva, 2015.
[9] EMRP project NEW06 TraCIM, Traceability for computationally-intensive metrology, 06/2012-05/2015; web site. Online [Accessed 6 June 2023] https://www.tracim.eu
[10] TraCIM database with reference datasets. Online [Accessed 6 June 2023] http://www.tracim-cadb.npl.co.uk/
[11] EMRP project NEW04, Novel mathematical and statistical approaches to uncertainty evaluation, 08/2012-07/2015, web site. Online [Accessed 6 June 2023] https://www.ptb.de/emrp/new04-home.html
[12] EMRP project NEW04, Novel mathematical and statistical approaches to uncertainty evaluation: best practice guides.
Online [Accessed 6 June 2023] https://www.ptb.de/emrp/2976.html
[13] Project protocol of EMPIR project 15NET05 MathMet, Support for a European Metrology Network for Mathematics and Statistics, 2019-2023.

A cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata

ACTA IMEKO
ISSN: 2221-870X
June 2022, Volume 11, Number 2, 1-9

Mary Swarna Latha Gade1, Rooban Sudanthiraveeran1

1 Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, India

Section: Research Paper

Keywords: Majority gate; quantum-dot cellular automata; reversible logic; reversible gates

Citation: Mary Swarna Latha Gade, Sudanthiraveeran Rooban, A cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata, Acta IMEKO, vol. 11, no. 2, article 35, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-35

Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India

Received January 14, 2022; in final form April 23, 2022; published June 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Sudanthiraveeran Rooban, e-mail: sroban123@gmail.com

1. Introduction

Building digital logic circuits from reversible logic allows, in principle, zero energy dissipation. For irreversible computations, Landauer [1] established that each lost bit of information results in at least kB T ln(2) joules of dissipated heat.
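For scale, the Landauer bound is easy to evaluate numerically. The room temperature T = 300 K below is an assumption for illustration, not a value from the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # assumed room temperature, K

# minimum heat dissipated per irreversibly erased bit
E_landauer = k_B * T * math.log(2)
print(f"{E_landauer:.3e} J")  # ~2.87e-21 J, i.e. a few zeptojoules
```

Although tiny per bit, this bound becomes significant when multiplied by the switching rates and device counts of modern dense logic, which is the motivation for reversible computing.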
Lent et al. [2] demonstrated that zero-energy dissipation in a digital logic circuit is only possible if the circuit is made up of reversible digital logic gates. The energy dissipation of a quantum-dot cellular automata (QCA) circuit can be significantly lower than kB T ln(2), because QCA circuits are clocked systems that maintain their data. This property encourages the use of QCA technology in the construction of reversible digital circuits. Reversibility, on the other hand, prevents bit loss but does not identify bit faults in the circuit. Such faults can be corrected by using fault-tolerant gates with reversible logic. It becomes simpler to detect and correct defects when the system is made up entirely of fault-tolerant elements. Parity is used to achieve fault tolerance in communications and other systems. Parity-preserving circuits are therefore a promising architectural principle for the creation of reversible fault-tolerant devices in nanotechnology. Because of its very small size and low power consumption, QCA is recommended for use in nanotechnology research [3]. Reversible circuits are built from basic reversible logic gates. These gates produce a one-to-one mapping between input and output vectors, making the set of inputs equal in size to the set of outputs. In [4]-[6], substantial contributions were made to the research on the construction of basic reversible logic gates. These topologies, however, are less efficient and call for further reductions in the numbers of inverters and majority gates. This motivates the development of fault-tolerant reversible logic gate architectures in QCA technology. The basic reversible gates are constructed using upgraded XOR gate designs. Because of the rapid advancement

Abstract: In order to improve the density on a chip, the scaling of CMOS-based devices continues to shrink in accordance with Moore's law.
This scaling affects the performance of CMOS devices due to specific limitations, such as energy dissipation and component alignment. Quantum-dot cellular automata (QCA) have been proposed to overcome the inadequacies of CMOS technology. Data loss is a major risk in irreversible digital logic computing. As a result, the demand for nano-scale digital operations with reduced heat dissipation is expanding. Reversible logic structures are a strong contender for the creation of efficient digital systems, and the reversible logic gate is an essential part of reversible circuit design. The QCA design of basic reversible logic gates is discussed in this study. These gates are built using new QCA designs of XOR gates with two and three inputs. Simulation performance is tested by simulating the specified reversible logic gate layouts in QCADesigner. The measurement and optimization of design techniques at all stages is required to reduce power and area and to enhance speed. The work describes experimental and analytic approaches for measuring design metrics of reversible logic gates in QCA, such as ancilla inputs, garbage outputs, quantum cost, cell count and area, while accounting for the effects of energy dissipation and circuit complexity. The parameters of the reversible gates with modified structures are measured and then compared with the existing designs. The designed F2G, FRG, FG, RUG and UPPG reversible logic gates in QCA technology show improvements of 42 %, 23 %, 50 %, 39 % and 68 % in terms of cell count, and of 31 %, 20 %, 33 %, 20 % and 72 % in terms of area, with respect to the best existing designs. The findings illustrate that the proposed architectures outperform previous designs in terms of complexity, size and clock latency.
of QCA technology, a great deal of research has been done in the field of QCA-based reversible logic measurement and technology, with the goal of improving performance metrics in terms of measurement accuracy and complexity, and thus improving the efficiency and precision of circuits. Each reversible logic gate in a reversible logic system architecture measures a specific parameter separately. The system then combines all the individual results into a comprehensive set of measurement data using updated gate structures. Many parameter values in the existing designs are not measured when the circuits are tested. In individual cases it might be necessary to make measurements to accurately analyse the behaviour of the reversible logic gates.

1.1. Reversible logic

A logic function is reversible if each input state yields a different output state. A reversible gate is a gate that realizes a reversible function. There is a bijective mapping between the inputs and the outputs. As a result, the numbers of inputs and outputs must be equal in order to satisfy the reversibility principle. In reversible logic gates feedback is not acceptable, and fan-out is not permitted. Circuits implemented with reversible logic gates are measured with parameters like primitive gate count, constant inputs, logic calculation, garbage outputs and quantum cost. Based on their features, these gates have been grouped into three types:
• Basic reversible logic gates: these gates are necessary parts of any reversible circuit. Several reversible gates have been invented; the NOT gate and the buffer are the most basic.
• Conservative reversible logic gates: these gates have the same number of zeros and ones at their inputs and at their outputs. Consider the case of the MCF gates.
• Parity-preserving reversible logic gates: these gates have the same parity (either even or odd) at their inputs and outputs; that is, the XOR of all inputs equals the XOR of all outputs. Conservative gates are necessarily parity-preserving, while the converse is not true.

1.2. QCA

In [2], Lent et al. introduced QCA technology, which is used to construct all components of a nanoscale QCA circuit. Each QCA cell is made up of four quantum dots, which are nanostructures capable of trapping electric charges. Because these electrons repel each other due to the Coulomb interaction, they occupy opposite corners of the square cell. The alignment of the electron pair defines two possible polarization states, -1.00 and +1.00. The majority gate and the inverter gate are the two most basic QCA gates. Logic operations, governed by the clocking scheme, are executed through the Coulombic interactions of electrons in neighbouring cells.

1.3. Related work

Several studies using QCA to implement reversible logic gates have been conducted during the previous decade [6]-[11]. QCA architectures using majority gates as the foundation device for numerous reversible gates were shown by the researchers in [6]. Because the output lines are not strongly polarised, these gates are not considered sufficient. For the Fredkin, Feynman and Toffoli gates, Mohammadi et al. [7] built QCA-based reversible circuits integrating both rotated and normal cells. The architectures shown depend on the arrangement of the majority gates. However, an effective QCA architecture necessitates a new majority circuit design process, which adds to the complexity. Peres and Feynman gates based on QCA-realisable majority gates were discussed in [8]-[10]. Their topologies are less optimal, necessitating reductions in both majority and inverter gates.
The authors of [11] proposed employing majority gates in QCA to create an efficient reversible logic gate arrangement. However, these designs have the usual restrictions, such as a higher number of constant input signals, and the topologies presented by Singh et al. [11] cannot be achieved without employing any wire crossing. The Feynman gate [12] is the well-known 2×2 reversible gate; it can be used to increase fan-out. The famous 3×3 reversible gates are the Toffoli, Peres and Fredkin gates [13], [14]. Trailokya Nath Sasamal et al. [15] designed reversible gates such as FG, FRG, Toffoli, Peres and F2G in QCA with 26, 68, 59, 88 and 53 QCA cells and 0.03 µm², 0.06 µm², 0.034 µm², 0.097 µm² and 0.058 µm² area, respectively. The main contributions of the paper are as follows:
1. New optimized layouts are introduced for the existing reversible logic gates, reducing the complexity of the circuits.
2. The performance of the proposed gates is then analysed and compared with the existing gates using conventional parameters.

2. Novel QCA designs for reversible gates
Reversible gates are the essential building elements of reversible digital logic. Such gates produce a one-to-one mapping between the input and output sets, allowing the inputs to be recovered from the outputs. QCA designs of the XOR, F2G (double Feynman gate), FRG (Fredkin gate), FG (Feynman gate), RUG (reversible universal gate) and UPPG (universal parity preserving gate) reversible gates are provided in the following section. All of the suggested layouts adhere to the QCA design rules: all input cells arrive in the same clock zone; long QCA wires are subdivided into four clock zones, observing the maximum and minimum cell counts per zone to avoid increased signal propagation and switching delays; and none of the suggested layouts require crossovers.
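The gates named above have standard Boolean specifications, which the QCA layouts in this paper realize physically. A minimal logic-level sketch (illustrative only, not the QCA implementation):

```python
def fg(a, b):
    """Feynman gate (CNOT): P = A, Q = A xor B."""
    return a, a ^ b

def f2g(a, b, c):
    """Double Feynman gate: P = A, Q = A xor B, R = A xor C."""
    return a, a ^ b, a ^ c

def frg(a, b, c):
    """Fredkin gate: identity when A = 0, swap of B and C when A = 1."""
    return (a, b, c) if a == 0 else (a, c, b)

# Fan-out with the Feynman gate: setting B = 0 copies input A to both outputs.
print(fg(1, 0))  # (1, 1)
```

The fan-out example shows why the Feynman gate is commonly used to duplicate a signal, since direct fan-out is not permitted in reversible logic.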
The prototypes of the several reversible gates have been synthesized and measured using the QCADesigner tool. An XOR gate is the basic element of the most elementary reversible gates. It has been observed that the endurance of a reversible logic architecture is influenced not only by the complexity but also by the latency of its XOR gate. Two inverters and three majority gates are required in the classic XOR architecture. The suggested reversible layouts use an integrated architecture with only 12 QCA cells for both the 2-input and 3-input XOR gates, as can be seen in Figure 1. The proposed block diagram and schematic design of the F2G [16], [18] are illustrated in Figure 2a. It has three inputs, A, B, C, and three outputs, P, Q, R. The F2G concept uses the present XOR gate to produce a design that is optimal in area. Figure 2b shows a possible QCA architecture for this gate. Note that the first input A is carried to P, whereas the realization of the two output bits Q and R necessitates the use of two 2-input XOR gates. The designed F2G gate QCA layout utilizes only two clock zones. Figures 3a and 3b illustrate the block diagram and schematic diagram of the proposed Fredkin gate. In general terms, the FRG (Fredkin gate) [19]-[24] is called a universal gate in the sense that it may be configured to operate as many basic components in almost any reversible digital logic circuit. It has three inputs, A, B, C, and three outputs, P, Q, R. Its deployment requires six majority gates. Figure 3c shows the QCADesigner layout of the built Fredkin gate. The suggested modified FRG gate requires only 75 QCA cells. The output signal P is connected to the input signal A, while the output signals Q and R are implemented using majority gates. Four clock zones are used in the intended FRG gate QCA configuration.
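The classic XOR construction from two inverters and three majority gates mentioned above can be sketched at logic level, using the standard QCA identities M(x, y, 0) = x AND y and M(x, y, 1) = x OR y (this is the textbook construction, not the 12-cell integrated layout proposed in this paper):

```python
def maj(a, b, c):
    # QCA three-input majority gate
    return (a & b) | (b & c) | (a & c)

def inv(a):
    # QCA inverter (for single bits)
    return a ^ 1

def xor_classic(a, b):
    """XOR from three majority gates and two inverters:
    (A AND NOT B) OR (NOT A AND B)."""
    return maj(maj(a, inv(b), 0), maj(inv(a), b, 0), 1)

# Exhaustive check against Python's built-in XOR
assert all(xor_classic(a, b) == a ^ b for a in (0, 1) for b in (0, 1))
```

The gate count of this construction (three majority gates, two inverters) is exactly what motivates the search for more compact integrated XOR layouts.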
Figure 4 shows the Feynman gate's block diagram and schematic diagram. It uses two inputs, A, B, and two outputs, P, Q. Because of its widespread use in quantum computing, the FG (Feynman gate) is sometimes referred to as the controlled-NOT or quantum XOR gate [25]-[28]. The measurement of the designed FG gate is done with a coherence vector simulation engine. Figure 4b depicts the corresponding QCA architecture, which has a single 2-input XOR gate. It is important to note that output P is related to input A, while output Q necessitates the use of a simpler XOR gate than the present XOR designs. Just two clock zones are used in the developed FG gate QCA layout [29]. A block diagram and conceptual design for the RUG gate are shown in Figure 5a. It has three inputs and three outputs. In Figure 5b, an appropriate QCADesigner layout is demonstrated for the RUG gate. There are four majority gates and one 2-input XOR gate in this circuit. The output P is generated from a three-input majority gate, Q is collected from three majority gates, and R is obtained from a two-input XOR gate, as shown in the diagram. Four clock zones are used in the intended RUG gate QCA layout. Figure 6a and Figure 6b illustrate the topology and schematic design of the proposed QCA universal parity preserving gate (UPPG). It has four inputs and four outputs. Two majority gates, two 2-input XOR gates and one 3-input XOR gate make up the circuit. The measurement of the modified UPPG gate shows a clear improvement over the existing designs.

Figure 1. QCA layout of XOR gate with 2-input and 3-input.
Figure 2. Double Feynman gate (F2G): a) block diagram and schematic diagram, b) QCA layout.
Figure 3. Fredkin gate (FRG): a) block diagram, b) schematic diagram, c) QCA layout.
Figure 4. Feynman gate (FG): a) block diagram, b) schematic diagram, c) QCA layout.
Figure 6c depicts the suggested QCADesigner layout for the UPPG gate. The proposed arrangement occupies an area of only 0.08 µm². A total of two clock zones are used in the UPPG gate QCA configuration.

3. Results and discussion
The coherence vector simulation engine was used to analyse the layouts with default parameters, and the QCA configurations of the recommended designs were laid out in QCADesigner. The cell size, layer separation distance, dot diameter, high clock level, low clock level, clock shift, clock amplitude factor, relative permittivity, radius of effect, number of tolerances, convergence tolerance and maximum iterations per sample are 18 nm × 18 nm, 5 nm, 9.8 × 10⁻²² J, 3.8 × 10⁻²² J, 0, 2, 12.9, 85 nm, 4, 0.001 and 100, respectively. The simulated waveform of the F2G gate is shown in Figure 7. This waveform demonstrates that the circuit is functional: correct output values are produced for all input data. The implementation requires 31 cells in an area of around 0.04 µm². Figure 8 shows the simulation results of the Fredkin gate. This gate is frequently employed as a multiplexer in a variety of electronic applications. It is made up of 75 QCA cells within an area of 0.08 µm². The Feynman gate was made using 13 QCA cells with a total area of around 0.02 µm², as shown in Figure 4, and the result of its simulation can be seen in Figure 9. Figure 10 depicts the simulation waveform of the RUG. With a surface area of 0.08 µm², it utilizes 59 QCA cells. The proper operation of the UPPG gate is shown in Figure 11. This gate is made with only 75 QCA cells and a surface area of around 0.08 µm². A comparison of the characteristics of the recommended reversible gate configurations with the measurements of previous studies is shown in Table 1 to Table 5.
Table 1 describes the designed F2G reversible gate, which shows an improvement of 42 % and 31 % in QCA cell count and area, respectively, with respect to the best existing design. When the developed F2G is compared to previously described designs, it is the best in terms of the number of QCA cells and the area it occupies, as shown in Table 1. Table 2 shows the planned FRG reversible gate, which was built using 75 QCA cells with a measured area of 0.08 µm² and a 0.5 clock cycle delay; the recommended architecture is the best among all existing designs. When compared to the optimal design, the suggested FRG gate provides a 23 % and 20 % improvement in cell count and area, respectively. The presented Fredkin gate has been found to be more suitable for cascade design, and its output metrics are comparable to the optimal technique. The Fredkin gate has a delay of 0.5 clock cycles, which is less than that of conventional designs. Table 3 presents the intended FG reversible gate, which was built using 13 QCA cells with a 0.02 µm² area and a 0.5 clock cycle latency. The recommended FG circuit is compared to earlier circuits in Table 3. When compared to the optimal design, the proposed FG gate improves cell count and area by 50 % and 33 %, respectively. The total number of QCA cells employed, the total occupied area, the clock delay and the crossovers all influence the comparison. Table 3 demonstrates that the FG circuit designed in this article has fewer cells and a smaller system area than previous designs. Table 4 and Table 5 show the functional efficiency of the recommended structures when compared to typical RUG and UPPG designs.

Figure 5. Reversible universal gate (RUG): a) block diagram, b) schematic diagram, c) QCA layout.
Figure 6. Universal parity preserving gate (UPPG): a) block diagram, b) schematic diagram, c) QCA layout.
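The improvement percentages quoted here follow from a simple relative reduction with respect to the best existing design. A quick sanity check (illustrative only), using the FG values from Table 3:

```python
def improvement(best_existing, proposed):
    """Relative reduction with respect to the best existing design, in percent."""
    return round(100 * (best_existing - proposed) / best_existing)

# FG gate, Table 3: best existing 26 cells / 0.03 um^2 [15],
# proposed design 13 cells / 0.02 um^2.
print(improvement(26, 13))      # 50 (% fewer cells)
print(improvement(0.03, 0.02))  # 33 (% less area)
```

The same formula applied to the other gates reproduces the remaining percentages reported in this section.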
Circuit parameters such as cell count, delay and area are taken into account when evaluating performance. In comparison to past RUG and UPPG designs, the recommended RUG and UPPG show significant improvements. Table 4 shows that, compared to the optimal design, the proposed RUG gate improves cell count and area by 39 % and 20 %, respectively. Table 5 shows that, compared to the optimal design, the suggested UPPG gate improves cell count and area by 68 % and 72 %, respectively.

Table 1. Comparison of the planned F2G configuration with presented designs.
Double Feynman gate | Cell count | Area (µm²) | Delay (cycles) | Wire crossing
F2G [6] | 93 | 0.19 | 0.75 | coplanar
F2G [14] | 51 | 0.06 | 0.5 | –
F2G [5] | 53 | 0.05 | 0.5 | –
Proposed F2G | 31 | 0.04 | 0.5 | –

Table 2. Comparison of the proposed FRG structure with existing designs.
Fredkin gate | Cell count | Area (µm²) | Delay (cycles) | Wire crossing
FRG [7] | 178 | 0.21 | 1 | coplanar
FRG [14] | 100 | 0.092 | 0.75 | –
FRG [6] | 97 | 0.10 | 0.75 | –
Proposed FRG | 75 | 0.08 | 0.5 | –

Table 3. Comparison of the proposed FG structure with existing designs.
Feynman gate | Cell count | Area (µm²) | Delay (cycles) | Wire crossing
FG [7] | 78 | 0.09 | 1 | coplanar
FG [8] | 54 | 0.038 | 0.75 | multilayer
FG [6] | 53 | 0.07 | 0.75 | –
FG [9] | 37 | 0.023 | 0.75 | –
FG [11] | 32 | 0.03 | 0.75 | –
FG [14] | 34 | 0.036 | 0.5 | –
FG [15] | 26 | 0.03 | 0.5 | –
Proposed FG | 13 | 0.02 | 0.5 | –

Figure 7. Simulation result of F2G.
Figure 8. Simulation result of FRG.
Figure 9. Simulation result of FG.
Figure 10. Simulation result of RUG.
Figure 11. Simulation result of UPPG.
The comparison shows that the proposed structures for the existing reversible gates have a more compact architecture than the existing designs. The energy dissipation analysis of the proposed gate structures is shown in Table 6. The results show that the proposed structures outperform previous reversible gate designs and are thus suitable for application in complex nanoscale QCA architectures.

4. Conclusion
In this study, reversible gates such as the Feynman gate (FG), double Feynman gate (F2G), reversible universal gate (RUG), Fredkin gate (FRG) and universal parity preserving gate (UPPG) are designed using QCA technology with optimal area. The designed layouts of the reversible gates have zero wire crossings. The simulation results have been verified using the QCADesigner software. The suggested F2G, FRG, FG, RUG and UPPG QCA layouts are designed with only 31, 75, 13, 59 and 75 QCA cells and 0.04 µm², 0.08 µm², 0.02 µm², 0.08 µm² and 0.08 µm² area, respectively. The robustness of the recommended reversible gates was then measured and compared to the existing gates using standard metrics. The simulation results show that the suggested reversible gates perform better in terms of cell count and area by 42 %, 23 %, 50 %, 39 % and 68 %, and by 31 %, 20 %, 33 %, 20 % and 72 %, respectively.
The suggested architectures outperform existing reversible gate designs, indicating that they are better suited for use in complex nanoscale QCA systems.

References
[1] R. Landauer, Irreversibility and heat generation in the computing process, IBM Journal of Research and Development, vol. 5, no. 3 (July 1961), pp. 183-191. DOI: 10.1147/rd.53.0183
[2] C. S. Lent, M. Liu, Y. Lu, Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling, Nanotechnology, vol. 17, no. 16 (2006). PMID: 21727566. DOI: 10.1088/0957-4484/17/16/040
[3] M. Balali, A. Rezai, H. Balali, F. Rabiei, S. Emadi, Towards coplanar quantum-dot cellular automata adders based on efficient three-input XOR gate, Results in Physics, vol. 7 (2017), pp. 1389-1395. DOI: 10.1016/j.rinp.2017.04.005
[4] A. Peres, Reversible logic and quantum computers, Physical Review A, vol. 32, no. 6 (1985), pp. 3266-3276. DOI: 10.1103/PhysRevA.32.3266
[5] B. Parhami, Fault-tolerant reversible circuits, Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC '06), Pacific Grove, CA, USA, 29 October - 1 November 2006, pp. 1726-1729. DOI: 10.1109/ACSSC.2006.355056
[6] M. Abdullah-Al-Shafi, M. S. Islam, A. N. Bahar, A review on reversible logic gates and its QCA implementation, International Journal of Computer Applications, vol. 128, no. 2 (2015), pp. 27-34. DOI: 10.5120/ijca2015906434
[7] Z. Mohammadi, M. Mohammadi, Implementing a one-bit reversible full adder using quantum-dot cellular automata, Quantum Information Processing, vol. 13 (2014), pp. 2127-2147.
[8] J. C. Das, D. De, Reversible binary to grey and grey to binary code converter using QCA, IETE Journal of Research, vol. 61 (2015), pp. 223-229. DOI: 10.1080/03772063.2015.1018845
[9] K. Sabanci, S. Balci, Development of an expression for the output voltage ripple of the DC-DC boost converter circuits by using particle swarm optimization algorithm, Measurement, vol. 158 (2020), pp. 1-9. DOI: 10.1016/j.measurement.2020.107694
[10] J. C. Das, D.
De, Novel low power reversible binary incrementer design using quantum-dot cellular automata, Microprocessors and Microsystems, vol. 42 (2016), pp. 10-23. DOI: 10.1016/j.micpro.2015.12.004
[11] G. Singh, R. K. Sarin, B. Raj, Design and analysis of area efficient QCA based reversible logic gates, Microprocessors and Microsystems, vol. 52 (2017), pp. 59-68. DOI: 10.1016/j.micpro.2017.05.017
[12] A. M. Chabi, A. Roohi, H. Khademolhosseini, S. Sheikhfaal, Towards ultra-efficient QCA reversible circuits, Microprocessors and Microsystems, vol. 49 (2017), pp. 127-138. DOI: 10.1016/j.micpro.2016.09.015
[13] K. Walus, T. Dysart, G. Jullien, R. Budiman, QCADesigner: a rapid design and simulation tool for quantum-dot cellular automata, IEEE Transactions on Nanotechnology, vol. 3 (2004), pp. 26-29. DOI: 10.1109/TNANO.2003.820815
[14] A. N. Bahar, S. Waheed, M. A. Habib, A novel presentation of reversible logic gate in quantum-dot cellular automata (QCA), International Conference on Electrical Engineering and Information & Communication Technology, Dhaka, Bangladesh, 10-12 April 2014, pp. 1-6. DOI: 10.1109/ICEEICT.2014.6919121
[15] T. Nath Sasamal, A. Kumar Singh, A. Mohan, Quantum-dot cellular automata based digital logic circuits: a design perspective, Studies in Computational Intelligence, vol. 879 (2020). DOI: 10.1007/978-981-15-1823-2
[16] T. Nath Sasamal, A. Mohan, A. Kumar Singh, Efficient design of reversible logic ALU using coplanar quantum-dot cellular automata, Journal of Circuits, Systems and Computers, vol. 27, no. 02 (2018). DOI: 10.1142/s0218126618500214
[17] N. Kumar Misra, S. Wairya, V. Kumar Singh, Approach to design a high performance fault-tolerant reversible ALU, International Journal of Circuits and Architecture Design, vol. 2, no. 1 (2016). DOI: 10.1504/ijcad.2016.075913
[18] E. Sardini, M. Serpelloni, Wireless measurement electronics for passive temperature sensor, IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 9 (2012), pp. 2354-2361. DOI: 10.1109/TIM.2012.2199189

Table 4. Comparison of the proposed RUG structure with existing designs.
RUG gate | Cell count | Area (µm²) | Delay (cycles) | Wire crossing
RUG [7] | 170 | 0.23 | 1 | coplanar
RUG [16] | 187 | 0.22 | 1.75 | coplanar
RUG [6] | 97 | 0.10 | 0.75 | –
Proposed RUG | 59 | 0.08 | 0.5 | –

Table 5. Comparison of the proposed UPPG structure with existing designs.
UPPG gate | Cell count | Area (µm²) | Delay (cycles) | Wire crossing
UPPG [17] | 233 | 0.29 | 2.5 | coplanar
Proposed UPPG | 75 | 0.08 | 0.5 | –

Table 6. Energy dissipation of the proposed reversible logic gate structures.
Gate | Average energy dissipation per cycle (eV) | Total energy dissipation (eV)
F2G | 2.66 · 10⁻³ | 2.92 · 10⁻²
FRG | 3.20 · 10⁻³ | 3.52 · 10⁻²
FG | 1.69 · 10⁻³ | 1.86 · 10⁻²
RUG | 3.55 · 10⁻³ | 3.90 · 10⁻²
UPPG | 3.55 · 10⁻³ | 3.90 · 10⁻²

[19] R. Sargazi, A. Akbari, P. Werle, H.
Borsi, A novel wideband partial discharge measuring circuit under fast repetitive impulses of static converters, Measurement, vol. 178 (2021), pp. 1-7. DOI: 10.1016/j.measurement.2021.109353
[20] T. N. Sasamal, A. K. Singh, U. Ghanekar, Toward efficient design of reversible logic gates in quantum-dot cellular automata with power dissipation analysis, International Journal of Theoretical Physics, vol. 57, no. 4 (2018), pp. 1167-1185. DOI: 10.1007/s10773-017-3647-5
[21] M. Abasi, A. Saffarian, M. Joorabian, S. Ghodratollah Seifossadat, Fault classification and fault area detection in GUPFC-compensated double-circuit transmission lines based on the analysis of active and reactive powers measured by PMUs, Measurement, vol. 169 (2021), pp. 1-34. DOI: 10.1016/j.measurement.2020.108499
[22] M. S. Latha Gade, S. Rooban, An efficient design of fault tolerant reversible multiplexer using QCA technology, 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3-5 December 2020, pp. 1274-1280. DOI: 10.1109/ICISS49785.2020.9315867
[23] A. Lay-Ekuakille, A. Massaro, S. P. Singh, I. Jablonski, M. Z. U. Rahman, F. Spano, Optoelectronic and nanosensors detection systems: a review, IEEE Sensors Journal, vol. 21, no. 11, art. no. 9340342 (2021), pp. 12645-12653. DOI: 10.1109/JSEN.2021.3055750
[24] P. Bilski, Analysis of the ensemble of regression algorithms for the analog circuit parametric identification, Measurement, vol. 160 (2021), pp. 1-9. DOI: 10.1016/j.measurement.2020.107829
[25] M. M. Abutaleb, Robust and efficient QCA cell-based nanostructures of elementary reversible logic gates, The Journal of Supercomputing, vol. 74, no. 11 (2018), pp. 6258-6274. DOI: 10.1007/s11227-018-2550-z
[26] S. Surekha, U. M. Z. Rahman, N. Gupta, A low complex spectrum sensing technique for medical telemetry system, Journal of Scientific and Industrial Research, vol. 80, no. 5 (2021), pp. 449-456. Online: http://nopr.niscair.res.in/handle/123456789/57696
[27] M. S. Latha Gade, S.
Rooban, Run time fault tolerant mechanism for transient and hardware faults in ALU for highly reliable embedded processor, International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 9-10 October 2020, pp. 44-49. DOI: 10.1109/ICSTCEE49637.2020.9277288
[28] F. Salimzadeh, S. R. Heikalabad, Design of a novel reversible structure for full adder/subtractor in quantum-dot cellular automata, Physica B: Condensed Matter, vol. 556 (2019), pp. 163-169. DOI: 10.1016/j.physb.2018.12.028

Progress towards in-situ traceability and digitalization of
temperature measurements

ACTA IMEKO, ISSN: 2221-870X, March 2023, Volume 12, Number 1, 1-6

Progress towards in-situ traceability and digitalization of temperature measurements
Jonathan Pearce¹, Radka Veltcheva¹, Declan Tucker¹, Graham Machin¹
¹ National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, United Kingdom

Section: Research paper
Keywords: temperature; thermometry; traceability; primary thermometry; process control; digitalization
Citation: Jonathan Pearce, Radka Veltcheva, Declan Tucker, Graham Machin, Progress towards in-situ traceability and digitalization of temperature measurements, Acta IMEKO, vol. 12, no. 1, article 4, March 2023, identifier: IMEKO-ACTA-12 (2023)-01-04
Section editor: Daniel Hutzschenreuter, PTB, Germany
Received November 7, 2022; in final form March 24, 2023; published March 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Jonathan Pearce, e-mail: jonathan.pearce@npl.co.uk

Abstract. Autonomous control systems rely on input from sensors, so it is crucial that the sensor input is validated to ensure that it is 'right' and that the measurements are traceable to the International System of Units. The measurement and control of temperature is widespread, and its reliable measurement is key to maximising product quality, optimising efficiency, reducing waste and minimising emissions such as CO2 and other harmful pollutants. Degradation of temperature sensors in harsh environments such as high temperature, contamination, vibration and ionising radiation causes a progressive loss of accuracy that is not apparent. Here we describe some new developments to overcome the problem of 'calibration drift', including self-validating thermocouples and embedded phase-change cells, which self-calibrate in situ by means of a built-in temperature reference, and practical primary thermometers such as the Johnson noise thermometer, which measure temperature directly and do not suffer from calibration drift. All these developments will provide measurement assurance, which is an essential part of digitalisation to ensure that sensor output is always 'right', as well as providing essential 'points of truth' in a sensor network. Some progress in digitalisation of calibrations to make them available to end-users via a website and/or an application programming interface is also described.

1. Introduction
The control and monitoring of temperature is a key part of almost every technological process. The thermodynamic temperature of a system is related to the average kinetic energy of the constituent particles of the system. However, this cannot be measured directly, so another parameter which varies with temperature, such as the speed of sound in a gas, must be measured and then related to the temperature through well-understood physics. In general, such an approach to temperature measurement is very complicated, time consuming and expensive, and is not currently well suited to practical thermometry. Most thermometry therefore makes use of practical sensors such as thermocouples and resistance thermometers. These yield a temperature-dependent property such as voltage or resistance, which must then be related to temperature by comparison with a set of known temperatures, i.e. a calibration. The global framework for approximating the SI unit of temperature, the kelvin, is the International Temperature Scale of 1990 (ITS-90) [1]. The measurement infrastructure that makes this possible is maintained by national metrology institutes (NMIs), which perform periodic global comparisons of their own standards to ensure the equivalence of thermometry worldwide. These standards are then used to provide calibrations to end-users which are traceable to the ITS-90, and hence to the SI kelvin. In this way an end-user may be confident that their temperature measurements are globally equivalent. A key drawback of this empirical approach to thermometry is that, when the sensing region of the thermometer is degraded in use, for example by exposure to high temperatures, contamination, vibration, ionising radiation and other factors, the relationship between the thermometer output and its temperature changes in an unknown way. This is referred to as 'calibration drift', and it is insidious because there is no indication in process that it is occurring. This is a big problem for applications where temperature monitoring and control is critical, such as in long-term monitoring (e.g. nuclear waste storage), or where processes need to operate within a narrow temperature window (e.g. aerospace heat treatment). The result is often reduced safety margins, sub-optimal processing, lower efficiency, increased emissions, and higher product waste or rejection. In this article some new developments led by the UK's National Physical Laboratory (NPL), in collaboration with industry partners, are described to overcome the problem of calibration drift and to provide assurance that temperature sensor output is valid, a key part of increasingly widespread digitalisation to ensure sensor output is 'right' by providing in-situ validation. These include self-validation and in-process calibration, which provide traceability to the SI kelvin at the point of measurement, and practical primary thermometry, which measures temperature directly, has no need for calibration, and does not suffer from calibration drift.
Additionally, other promising practical primary thermometry techniques are outlined, namely Doppler broadening thermometry, ring resonator thermometry, and whispering gallery mode thermometry. These are collectively referred to as 'photonic thermometers' due to their use of electromagnetic radiation. Acoustic thermometry is also briefly discussed. Various groups worldwide, including NPL, are working on raising the technology readiness of these techniques. Finally, some developments in the digitalisation of calibrations are described, including automation, web-based access, and steps towards implementation of a standardised digital calibration certificate. These will substantially reduce the amount of paperwork and the opportunities for operator error, and will facilitate digital transfer of calibrations and traceability for paperless audit trails.

2. Self-validation
Thermocouples are very mature, well established and widely used in industry. However, they are particularly susceptible to calibration drift in harsh environments, whereby the relationship between emf and temperature changes in an unpredictable manner. This gives rise to a progressive, and unknown, temperature measurement error, which in turn degrades process monitoring and control. Drift can be monitored in situ by using a miniature phase-change cell (fixed point) in close proximity to the measurement junction (tip) of the thermocouple [2]. The fixed point is a very small crucible containing an ingot of metal (or metal-carbon alloy [3] or organic material [4]) with a known melting temperature. The latest devices developed by NPL are able to accommodate the entire thermocouple and fixed-point assembly within a protective sheath of outer diameter 7 mm; the cell is typically about 4 mm in diameter and 10 mm in length. Importantly, this means that the self-validating thermocouple presents the same external form factor and appearance as a regular process-control thermocouple.
It is also, of course, fully compatible with existing connections and electronics. A self-validating thermocouple is shown in Figure 1. In use, when the process temperature being monitored passes through the melting temperature of the ingot, the thermocouple output exhibits a 'plateau' during melting: the heat of fusion of the ingot restrains further temperature rise by absorbing incoming heat from the surroundings, driving the phase change. Once the ingot is completely melted, the indicated temperature resumes its upward trend. As the melting temperature of the ingot is known, having been traceably calibrated a priori, the thermocouple can be recalibrated in situ. The stability of the melting temperature of the miniature fixed point is important to consider, since drift there would inadvertently introduce further calibration drift. In fact, the melting temperature is inherently stable, and it has been shown experimentally during the development of the devices that in typical applications the drift of the fixed point itself is negligible in the context of thermocouple measurement uncertainties. Contamination is by far the most likely cause of drift; as a general rule of thumb, 1 part per million of contamination by impurities gives rise to about 0.001 °C change in the melting temperature. So far, no evidence of measurable drift of the miniature fixed points has been found, even in quite harsh environments such as aerospace heat-treatment processes. Calculations indicate that contamination by transmutation in ionising radiation environments is even less important in most situations, although the extreme case of operation in the core of a nuclear reactor may cause significant drift [5]. A typical output of a self-validating thermocouple during the recalibration process is shown in the lower panel of Figure 1.
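The in-situ recalibration step amounts to comparing the indicated plateau temperature with the known melting temperature of the ingot. A minimal sketch, assuming a hypothetical `recalibration_offset` helper (this is not NPL's actual procedure; only the gold melting temperature is taken from the text):

```python
# Known melting temperature of the built-in gold ingot (from the Figure 1 caption)
KNOWN_MELT_TEMP_C = 1064.18

def recalibration_offset(indicated_plateau_temp_c, reference_temp_c=KNOWN_MELT_TEMP_C):
    """Correction to add to subsequent readings after in-situ recalibration:
    reference temperature minus the temperature indicated during the melt."""
    return reference_temp_c - indicated_plateau_temp_c

# Hypothetical example: the thermocouple indicates 1066.3 degC during the gold
# melt, so it reads high and subsequent readings should be corrected downwards.
offset = recalibration_offset(1066.3)
print(round(offset, 2))  # -2.12
```

In practice the correction would feed into the full emf-temperature relationship rather than a single additive offset, but the principle of anchoring the reading to the known fixed point is the same.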
This device has been extensively characterised [6] and has been licensed by NPL to the UK thermocouple manufacturer CCPI Europe under the trade name INSEVA [7]; CCPI Europe is conducting a series of trials in high-value manufacturing industries at several plants in the UK and in Europe. Typical fixed-point materials for these applications include Ag (962 °C), Au (1064 °C), Cu (1084 °C), Fe-C (1153 °C) and Co-C (1324 °C).

Figure 1. Top: self-validating thermocouple with protective sheath (image courtesy of CCPI Europe). Bottom: melting curve (indicated temperature versus time) observed during the recalibration of the INSEVA self-validating thermocouple, here using a gold ingot (melting temperature 1064.18 °C).

Acta IMEKO | www.imeko.org | March 2023 | Volume 12 | Number 1 | 3

A similar concept has been employed for an application in space-borne instrumentation, where the phase-change cell is part of the system whose temperature is to be measured. Such an embedded fixed point has been demonstrated by NPL, in collaboration with RAL Space, on a prototype blackbody calibrator designed for operation as part of a spacecraft-borne Earth observation instrument suite [5]. The phase-change cell, containing approximately 2 g of gallium (melting point 29.7646 °C), is embedded in the aluminium blackbody calibrator base, close to an embedded platinum resistance thermometer (PRT). This enables the in-situ recalibration of the PRT in orbit. In this application, key developments included a mechanism to promote reliable freezing of the gallium without necessitating a large supercool (gallium is prone to cooling several degrees below its freezing temperature before nucleation is triggered), and a mechanism for preventing mechanical contact between the gallium ingot and the stainless steel cell wall, thereby avoiding the possibility of long-term contamination of the ingot and hence a change of its melting temperature.
The ingot is shown in Figure 2. The lower panel of Figure 2 shows that the remotely located PRT indicates clearly defined melting curves with a useful duration of several hours and a melting temperature range of less than 0.01 °C. By calibrating the phase-change cell against NPL's reference standard gallium cell, it is possible to perform in-situ traceable calibrations of the PRT on board the spacecraft with an expanded uncertainty of less than 0.01 °C.

For both the self-validating thermocouples and the embedded phase-change cells, vigorous efforts are ongoing to automate the detection of the melting plateau and, once detected, to characterise the 'fixed point' representing the invariant part of the melting curve. This is challenging to implement algorithmically in a manner sufficiently robust against noise and spurious artefacts in the data, but it is essential for autonomous in-situ recalibration. NPL has had some success with a supervised (machine) learning approach on training data obtained from an industrial trial of self-validating thermocouples in a heat treatment application, which yielded a large, high-quality data set. The resulting algorithm was then shown to work well on data outside the training set, yielding typical expanded uncertainties in the melting point determination of about 0.5 °C (gold fixed point) and 1.0 °C (silver fixed point). Here and in the following, expanded uncertainties correspond to a coverage factor k = 2, i.e. a coverage probability of 95 %. It is unlikely that these uncertainties will be further reduced by improvements to the algorithm, because they are dominated by experimental considerations associated with the physical measurement setup. Instead, algorithm development should focus on reliability and on the ability to characterise the plateau under adverse conditions such as noise, spurious artefacts, and faint signals.
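To make the plateau-detection task concrete, the toy sketch below flags the longest low-slope run in a synthetic melting curve. This is a simple derivative-threshold heuristic for illustration only, not the machine-learning approach described in the text, and the thresholds and curve shape are invented:

```python
import numpy as np

# Toy plateau detector: flag samples where the heating rate drops below a
# threshold, then keep the longest such run. Illustrative heuristic only,
# NOT the supervised-learning algorithm described in the text.

def find_plateau(t, temp, max_slope=0.02):
    """Return (start, end) indices of the longest low-slope run."""
    slope = np.abs(np.gradient(temp, t))
    flat = slope < max_slope                 # candidate plateau samples
    best, run_start, best_run = (0, 0), None, 0
    for i, f in enumerate(flat):
        if f and run_start is None:
            run_start = i
        if (not f or i == len(flat) - 1) and run_start is not None:
            end = i + 1 if f else i
            if end - run_start > best_run:
                best_run, best = end - run_start, (run_start, end)
            run_start = None
    return best

# Synthetic melt curve: ramp up, hold near a gold-like 1064 degC, slow ramp.
t = np.arange(0.0, 300.0, 1.0)
temp = np.where(t < 100, 1000 + 0.64 * t,
       np.where(t < 200, 1064.0, 1064.0 + 0.05 * (t - 200)))
start, end = find_plateau(t, temp)
print(round(float(np.mean(temp[start:end])), 1))  # 1064.0
```

A heuristic like this fails precisely in the noisy, artefact-laden conditions the text describes, which is what motivates the learning-based approach.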
A demonstrable 'all-weather' capability is therefore needed. Hence, while the new machine learning algorithm shows promise, it will need to be tested on a diverse set of data to demonstrate its universal applicability. The detection of characteristic shapes such as melting curves is essentially a pattern recognition problem. This is easy for humans, thanks to the extraordinary sophistication of the visual cortex, but it is not practical with conventional programming approaches; in general, machine learning or other artificial intelligence techniques are needed, together with good-quality data for development and validation of the techniques.

Figure 2. Top: phase-change cell embedded in the blackbody calibrator base; the adjacent PRT is to the left; the inset shows a photograph of the phase-change cell. Bottom: melting curves (temperature versus elapsed time) observed during the in-situ calibration of the PRT using the miniature embedded phase-change cell, showing the narrow melting range and excellent reproducibility.

Figure 3. Prototype practical Johnson noise thermometer developed by Metrosol in collaboration with NPL. The sensing electronics are housed in the container to the right; the probe extends to the left.

3. Practical primary thermometry

The limitations of conventional temperature sensors, which rely on calibration prior to use and hence are prone to calibration drift, have led to renewed interest in practical primary thermometry. Primary thermometers measure some property that can be related to temperature directly through well-understood physics, and do not require a temperature scale or calibration. In addition, if all parameters needed to infer the temperature are measured simultaneously, the sensor is not subject to calibration drift, since any change in the sensor material is accounted for in the measurement. Examples include acoustic thermometry (measuring the speed of sound) and Johnson noise thermometry (measuring the temperature-dependent voltage arising from the thermal motion of charge carriers in a resistor). To turn one of these into a practical, commercially available reality, NPL has been collaborating with Metrosol Limited to develop a practical Johnson noise thermometer [8]-[10]. The Johnson noise voltage is related to temperature, T, by Nyquist's relation:

⟨V_T²⟩ = 4 k T R Δf ,    (1)

where ⟨V_T²⟩ is the mean squared Johnson noise voltage, k is the Boltzmann constant, R is the sensor resistance and Δf is the frequency bandwidth, which is a function of the sensing electronics and cables. Importantly, if R is measured at the same time as the Johnson noise voltage, then all relevant properties of the sensing resistor are measured, and so even if the sensor is degraded the thermodynamic temperature is always known. The Johnson noise voltage is minuscule, and measuring it requires robust immunity to electromagnetic and electronic interference arising from both external and internal influences. This is achievable by good design. A key challenge is the need for very high amplification of the noise signal. Its measurement in the presence of the inevitable electrical noise generated by the pre-amplifiers can be accomplished by correlation, whereby the signal is split into two channels and only the component common to both (i.e. the Johnson noise) is 'let through'. A drawback of this approach with conventional designs is that it results in excessively long correlation times, of minutes to hours depending on the required uncertainty.
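To get a feel for why such long averaging is needed, Nyquist's relation (1) can be evaluated numerically. The values below (a 100 Ω sensor at room temperature over a 1 MHz bandwidth) are illustrative examples, not the parameters of the Metrosol design:

```python
import math

# Illustrative evaluation of Nyquist's relation (1): RMS Johnson noise
# voltage sqrt(4 k T R df). All values are examples, not a real design.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # sensor temperature, K
R = 100.0          # sensor resistance, ohm
delta_f = 1.0e6    # measurement bandwidth, Hz

v_rms = math.sqrt(4 * k * T * R * delta_f)
print(f"{v_rms * 1e6:.2f} microvolts")  # prints "1.29 microvolts"
```

A signal of order a microvolt must be extracted from amplifier and environmental noise that can be orders of magnitude larger, hence the reliance on correlation and averaging.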
On the other hand, industrial measurements typically require timescales of a few seconds. Using the Nyquist relation (1) requires knowledge of the bandwidth, which in practice is unknowable. The equation is therefore generally used in ratio form at two temperatures: the sensor temperature to be determined and a known reference temperature. The Nyquist relation can then be expressed as:

T = T₀ (V / V₀)² (R₀ / R) ,    (2)

where T₀, V₀ and R₀ are the reference temperature, Johnson noise voltage and resistance, respectively. In general it is very inconvenient to maintain a known reference temperature, because it is impossible to match the frequency response of the two measurement circuits, and the resulting mismatch causes excessive measurement errors over the frequency range required for the fast response times targeted here. Various approaches have been employed to overcome this, including the use of a synthesised noise signal from, for example, a Josephson array. While extremely accurate, that approach is not feasible here, as it requires complicated low-temperature equipment. The NPL/Metrosol collaboration instead uses a quasi-random synthetic reference signal generated a priori. This reference signal is superimposed on the measurement signal, so that both experience the same frequency response of the measurement electronics. The composite signal (a superposition of Johnson noise and calibration 'tones') can then be decomposed by signal processing in the frequency domain, and the ratio of the two components determined in order to deduce the temperature of the sensing resistor. A further advantage of this scheme is its high tolerance to a non-flat, non-linear frequency response, so a much higher bandwidth (up to 1 MHz) can be employed than in previous systems. This translates directly into shorter measurement times and hence faster response times, since more signal can be averaged in the same amount of time.
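The ratio form (2) can be exercised with invented numbers to show how simultaneous measurement of V and R makes the result immune to sensor degradation; none of the values below are measured data:

```python
# Illustrative use of the ratio form (2): T = T0 * (V/V0)**2 * (R0/R),
# consistent with <V^2> = 4kTR*df in (1). All numbers are invented.
T0 = 300.0     # reference temperature, K
V0 = 1.29e-6   # RMS noise voltage at the reference point, V
R0 = 100.0     # sensor resistance at the reference point, ohm

# A later measurement: the noise voltage has risen and the sensor
# resistance has drifted upward. Because R is measured at the same time
# as V, the drifted resistance is accounted for in the result.
V = 1.50e-6
R = 105.0

T = T0 * (V / V0) ** 2 * (R0 / R)
print(f"{T:.1f} K")  # prints "386.3 K"
```

Had R been assumed fixed at its calibration value, the drift from 100 Ω to 105 Ω would have propagated directly into a temperature error.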
Johnson noise thermometry has until recently been the preserve of large national laboratories, owing to the extreme difficulty of isolating the minuscule Johnson noise voltage from the far larger external noise sources and the internal noise generated by the electronic components [11]. A practical thermometer has proved elusive, and so far none has reached market, but the NPL/Metrosol collaboration has now developed a working thermometer with unprecedented immunity to external electrical interference. The current prototype, shown in Figure 3, has passed the most stringent electrical immunity standard test, IEC 61000-4-3 [12]. The accuracy depends on the measurement duration; for an averaging period of about 5 s the expanded measurement uncertainty is ± 0.5 °C. The most obvious application is as a replacement for thermocouples where appreciable long-term drift is unacceptable. Efforts are now focused on extending the maximum temperature range beyond about 150 °C and improving the electronics and signal processing.

Further developments in the pipeline include demonstrating the feasibility of photonic 'lab on a chip' thermometry approaches for in-situ traceability to the kelvin. Three approaches in various stages of investigation by NPL and its collaborators to facilitate direct in-situ traceability are Doppler broadening, ring-resonator, and whispering gallery thermometry [13]. Doppler broadening thermometry (DBT) is based on the measurement of the Doppler profile of a molecular or atomic absorption line of a gas in thermodynamic equilibrium. At low pressure the absorption line shape is dominated by Doppler broadening and has a Gaussian profile corresponding to the Maxwell-Boltzmann distribution of velocities of the gas particles along the laser beam axis.
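The width of that Gaussian is the thermometric signal: the Doppler half-width at half-maximum scales as (ν₀/c)·√(2 ln 2 · kT/M). A rough numeric check is sketched below; the choice of absorber (a CO₂ line near 1.57 µm) and the conditions are assumptions for illustration, not the NPL implementation:

```python
import math

# Illustrative Doppler half-width at half-maximum for a gas absorption
# line: dnu_D = (nu0 / c) * sqrt(2 ln2 * k T / M). The line choice (CO2
# near 1.57 um) and conditions are assumptions for illustration only.
k = 1.380649e-23        # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s
u = 1.66053906660e-27   # atomic mass unit, kg

T = 296.0               # gas temperature, K
M = 44.0 * u            # absorber mass (CO2), kg
nu0 = c / 1.57e-6       # line-centre frequency, Hz

hwhm = (nu0 / c) * math.sqrt(2 * math.log(2) * k * T / M)
print(f"{hwhm / 1e6:.0f} MHz")  # order of 100-200 MHz: resolvable and T-dependent
```

Widths of this order are readily resolved with tunable diode lasers, and because the relation involves only fundamental constants and the absorber mass, the temperature follows without any calibration.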
In practice, various physical effects such as collisions distort the line profile somewhat, but the theory of this is well understood and the absorption line shape may be fitted with a parameterised model. The Doppler half-width at half-maximum, Δν_D, is related to the temperature T by:

Δν_D = (ν₀ / c) √(2 ln 2 · k T / M) ,    (3)

where ν₀ is the line-centre frequency, c is the speed of light, and M is the absorber mass. Two key challenges currently being addressed are a) reducing the amount of ancillary equipment needed to implement the technique and b) miniaturisation of the sensing element.

Ring-resonator (RR) thermometry essentially utilises a closed-loop optical waveguide which is optically coupled, via the evanescent field across an air gap, to a second, adjacent, non-closed waveguide. The 'ring' or 'loop' supports circulating electromagnetic waves with characteristic resonances at wavelengths λ_m given by:

m · λ_m = n_eff · L ,    (4)

where the integer m is the resonance mode number, n_eff is the effective refractive index of the waveguide and L is the round-trip length of the loop. The temperature dependence of the refractive index and of the physical dimensions of the ring enables the device to be used as a thermometer, by measuring the temperature-dependent shift in the resonance wavelength given by (4). In practice, the change in refractive index per unit temperature is roughly a factor of 100 larger than the thermal expansion coefficients of the materials involved, so the latter may be ignored. The technique is readily miniaturised and has good resistance to chemical contamination. It also offers some of the lowest uncertainties of all the practical primary thermometry techniques, although great care is required during fabrication to avoid imperfections.

Whispering gallery mode (WGM) thermometers trace their ancestry to precision clock oscillators. In essence they are stable microwave resonators in which a symmetric dielectric medium, such as a cylinder or disk, is suspended in the centre of a metal cavity. The electromagnetic field in the microwave region is coupled to an external waveguide to excite the resonant frequencies. The frequencies of these resonant 'whispering gallery' modes exhibit a temperature dependence and may be related to temperature through an understanding of the associated physics, which enables the device to be used for thermometry.

Acoustic gas thermometry is also a candidate for practical primary thermometry. The speed of sound in a gas depends on the temperature and may be related to it through well-understood physics. By using an acoustic resonator which 'rings' like a bell when excited appropriately with loudspeakers, and by characterising the resulting changes in the geometry of the device using microwaves to understand the resonant modes, an extremely accurate thermometer can be constructed. Such a device was used to determine the Boltzmann constant with unmatched accuracy as part of the global endeavour to redefine the kelvin in terms of fundamental constants [14].

4. Digitalisation of calibrations

For many years the results of thermometer calibrations have been printed on paper and issued to the customer. Recently, however, there has been a trend towards digitalisation of calibrations, so that the results are available online or in electronic files. This is important functionality for many users; for example, in aerospace organisations, where measurements are subject to significant regulatory compliance, and demonstration thereof, under frameworks such as AMS2750, which regulates heat treatment of metallic materials [15], it is very difficult to work with paper certificates. One successful approach has been that of CCPI Europe, who have fully automated certification with their PyroTag™ system [16].
Digitalisation of calibrations has numerous benefits, including the reduction of operator errors (e.g. in manual data entry), removal of the need for paper-based processes and transactions, and easier management for asset managers, calibration managers, and technical staff. A paperless system reduces time and cost, offers secure storage and retrieval of information, and is audit-ready for demonstrating traceability compliance. It also presents some infrastructural challenges, including how the data is presented, the internal mechanisms in the calibration laboratory for enabling digitalisation, and the information security required to ensure that only the intended recipients have access. NPL has embarked on a programme to automate, as far as possible, its thermometer calibrations and the generation of calibration certificates, and to make them, together with the associated data and metadata, available online via a secure website. The certificates will be machine readable (XML). The results will also be available through an application programming interface (API), allowing integration with customers' own software. A key aim is ultimately to integrate this capability with the international digital calibration certificate (DCC), whose format is currently under development [17]. Importantly, the calibration history will also be available to the user, and the DCC offers the possibility of facilitating autonomous updating of calibration data. This could be exploited by the techniques described in this paper, particularly self-validating techniques, which provide a live update of the calibration in situ. Updating one point in the calibration generally has an effect not only at the temperature at which the self-calibration is performed, but on the interpolating function over a wider temperature range.
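The point of a machine-readable (XML) certificate is that software, not a human reader, consumes the results. The toy fragment below shows the idea with Python's standard XML library; the element names are invented for illustration and are not the DCC schema, which, as noted, is still under development:

```python
import xml.etree.ElementTree as ET

# Toy machine-readable calibration certificate. Element names are
# invented for illustration; the real DCC schema (PTB) differs.
cert = ET.Element("calibrationCertificate", id="EXAMPLE-001")
ET.SubElement(cert, "instrument").text = "Type S thermocouple"
results = ET.SubElement(cert, "results")
for t_nominal, correction in [(419.527, 0.12), (961.78, 0.31)]:
    pt = ET.SubElement(results, "point", unit="degC")
    ET.SubElement(pt, "temperature").text = str(t_nominal)
    ET.SubElement(pt, "correction").text = str(correction)

xml_text = ET.tostring(cert, encoding="unicode")

# Being machine readable, the same data can be parsed back without manual
# transcription, e.g. by asset-management software consuming an API.
parsed = ET.fromstring(xml_text)
print(parsed.find("results/point/temperature").text)  # prints "419.527"
```

Round-tripping the data this way is what removes the manual data-entry step in which operator errors typically arise.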
The updated calibration could be passed to the associated DCC, which could then be updated to include the new parameters and to correct the interpolating function over the wider temperature range of use. Clearly such a mechanism does not yet exist, but it is functionality that should be considered in the formulation of the DCC format and its implementation. This approach may also be applicable to practical primary thermometry, although in that case the role of calibration certificates more broadly, and even the role of national metrology institutes in providing traceability in this regime, is currently not well defined.

5. Conclusions

Some new developments in temperature measurement have been presented which support digitalisation in various respects. Self-validation techniques, using miniature temperature fixed points based on phase-change cells to provide in-situ traceability at the point of measurement, will provide assurance that the temperature sensor output is 'always right'. Practical primary thermometry measures temperature directly, rather than requiring calibration with the attendant risk of calibration drift in harsh environments, and so ensures long-term reliable measurements; examples outlined here include Johnson noise thermometry, Doppler broadening thermometry, ring resonator thermometry, whispering gallery mode thermometry and acoustic thermometry. For conventional sensors, digitalisation of calibrations at NPL is becoming a practical reality, with a web-based interface, an associated API to enable end users to access calibration data programmatically, and steps towards a standardised digital calibration certificate format. These developments all support the digitalisation of metrology and will increase the reliability of measurements, improving process efficiency and product yield, with a consequent reduction in harmful emissions. Future work will focus on elevating the technology readiness of these innovations and bringing them to market.
Acknowledgement

We would like to thank Trevor Ford, Peter Cowley and Phill Williams of CCPI Europe Ltd for contributions on the self-validating thermocouples; Dan Peters and Dave Smith of RAL Space for contributions on the embedded phase-change cell; Paul Bramley and David Cruickshank of Metrosol Ltd for contributions on the Johnson noise thermometry; Sam Bilson and Andrew Thompson of NPL for contributions on machine learning approaches to automation of self-validating thermocouple calibrations; and Deepthi Sundaram and Stuart Chalmers of NPL for contributions on the digitalisation of calibration certificates.

References

[1] H. Preston-Thomas, The International Temperature Scale of 1990 (ITS-90), Metrologia, vol. 27, 1990, pp. 3-10. DOI: https://doi.org/10.1088/0026-1394/27/1/002
[2] J. V. Pearce, O. Ongrai, G. Machin, S. J. Sweeney, Self-validating thermocouples based on high temperature fixed points, Metrologia, vol. 47, 2010, pp. L1-L3. DOI: https://doi.org/10.1088/0026-1394/47/1/L01
[3] G. Machin, Twelve years of high temperature fixed point research: a review, AIP Conf. Proc., vol. 1552, 2013, p. 305. DOI: https://doi.org/10.1063/1.4821383
[4] E. Webster, D. Clarke, R. Mason, P. Saunders, D. R. White, In situ temperature calibration for critical applications near ambient, Meas. Sci. Technol., vol. 31(4), 2020, p. 044006. DOI: https://doi.org/10.1088/1361-6501/ab5dd1
[5] J. V. Pearce, R. I. Veltcheva, D. M. Peters, D. Smith, T. Nightingale, Miniature gallium phase-change cells for in situ thermometry calibrations in space, Meas. Sci. Technol., vol. 30, 2019, p. 124003. DOI: https://doi.org/10.1088/1361-6501/aad8a8
[6] D. Tucker, A. Andreu, C. J. Elliott, T. Ford, M. Neagu, G. Machin, J. V. Pearce, Integrated self-validating thermocouples with a reference temperature up to 1329 °C, Meas. Sci. Technol., vol. 29(10), 2018, p. 105002.
DOI: https://doi.org/10.1088/1361-6501/aad8a8
[7] CCPI Europe, INSEVA thermocouple licence signing. Online [accessed 24 March 2023] https://ccpi-europe.com/2018/05/22/inseva-thermocouplelicense-signing/
[8] P. Bramley, D. Cruickshank, J. V. Pearce, The development of a practical, drift-free, Johnson-noise thermometer for industrial applications, Int. J. Thermophys., vol. 38, 2017, p. 25. DOI: https://doi.org/10.1007/s10765-016-2156-8
[9] P. Bramley, D. Cruickshank, J. Aubrey, Developments towards an industrial Johnson noise thermometer, Meas. Sci. Technol., vol. 31, 2020, p. 054003. DOI: https://doi.org/10.1088/1361-6501/ab58a6
[10] http://www.johnson-noise-thermometer.com
[11] J. F. Qu, S. P. Benz, H. Rogalla, W. L. Tew, D. R. White, K. L. Zhou, Johnson noise thermometry, Meas. Sci. Technol., vol. 30, 2019, p. 112001. DOI: https://doi.org/10.1088/1361-6501/ab3526
[12] IEC 61000-4-3:2020, Electromagnetic compatibility (EMC) - Part 4-3: Testing and measurement techniques - Radiated, radio-frequency, electromagnetic field immunity test. Online [accessed 24 March 2023] https://webstore.iec.ch/publication/59849
[13] S. Dedyulin, Z. Ahmed, G. Machin, Emerging technologies in the field of thermometry, Meas. Sci. Technol., vol. 33, 2022, 092001. DOI: https://doi.org/10.1088/1361-6501/ac75b1
[14] J. Fischer, et al., The Boltzmann project, Metrologia, vol. 55, 2018, pp. R1-R20. DOI: https://doi.org/10.1088/1681-7575/aaa790
[15] AMS2750F is an aerospace manufacturing standard covering temperature sensors, instrumentation, thermal processing equipment, correction factors and instrument offsets, system accuracy tests, and temperature uniformity surveys. These are necessary to ensure that parts or raw materials are heat treated in accordance with the applicable specification(s). Online [accessed 24 March 2023] https://www.sae.org/standards/content/ams2750f/
[16] CCPI, Pyro Tag. Online [accessed 24 March 2023] https://ccpi-europe.com/resources/pyro-tag/
[17] S. Hackel, F. Härtig, J. Hornig, T.
Wiedenhöfer, The digital calibration certificate, PTB-Mitteilungen 127 (2017). DOI: https://doi.org/10.7795/310.20170403. See also PTB's DCC website. Online [accessed 24 March 2023] https://tinyurl.com/ycksrc2t

Towards the development of a cyber-physical measurement system (CPMS): case study of a bioinspired soft growing robot for remote measurement and monitoring applications

Acta IMEKO | ISSN: 2221-870X | June 2021 | Volume 10 | Number 2 | 104-110

Stanislao Grazioso¹, Annarita Tedesco², Mario Selvaggio³, Stefano Debei⁴, Sebastiano Chiodini⁴
¹ Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
² IMS Laboratory, University of Bordeaux, Bordeaux, France
³ Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
⁴ Department of Industrial Engineering, University of Padova, Padova, Italy

Section: Research paper
Keywords: 4.0 era; soft growing robots; remote monitoring; monitoring systems; remote sensing
Citation: Stanislao Grazioso, Annarita Tedesco, Mario Selvaggio, Stefano Debei, Sebastiano Chiodini, Towards the development of a cyber-physical measurement system (CPMS): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, Acta IMEKO, vol. 10, no. 2, article 15, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-15
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received May 4, 2021; in final form May 11, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Stanislao Grazioso, e-mail: stanislao.grazioso@unina.it

Abstract: The most effective expression of the 4.0 era is represented by cyber-physical systems (CPSs). Historically, measurement and monitoring systems (MMSs) have been an essential part of CPSs; however, by introducing the 4.0 enabling technologies into MMSs, an MMS can evolve into a cyber-physical measurement system (CPMS). Starting from this consideration, this work reports a preliminary case study of a CPMS, namely an innovative bioinspired robotic platform that can be used for measurement and monitoring applications in confined and constrained environments. The innovative system is a 'soft growing' robot that can access a remote site through controlled lengthening and steering of its body via a pneumatic actuation mechanism. The system can be endowed with different sensors at the tip, or along its body, to enable remote measurement and monitoring tasks; as a result, the robot can be employed to effectively deploy sensors in remote locations. In this work, a digital twin of the system is developed for simulation of a practical measurement scenario. The ultimate goal is to achieve a self-adapting, fully/partially autonomous system for remote monitoring operations to be used reliably and safely for the inspection of unknown and/or constrained environments.

1. Introduction

The 4.0 era is characterised by an innovative multidisciplinary approach which addresses technical challenges by seeking transverse solutions to both technological and methodological problems [1]-[8]. The most effective expression of the 4.0 paradigm is represented by cyber-physical systems (CPSs), i.e. smart systems that include engineered interacting networks of physical and computational components, able to monitor and control the physical environment [9]-[14]. From this definition, it appears that measurement and monitoring systems (MMSs) are essential for the implementation of CPSs. However, MMSs are generally considered subordinate elements of a CPS: they provide sources of information for the CPS (i.e. connection to the physical world) but do not participate in its higher-level actions (i.e. conversion, cyber, cognition, configuration) [15]. MMSs have historically been seen as responsible for sensing conditions in the physical environment, rather than as providers of higher-level intelligent information.

However, the suitable adoption of the enabling technologies can reshape the role of MMSs in the 4.0 era. This requires a fundamental change of perspective, in which the enabling technologies stop being employed as external superstructures for MMSs and become embedded solutions, intrinsically present in the architecture of the MMS and fully effective through an adequate metrological configuration. This approach emphasises the holistic nature of monitoring and paves the way for 4.0-transition-driven monitoring systems. Through the wise adoption of the 4.0 enabling technologies, MMSs can turn into self-aware, self-conscious, self-maintained entities, able to generate highly valued insights, just like CPSs. This means that MMSs can evolve into cyber-physical measurement systems (CPMSs), thus becoming a proactive expression of the 4.0 era, strengthening not only the role of measurement but the performance of the overall 4.0 ecosystem. Starting from these considerations, this work introduces a definition for CPMSs and presents a preliminary case study, i.e. a 'soft growing' robot for measurement and monitoring applications in constrained and confined remote environments. This system is a concrete example of a CPMS for its ability to sense itself within the environment (self-awareness), to adapt itself to the surrounding environment thanks to its softness and its growth and steering capabilities (self-configuration), and to autonomously predict a navigation path towards the inspection target (self-prediction). Embedding multiple 4.0 enabling technologies in one measurement and monitoring system represents a promising solution for achieving the CPMS capabilities. This work is an extension of our previous conference paper [16], where the general idea was preliminarily introduced.

This paper is organised as follows. In Section 2, the state of the art of robotic technologies is presented, with a focus on remote measurement and monitoring applications in difficult-to-reach environments. In Section 3, the definition of the CPMS is introduced. Section 4 covers the design and implementation of the soft growing robot and shows a simulated measurement and monitoring scenario of the CPMS in a remote location. Finally, conclusions are drawn and future work is outlined.

2. Background

As is well known, robotic technologies are widely used for carrying out remote measurement tasks, especially in environments that are hazardous, dangerous or difficult for humans to reach [17], [18]. Nevertheless, there are several applications in which traditional rigid-bodied robot technologies cannot be used. This occurs, for example, when measurements must be carried out in confined, constrained, or unknown environments (e.g. inspection of difficult-to-reach industrial environments, exploration of archaeological sites, which are often inaccessible and fragile). One technological solution suitable for this kind of task is soft continuum robots [19], i.e. robots with a continuously deformable mechanical structure, whose design is inspired by the principles of shaping, movement, sensing and control of soft biological systems [20]. The literature offers several examples of soft continuum robots for remote measurement applications in different fields: space, airlines, nuclear, marine (inspection and maintenance), medical (minimally invasive surgery), and so on [21]. One limitation of soft continuum robots is their limited workspace, as they usually have a fixed base and a pre-established length; this can be a problem for tasks that require inspection and exploration of large environments. To overcome this issue, soft continuum robots can be endowed with locomotion capabilities, using tethered/untethered fluidic or cable-driven actuators, taking inspiration from animal movements (snakes, earthworms, caterpillars) [22]-[24]. However, this solution involves relative movement between the robot and the environment, which can lead to low energy efficiency when high sliding friction is present. A recent design solution for soft continuum robots achieves enhanced mobility through growth rather than locomotion, taking inspiration from the growing process of plants and vines [25].
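A common way to reason about such a growing, steering body is the constant-curvature approximation: a segment of length L bent with uniform curvature κ places its tip at x = (1 − cos κL)/κ, y = (sin κL)/κ in the bending plane. The sketch below is a generic illustration of that model, not the authors' implementation, and all parameter values are invented:

```python
import math

# Constant-curvature sketch of a growing, steering soft-robot segment.
# Generic illustration only (parameters invented): planar tip position
# for a segment of length L bent with uniform curvature kappa.
def tip_position(length_m: float, kappa: float) -> tuple:
    """Planar tip coordinates (x, y) of a constant-curvature arc from the origin."""
    if abs(kappa) < 1e-9:                 # straight segment: pure growth
        return (0.0, length_m)
    x = (1.0 - math.cos(kappa * length_m)) / kappa   # lateral deflection
    y = math.sin(kappa * length_m) / kappa           # forward reach
    return (x, y)

# Growing straight to 1 m, then steering with curvature 1.0 1/m:
print(tip_position(1.0, 0.0))            # (0.0, 1.0)
x, y = tip_position(1.0, 1.0)
print(round(x, 3), round(y, 3))          # 0.46 0.841
```

Growth changes L while the steering actuators change κ, so the two inputs together span a planar workspace; chaining several such arcs extends the idea to three dimensions.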
These robots, referred to as soft growing robots, achieve mobility by everting new material at the tip: this enables lengthening without relative movements between the robot's body and the environment. With soft growing robots, the inspection and exploration length in remote environments is therefore limited only by the amount of body material that can be transported to the field. Although different mechanisms have been used to enable this form of apical extension [26], pneumatically driven solutions have recently shown great potential [27]. In these systems, the growing process is implemented by pressurizing a fluid (typically air) inside a chamber created by a self-folded cylindrical body, which unfolds by everting new material from the tip; this enables the forward growth of the robot's body through lengthening. While growing, the robot's body can be curved/steered by additional actuators distributed along its body (e.g., pneumatic artificial muscles [28], artificial tendons, etc.). The contraction of these additional actuators on one side causes bending (or kinking) along that direction. An example of the growing and steering processes of soft growing robots is shown in Figure 1.

Figure 1. Example of the concept of a soft growing robot that achieves enhanced mobility through growth (top image); concept of growth by eversion of material rolled onto a spool (central image); and the concept of curving by pressurization, with contraction (bottom image) of soft actuators placed and sealed on the main body. In the example, a camera is placed on the tip of the robot.

3. Cyber-physical measurement system

The concept of CPMS is built on the 5C architecture [10] used for CPSs, as shown in Figure 2. It consists of the following five levels:
1. Connection layer: connected with the physical world, where the measurements are collected and the sensing is performed.
2. Conversion layer: responsible for the very first processing for endowing the system with self-awareness capabilities (i.e., reconstruction of its internal state).
3. Cyber layer: responsible for the development of the digital twin model of the measurement system and for endowing the system with self-comparison capabilities (i.e., self-awareness within the network).
4. Cognition layer: responsible for cognition and reasoning, i.e., high-level models and algorithms to endow the measurement system with decision-support capabilities.
5. Configuration layer: starting from the knowledge generated by the cognition level, this layer generates corrective actions, such as adaptation and reactiveness to the environment.

A CPMS can be defined as a novel form of MMS which, in addition to collecting data from the physical world, is able to provide higher-level information thanks to the use of suitable models and 4.0 enabling technologies. Similarly to a CPS, a CPMS has knowledge of its state in time and space (self-awareness) and with respect to other systems in the network (self-comparison); it is capable of enforcing actions for its own maintenance (self-maintain), predicting its own evolution in time and space (self-predict), and adapting to the environment (self-configure). Drawing a comparison with the master-slave architecture, historically MMSs represented the slaves (mostly dedicated to data collection) of a master (i.e., the CPS itself). By evolving into CPMSs, instead, MMSs become CPSs among CPSs; eventually, the current master-slave relationship between MMSs and CPSs turns into a peer-to-peer cooperation.

4. Case study: the soft growing robot

This section addresses the design and implementation of the soft growing robot as a preliminary example of a CPMS to be used for applications within confined and constrained remote environments. First, the design requirements of the system are presented.
Subsequently, its major components (i.e., the robotic platform and the electronic control unit) are described in detail. Finally, an example of a practical measurement scenario through a simulated digital twin of the system is presented. The architecture of the soft growing robot is shown in Figure 3.

4.1. Requirements

The requirements of the soft growing robot are the following:
• access within environments with small cross sections (with a minimum dimension equal to 100 mm);
• high inspection/exploration length (up to 10 m) while maintaining portability;
• controllable growth;
• steering/curving capability; and
• human situation awareness.

The long-term goal is to develop the first soft growing robot endowed with model-based strategies [29] for planning [30], control and navigation to accomplish remote measurement tasks. These requirements make it possible to develop a general-purpose platform for inspection and exploration usable in a wide range of scenarios.

4.2. Robotic platform

The robotic platform is mainly composed of the robot base (where the body material is stored) and the robot body (which grows and accesses the remote environment). The CAD model of the robotic platform is shown in Figure 4. The robot base is the container of the unfolded robot body and represents the pressurized vessel when the robot is in operation. It is formed by an acrylic cylinder with two end caps (QC-108 Qwik Cap, Fernco Inc., Davison, MI). The spool for rolling out the robot material is driven by a DC motor (6408K27, McMaster-Carr Inc., Douglasville, GA) with a magnetic encoder, which allows growing/retracting the robot body and measuring its current length.

Figure 2. Description of a CPMS according to the 5C architecture.
Figure 3. Architecture of the proposed soft growing robotic system (dimensions not to scale).
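As a toy illustration of the length measurement mentioned above, the robot's deployed length can be recovered from the spool motor's encoder counts. This sketch is not taken from the paper's implementation: the encoder resolution and the effective spool radius are assumed values, and the build-up of wrapped material on the spool is neglected for simplicity.

```python
import math

# Assumed, illustrative constants (not from the paper).
COUNTS_PER_REV = 1024    # hypothetical encoder resolution [counts/rev]
SPOOL_RADIUS_M = 0.04    # hypothetical effective spool radius [m]

def deployed_length_m(encoder_counts: int) -> float:
    """Length of body material paid out from the spool, in metres."""
    revolutions = encoder_counts / COUNTS_PER_REV
    return 2 * math.pi * SPOOL_RADIUS_M * revolutions
```

With these made-up constants, one full spool revolution (1024 counts) corresponds to about 0.25 m of paid-out material.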
The vessel of the robot base is a cylinder with diameter d equal to 25 cm and height h equal to 50 cm. The main body of the soft growing robot is made of an airtight tube which is flexible but not stretchable. During the eversion process, the material should slide relative to itself with negligible friction and should guarantee high durability for field applications. To this end, a double-sided silicone-coated ripstop nylon (Rockywoods Fabric LLC, Loveland, CO) is used as the material for the main body of the robot. The soft robot body is rolled up and fixed at one end around a spool inside the base vessel: when pressurized, the material is pushed outside the robot base through an opening and everts from the tip of the robot. When fully extended, the robot body achieves maximum dimensions corresponding to a diameter p of 10 cm and a maximum length l of 10 m. The forward growth is controllable by finding a suitable balance between the desired air pressure inside the vessel and the desired spool rotation, and thus the motor angular velocity about the axis of the spool. To guarantee a reversible steering/curving of the robot body, soft pneumatic actuators (made of the same material as the robot's body) are placed along the entire length of the robot [28]: the steering/curving control is guaranteed by a suitable pressurization of these additional actuators, considering models of the curvature/deformations of the robot's shape [31]. During the retraction process of the robot's body, suitable mechanisms to drive the retraction should be foreseen to avoid structural kinking [32].

4.3. Electronic control unit

The electronic control unit is composed of two sub-systems: one for generating the desired air pressure for the pressurization of the vessel, and one for generating the desired voltage for the DC motor for the growth/retraction of the robot body.
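The balance between spool rotation and growth rate can be sketched kinematically. Note an assumption not stated in the paper: for tip-everting bodies, the material paid out from the spool travels through the core and everts at the tip, so the tip is commonly approximated to advance at half the payout speed; the spool radius below is also invented.

```python
# Hedged kinematic sketch of the growth-rate balance (assumed numbers).
SPOOL_RADIUS_M = 0.04  # hypothetical effective spool radius [m]

def tip_speed_m_s(motor_omega_rad_s: float) -> float:
    """Approximate tip growth speed for a given spool angular velocity."""
    payout = SPOOL_RADIUS_M * motor_omega_rad_s  # tangential payout speed
    return payout / 2.0                          # eversion roughly halves tip speed

def motor_omega_for_tip_speed(v_tip_m_s: float) -> float:
    """Inverse relation: spool angular velocity for a desired growth rate."""
    return 2.0 * v_tip_m_s / SPOOL_RADIUS_M
```

In practice, the vessel pressure must be high enough to sustain eversion at this rate; the spool velocity then acts as the rate-limiting (and measuring) element.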
The pneumatic circuit regulates the air pressure by pulse-width modulation (PWM), which involves the controlled timing of the opening and closing of solenoid valves (SY114-5LOU, SMC) through a MOSFET board (based on IRF540 MOSFETs, STMicroelectronics), with pressure sensors (ASDXAVX100PGAA5, Honeywell) providing feedback. The pneumatic circuit is an essential component of the robot, as it is responsible for both the growth process (one pneumatic tube for the main tube of the robot body) and the steering of the robot body (one pneumatic tube for each of the serial soft actuators placed along the robot body). This homemade PWM-based pneumatic circuit substitutes more expensive solutions based on professional closed-loop pressure regulators. The prototype of the developed electronic control unit (related to the compressed-air circuit) is shown in Figure 5.

4.4. Example of a practical measurement scenario

The practical measurement scenario consists of a human operator performing a visual inspection of a remote site through the proposed soft growing robot, endowed with a tip-mounted camera. An input device is used by the human operator to impart growing/steering commands. A digital twin of the measurement system is built in V-REP, including the model of the robot (modelled as a growing constant-curvature robot [33]), the model of the cameras as well as the model of the environment. The constant-curvature assumption is reasonable when artificial pneumatic muscles are used to steer the robot, as shown in [30]. Snapshots of the measurement scenario are shown in Figure 6, where we can see the remotely operated soft growing robot approaching and inspecting a target (red box) within the simulated remote site. The soft growing robotic platform represents a novel technology for the inspection of confined and constrained remote environments that are not accessible by current technologies.
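A minimal sketch of the PWM pressure regulation described above, assuming a simple proportional law from pressure error to valve duty cycle and a crude first-order vessel model standing in for the real plant. All gains and rates are invented for illustration; this is not the authors' firmware.

```python
# Hypothetical sketch: PWM pressure regulation via a proportional controller.
def duty_cycle(p_target_kpa: float, p_measured_kpa: float, kp: float = 0.05) -> float:
    """Map pressure error to a solenoid-valve duty cycle, clipped to [0, 1]."""
    error = p_target_kpa - p_measured_kpa
    return min(1.0, max(0.0, kp * error))

def simulate(p_target: float = 50.0, steps: int = 200, dt: float = 0.01) -> float:
    """Crude first-order vessel model: filling proportional to duty, plus a leak."""
    p = 0.0                          # vessel gauge pressure [kPa]
    fill_rate, leak_rate = 120.0, 0.5
    for _ in range(steps):
        d = duty_cycle(p_target, p)  # feedback from the pressure sensor
        p += (fill_rate * d - leak_rate * p) * dt
    return p
```

With these made-up parameters the loop settles a little below the setpoint (a pure P controller leaves a steady-state error against the leak), which is why a real regulator would typically add an integral term.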
Furthermore, it represents a suitable platform for enabling measurement applications in large, GPS-denied environments. In addition to industrial inspections, this robotic platform may be used for exploration purposes or even for search and rescue applications after accidents, earthquakes and collapses of buildings. Finally, an additional application is sensor delivery in difficult-to-reach remote sites.

Figure 4. CAD representation of the robotic platform.

5. Conclusion and future work

In this work, a definition of the CPMS was introduced, and a soft growing robot was presented as a case study. A CPMS can be considered a 4.0-oriented evolution of the traditional MMS; as a matter of fact, thanks to the adoption of 4.0 enabling technologies, an MMS is seen not only as a system for collecting data, but also for data processing and interpretation. The soft growing robot as a CPMS is intended to be used for remote measurement and monitoring applications in constrained and confined environments. The proposed system consists of a robot base, to be placed outside the remote site, and a soft body that accesses the site through growth, with the possibility of controlling length and steering. At the tip of the system, a sensor can be placed to enable remote measurement tasks, or a sensor can be deployed through the robot body when the target is reached. In this work, we have considered a tip-mounted camera in the system. The benefits of using soft growing robots for remote measurement and monitoring applications are access within small-scale sections; high inspection/exploration length; transportability; controllable growth and steering; safe interaction with the environment; and the ability to perform sensor deployment.
To get closer to a full definition of the CPMS, future work will be dedicated to embedding additional 4.0 enabling technologies (e.g., artificial intelligence algorithms) in the CPMS, for endowing the monitoring system with autonomous planning/navigation capabilities. Effort will also be dedicated to motion analysis and control in highly constrained situations. Additionally, a set of practical application cases will be identified to test the system and assess its metrological performance. Finally, suitable sensing technologies and processing strategies will be developed to enhance the metrological performance of the system (e.g., in terms of resolution, reliability and accuracy of interaction with the environment). The ultimate goal is to achieve a self-adapting, fully autonomous system for remote monitoring operations to be used reliably and safely for the inspection of unknown and/or constrained and confined environments.

Figure 5. Picture of the electronic control unit (pneumatic board).
Figure 6. Example scenario for remote inspection through the proposed soft growing robot. By using a suitable input device, the human operator drives the growth and steering of the robotic system in the remote environment, accessed through an access section which is slightly larger than the diameter of the robot's body. The human operator sees the remote environment through the tip-mounted camera.

References

[1] D. Seneviratne, L. Ciani, M. Catelani, D. Galar, Smart maintenance and inspection of linear assets: An industry 4.0 approach, Acta IMEKO 7(1) (2018), pp. 50-56. DOI: 10.21014/acta_imeko.v7i1.519
[2] T. I. Erdei, Z. Molnár, N. C. Obinna, G. Husi, A novel design of an augmented reality based navigation system & its industrial applications, Acta IMEKO 7(1) (2018), pp. 57-62. DOI: 10.21014/acta_imeko.v7i1.528
[3] P. Arpaia, E. De Benedetto, L.
Duraccio, Design, implementation, and metrological characterization of a wearable, integrated AR-BCI hands-free system for health 4.0 monitoring, Measurement 177 (2021), art. no. 109280. DOI: 10.1016/j.measurement.2021.109280
[4] P. Arpaia, E. De Benedetto, C. A. Dodaro, L. Duraccio, G. Servillo, Metrology-based design of a wearable augmented reality system for monitoring patient's vitals in real time, IEEE Sensors Journal 21(9) (2021), pp. 11176-11183. DOI: 10.1109/jsen.2021.3059636
[5] L. Angrisani, U. Cesaro, M. D'Arco, O. Tamburis, Measurement applications in industry 4.0: the case of an IoT-oriented platform for remote programming of automatic test equipment, Acta IMEKO 8(2) (2019), pp. 62-69. DOI: 10.21014/acta_imeko.v8i2.643
[6] R. Schiavoni, G. Monti, E. Piuzzi, L. Tarricone, A. Tedesco, E. De Benedetto, A. Cataldo, Feasibility of a wearable reflectometric system for sensing skin hydration, Sensors 20(10) (2020), art. no. 2833. DOI: 10.3390/s20102833
[7] M. Prist, A. Monteriù, E. Pallotta, P. Cicconi, A. Freddi, F. Giuggioloni, E. Caizer, C. Verdini, S. Longhi, Cyber-physical manufacturing systems: An architecture for sensor integration, production line simulation and cloud services, Acta IMEKO 9(4) (2020), pp. 39-52. DOI: 10.21014/acta_imeko.v9i4.731
[8] A. Cataldo, E. De Benedetto, L. Angrisani, G. Cannazza, E. Piuzzi, A microwave measuring system for detecting and localizing anomalies in metallic pipelines, IEEE Transactions on Instrumentation and Measurement 70 (2021), art. no. 8001711. DOI: 10.1109/tim.2020.3038491
[9] Cyber-Physical Systems Public Working Group, Framework for cyber-physical systems: Volume 1, Overview, Version 1.0, NIST Special Publication 1500-201, 2017. Online [accessed 09 June 2021] https://pages.nist.gov/cpspwg/
[10] A. Ahmadi, C. Cherifi, V. Cheutet, Y. Ouzrout, A review of CPS 5 components architecture for manufacturing based on standards, in Proc.
of the IEEE 2017 11th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), Sri Lanka, 06-08 December 2017, pp. 1-6. DOI: 10.1109/skima.2017.8294091
[11] A. Tedesco, M. Gallo, A. Tufano, A preliminary discussion of measurement and networking issues in cyber-physical systems for industrial manufacturing, in Proc. of the 2017 IEEE International Workshop on Measurement and Networking, M&N 2017, Naples, Italy, 27-29 Sept. 2017. DOI: 10.1109/iwmn.2017.8078384
[12] A. Drago, S. Marrone, N. Mazzocca, R. Nardone, A. Tedesco, V. Vittorini, A model-driven approach for vulnerability evaluation of modern physical protection systems, Software and Systems Modeling 18 (2019), pp. 523-556. DOI: 10.1007/s10270-016-0572-7
[13] A. Sforza, C. Sterle, P. D'Amore, A. Tedesco, F. De Cillis, R. Setola, Optimization models in a smart tool for the railway infrastructure protection, in: CRITIS 2013: Critical Information Infrastructures Security, Lecture Notes in Computer Science, 8328, Amsterdam, The Netherlands, 16-18 Sept. 2013, pp. 191-196. DOI: 10.1007/978-3-319-03964-0_17
[14] P. D'Amore, A. Tedesco, Technologies for the implementation of a security system on rail transportation infrastructures, in Topics in Safety, Risk, Reliability and Quality 27 (2015), pp. 123-141. DOI: 10.1007/978-3-319-04426-2_7
[15] D. Yin, X. Ming, X. Zhang, Understanding data-driven cyber-physical-social system (D-CPSS) using a 7C framework in social manufacturing context, Sensors 20(18) (2020), art. no. 5319. DOI: 10.3390/s20185319
[16] S. Grazioso, A. Tedesco, M. Selvaggio, S. Debei, S. Chiodini, E. De Benedetto, G. Di Gironimo, A. Lanzotti, Design of a soft growing robot as a practical example of cyber-physical measurement systems, in Proc. of the 2021 IEEE International Workshop on Metrology for Industry 4.0 and IoT, Rome, Italy, 7-9 June 2021.
[17] M. Friedrich, G. Dobie, C. Chan, S. Pierce, W. Galbraith, S. Marshall, G.
Hayward, Miniature mobile sensor platforms for condition monitoring of structures, IEEE Sensors Journal 9(11) (2009), pp. 1439-1448. DOI: 10.1109/jsen.2009.2027405
[18] M. Y. Moemen, H. Elghamrawy, S. N. Givigi, A. Noureldin, 3-D reconstruction and measurement system based on multi mobile robot machine vision, IEEE Transactions on Instrumentation and Measurement 70 (2021), pp. 1-9. DOI: 10.1109/tim.2020.3026719
[19] C. Della Santina, M. G. Catalano, A. Bicchi, Soft robots, Berlin, Heidelberg: Springer, 2020. DOI: 10.1007/978-3-642-41610-1_146-2
[20] S. Kim, C. Laschi, B. Trimmer, Soft robotics: A bio-inspired evolution in robotics, Trends in Biotechnology 31(5) (2013), pp. 287-294. DOI: 10.1016/j.tibtech.2013.03.002
[21] L. Angrisani, S. Grazioso, G. Di Gironimo, D. Panariello, A. Tedesco, On the use of soft continuum robots for remote measurement tasks in constrained environments: A brief overview of applications, in Proc. of the 2019 IEEE International Symposium on Measurements & Networking (M&N), Catania, Italy, 8-10 July 2019, pp. 1-5. DOI: 10.1109/iwmn.2019.8805050
[22] W. Hu, G. Z. Lum, M. Mastrangeli, M. Sitti, Small-scale soft-bodied robot with multimodal locomotion, Nature 554 (2018), pp. 81-85. DOI: 10.1038/nature25443
[23] I. H. Han, H. Yi, C.-W. Song, H. E. Jeong, S.-Y. Lee, A miniaturized wall-climbing segment robot inspired by caterpillar locomotion, Bioinspiration & Biomimetics 12(4) (2017), 13 pages. DOI: 10.1088/1748-3190/aa728c
[24] G. Gu, J. Zou, R. Zhao, X. Zhao, X. Zhu, Soft wall-climbing robots, Science Robotics 3(25) (2018), 12 pages. DOI: 10.1126/scirobotics.aat2874
[25] E. W. Hawkes, L. H. Blumenschein, J. D. Greer, A. M. Okamura, A soft robot that navigates its environment through growth, Science Robotics 2(8) (2017), 8 pages. DOI: 10.1126/scirobotics.aan3028
[26] A. Sadeghi, A. Mondini, B.
Mazzolai, Toward self-growing soft robots inspired by plant roots and based on additive manufacturing technologies, Soft Robotics 4(3) (2017), pp. 211-223. DOI: 10.1089/soro.2016.0080
[27] J. D. Greer, T. K. Morimoto, A. M. Okamura, E. W. Hawkes, A soft, steerable continuum robot that grows via tip extension, Soft Robotics 6 (2019), pp. 95-108. DOI: 10.1089/soro.2018.0034
[28] N. D. Naclerio, E. W. Hawkes, Simple, low-hysteresis, foldable, fabric pneumatic artificial muscle, IEEE Robotics and Automation Letters 5(2) (2020), pp. 3406-3413. DOI: 10.1109/lra.2020.2976309
[29] S. Grazioso, G. Di Gironimo, B. Siciliano, A geometrically exact model for soft continuum robots: The finite element deformation space formulation, Soft Robotics 6 (2019), pp. 790-811. DOI: 10.1089/soro.2018.0047
[30] M. Selvaggio, L. Ramirez, N. Naclerio, B. Siciliano, E. Hawkes, An obstacle-interaction planning method for navigation of actuated vine robots, in Proc. of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May - 31 Aug. 2020, pp. 3227-3233. DOI: 10.1109/icra40945.2020.9196587
[31] S. Grazioso, G. Di Gironimo, B. Siciliano, From differential geometry of curves to helical kinematics of continuum robots using exponential mapping, in Proc. of the International Symposium on Advances in Robot Kinematics, Bologna, Italy, 01-05 July 2018, pp. 319-326. DOI: 10.1007/978-3-319-93188-3_37
[32] M. M. Coad, R. P. Thomasson, L. H. Blumenschein, N. S. Usevitch, E. W. Hawkes, A. M. Okamura, Retraction of soft growing robots without buckling, IEEE Robotics and Automation Letters 5(2) (2020), pp. 2115-2122. DOI: 10.1109/lra.2020.2970629
[33] S. Grazioso, G. Di Gironimo, B. Siciliano, Analytic solutions for the static equilibrium configurations of externally loaded cantilever soft robotic arms, in Proc. of the 2018 IEEE International Conference on Soft Robotics (RoboSoft), Livorno, Italy, 24-28 April 2018, pp. 140-145. DOI: 10.1109/robosoft.2018.8404910

Vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies
Acta IMEKO
ISSN: 2221-870X
September 2021, Volume 10, Number 3, 125-133

Acta IMEKO | www.imeko.org September 2021 | Volume 10 | Number 3 | 125

Vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies

Zsolt J.
Viharos1,3, László Móricz2, Máté Büki1
1 Centre of Excellence in Production Informatics and Control, Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Kende u. 13-17., H-1111, Budapest, Hungary
2 Zalaegerszeg Center of Vocational Training, Kinizsi Pál utca 74., H-8900, Zalaegerszeg, Hungary
3 John von Neumann University, Izsáki u. 10., H-6000, Kecskemét, Hungary

Abstract
The 21st-century manufacturing technology is unimaginable without the various CAM (computer-aided manufacturing) toolpath generation programs. The aim of developing the toolpath strategies offered by the cutting control software is to ensure the longest possible tool lifetime and high efficiency of the cutting method. In this paper, the goal is to compare the efficiency of three types of toolpath strategies in the very special field of micro-milling of ceramic materials. The dimensional distortion of the manufactured geometries served to draw the Taylor curve describing the wearing progress of the cutting tool, helping to determine the worn-in, normal and wear-out stages. This separation also allows the connected high-frequency vibration measurements to be partitioned accordingly. Applying the authors' novel feature selection technique, the basis for vibration-based micro-milling tool condition monitoring for ceramics cutting is presented for different toolpath strategies. It resulted in the identification of the most relevant vibration signal features and the presentation of the identified and automatically separated tool wearing stages as well.

Section: Research paper

Keywords: ceramics; micro-milling; tool wear; machining strategy; vibration analysis; feature selection

Citation: Zsolt János Viharos, László Móricz, Máté István Büki, Vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies, Acta IMEKO, vol. 10, no. 3, article 18, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-18

Section editor: Lorenzo Ciani, University of Florence, Italy

Received February 5, 2021; in final form September 9, 2021; published September 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the European Commission through the H2020 project EPIC (https://www.centre-epic.eu/) under grant No. 739592; by the Hungarian ED_18-2-2018-0006 grant on "Research on prime exploitation of the potential provided by the industrial digitalisation"; and by the Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program, Hungary.

Corresponding author: Zsolt János Viharos, e-mail: viharos.zsolt@sztaki.mta.hu

1. Introduction

Machining of rigid materials with a regular cutting-edge geometry is one of the main trends of the 21st century. Ceramics are such rigid materials, employed ever more widely as raw materials thanks to their high hardness and thermal resistance [1], [2]. There are various options for machining them, e.g. using water, laser or abrasive grinding [3], [4], [5]; however, their high costs and complex setups are important drawbacks of these technologies. Therefore, the machining of ceramics with a classical, regular cutting-edge geometry is still a promising solution; however, considering the relatively quick wearing process of the cutting tool, without an appropriate technological optimization this methodology is economically not acceptable. Optimizing a technology is typically a multicriteria assignment: here, the main aim is to find the smallest production cycle time while, at the same time, the tool life has to be maximal, too. The effect of technology parameters on tool life has been investigated in several of the authors' previous articles [6], [7], [8]. An important part of tool life analysis is the investigation of the vibrations generated during the cutting process. The relative vibration between the micro-milling cutter and the workpiece influences the processing quality and tool life [9]. Frequency spectrum analysis was executed to establish, for example, the tool wear and chatter frequency characterization; based on this method, phenomena of the cutting process that are not clearly visible in the time domain can be revealed [10]. To determine the source of chatter or other dominant tool wearing frequencies, a proper selection of the critical dynamic force signatures is required to perform frequency or power spectrum analysis. Dimla and Lister [11] identified for turning that the tangential force components act on the rake face of the insert and that both cutting forces and vibration signatures were most sensitive to tool wear. Youn and Yang [12] differentiated cutting force components to detect the difference between flank and crater wear for varying machining conditions in turning. Work by Sarhan et al. [13] involving the use of a 4-flute end mill on steel of 90 BHN indicated that the magnitudes of the first harmonics of the cutting force frequency spectrum increased significantly with an increase in tool flank wear, feed per tooth and axial depth of cut. Elbestawi et al. [14] made similar observations when face milling AISI 1020 steel and an aluminium alloy. The deterioration in tool wear increases the tool-chip and tool-workpiece contact areas, and these in turn cause an increase in friction, too. A worn tool generates high-frequency tonal vibrational energy that does not arise in a new tool [15]. Consequently, the noise emitted in the area is amplified [16]. Tonshoff et al.
[17] employed acoustic emission signals to determine the influence of a hard turning process on the subsurface microstructure of the workpiece. The results obtained showed that the amplitude of the frequency analysis increased with increasing flank wear due to an enlarged contact area. An appropriate complement to vibration analysis is the application of a neural network-based method: Qi Yao et al. [9] examined the relationship between cutting force amplitude, frequency and vibration displacement, and ascertained it by using a neural network method. The scientific literature mirrors that vibration signal analysis forms an increasing trend among the tool wear monitoring methods. In this paper, the effect of toolpaths generated by CAM software on the tool lifetime is examined. The three most popular toolpaths (strategies: wave form, cycloid, chained) are analysed, as described in the next three sections.

1.1. Wave form path

The wave form toolpath (Figure 1) for milling technology results in the tool working with a constant tool diameter sweep. The contact angle along the toolpath has a direct effect on the cutting forces; by adjusting the contact angle, the cutting force can also be controlled. Owing to this, the tool load is constant in every change of direction during the machining, through avoiding sharp changes in direction, which cannot be found among the common path generation methods [6], [19]. The other advantage of the wave form strategy is that the value of the material removal speed is kept constant, which is different from the other path generation methods. Cutting distributes wear evenly along the entire flute length, rather than just on one tip. The radial cutting depth is reduced to ensure a consistent cutting force, allowing the cut material to escape from the flutes. Thus, the tool lifetime is further extended, as most of the heat is removed in the chip.

1.2. Cycloid path

The essence of the cycloid path strategy is to move the tool on a circle with the largest possible radius, thus reducing the kinematic contact (and tool load) (Figure 2). The cycloid form is a milling technology where the tool moves along an arc, avoiding sharp changes in direction. Although it does not control the tool, this strategy can also reduce the tool load, and the roughing strategy is optimized more easily [6], [8]. The problem with the common toolpath is that the tool load increases significantly in the corners, requiring shallower depths of cut and a reduced feed. This problem can be avoided with the cycloid and wave form paths. Because the pocket used during the experiment did not have a circular geometry, the technology was optimized with entremets, an option in the software with which the tool load can be reduced in the corners. By choosing the correct stepover, the contact angle can be kept at a specified level. Another advantage of the strategy is that high feeds can be achieved along some paths.

1.3. Constant stepover toolpaths (chained path)

Most software packages are usually capable of creating constant stepover toolpaths, contour-parallel and direction-parallel paths, but these algorithms do not focus on the machining parameters, only on the material removal [20]. During the generation of the constant stepover path (chained path), the cutting tool removes the material moving back and forth on the horizontal plane within each z (vertical) level (Figure 3). The strategy uses both directional and indirectional milling technology, leading to poor surface quality and a short tool life.

2. Experiments for the machining of ceramics

The setup for the experiments is presented in Figure 4. One of the main aims is to follow the wearing process of the micro-milling tool during the machining of ceramics and to compare it, in an offline mode, against the geometrical changes (length, width and depth of features) of the machined ceramic workpiece.
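The contact-angle control underlying the waveform strategy above (and, via the stepover, the cycloid strategy) can be made concrete with the standard milling geometry: for a tool of radius r, a radial stepover a_e produces an engagement angle theta with a_e = r(1 - cos theta). A small illustrative sketch, not taken from the paper:

```python
import math

def stepover_for_engagement(tool_radius_mm: float, engagement_deg: float) -> float:
    """Radial stepover a_e that produces a given tool engagement angle."""
    return tool_radius_mm * (1.0 - math.cos(math.radians(engagement_deg)))

def engagement_for_stepover(tool_radius_mm: float, stepover_mm: float) -> float:
    """Engagement angle [deg] produced by a given radial stepover."""
    return math.degrees(math.acos(1.0 - stepover_mm / tool_radius_mm))
```

For example, a full slot (a_e = 2r) gives a 180 degree engagement, while limiting the stepover to r/2 keeps the engagement at 60 degrees; holding this quantity constant is what keeps the cutting force level along a constant-engagement path.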
another goal is to use high-frequency online vibration measurement during the cutting process. figure 1. element of waveform path [18]. figure 2. cycloid toolpath [20]. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 127 connections between the wearing stages and the measured online and offline parameters were determined using a self-developed, artificial neural network-based feature selection solution. 2.1. parameters of the milling machine the basis of the experiments was the milling machine that was planned and built by the cncteamzeg group. it is operated in zalaegerszeg, hungary (figure 4). during the planning, the aim was to cut metal, but the preliminary calculations and tests on ceramic material removal proved that the machine is able to cut ceramic material as well. 3. indirect, off-line tool wear detection measuring the produced workpiece geometry during the cutting process, microscopic images were taken repeatedly after machining a certain number of workpieces in order to monitor, in an offline way, the wearing evolution of the tool through the workpiece geometry. measurements were performed using a zeiss discovery v8 microscope, and the wearing in the pictures was evaluated by the authors. beyond the microscopic control of the cutting tool geometry, the changes caused by the tool wear on the machined workpiece were also measured. these dimensional changes of the manufactured geometries are summarized in figure 5. during the measurements, changes in the width, length and depth of the geometries were examined. the applied technology settings (axial depth of cut, radial depth of cut, cutting speed, feed rate) were the same in all three cases; only the machining paths (strategies: wave form, cycloid, chained) were varied.
these workpiece measurements clearly represent complete and valid cutting tool life curves (taylor curves), and various conclusions can be drawn from them: • it can be seen in figure 5 that the tool lifetime achieved using the chained toolpath is nearly half of the tool lifetime achieved using the wave form and cycloid toolpaths. • during the application of the chained path, a tool break occurred early, during the machining of the 11th set. • with the cycloid toolpath, exponential tool wear and tool break were observed in the 20th set. • using the waveform path, the manufacturing time of one feature was 57% longer than in the case of the chained toolpath. • with the cycloid path, the manufacturing time of a feature was nearly five times longer than with the chained toolpath. • considering the tool lifetime and the manufacturing time, the waveform seems to be the most economical toolpath strategy for ceramic machining. figure 3. contour-parallel and direction-parallel stepover toolpaths (chained path). figure 4. the applied cnc machine; detailed parameters are given in [8]. figure 5. changes in the geometries i., ii. and iii. of the machined features for the wave form, cycloid and chained toolpaths. 4. direct, on-line tool wear analysis using vibration measurements the scientific literature reflects that acoustic emission (ae) signals show an increasing trend with increasing tool wear in metal cutting, but this is not evident for the micro-milling of ceramics. in [21], bhuiyan et al. pointed out that the increase in tool wear increases the tool-workpiece contact area and the friction coefficient as well. in another experiment in the metal cutting field, the opposite, a decrease in the vibration amplitudes, was detected during the measurements.
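the taylor curves referred to above are conventionally summarized by taylor's tool life equation, quoted here in its standard textbook form (the exponent $n$ and the constant $C$ depend on the tool/workpiece pair and are not reported in this study):

```latex
v_c \, T^{n} = C
```

where $v_c$ is the cutting speed and $T$ is the tool life; on a log-log plot this is the straight-line tool life relation underlying the measured wear curves.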
in a previous paper, the authors reported a similar phenomenon during ceramic machining as well [8]. based on the results of the measurements, a decrease in the vibration amplitudes was detected during the wear evolution of the tool; consequently, the contact surfaces between the cutting tool and the workpiece became smaller during the wearing of the tool in ceramics milling, mainly because of the complex and multiplicative wearing forms. the related frequency analysis showed that the wear-out process of the tool also resulted in a shift of the dominant frequencies into higher frequency ranges. in this research, the authors supplemented the results of the previous paper [8] with an ae study on various toolpath strategies. in the reported previous research, vibration measurement of ceramics milling was established with a sampling frequency of 100 khz, measuring in one direction. instead of analysing millions of individual measured values as time series of the vibration amplitudes, several descriptive features (e.g., statistical measures like amplitude, standard deviation, 3rd moment, etc.) were calculated. such a feature vector was calculated for each workpiece/machining process, while the same tool was used during the experiments until tool wear-out or tool break. feature selection was applied to find the most descriptive features for distinguishing three typical stages of the tool life. for this division, in the first step, the tool wear curve was determined by an indirect method from the geometry produced on the workpiece (based on the measured workpiece geometries). after this, the curves were divided into three sections depending on the wear phase of the tool and the degree of geometry reduction (worn-in, normal, wear-out (or break)).
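the per-workpiece feature vector described above can be sketched as follows; this is a minimal, illustrative subset of the statistics named in the text, while the study's full feature set is larger:

```python
import numpy as np

def vibration_features(signal):
    """compute descriptive features of one machining run's vibration
    record, of the kind named in the text (mean crossings, standard
    deviation, higher standardized moments)."""
    x = np.asarray(signal, dtype=float)
    mean = x.mean()
    std = x.std()
    centered = x - mean
    return {
        "mean": mean,
        "std": std,
        # number of times the signal crosses its own mean value
        "mean_crossings": int(np.sum(np.diff(np.sign(centered)) != 0)),
        # 3rd standardized moment (skewness)
        "third_moment": float(np.mean(centered ** 3) / std ** 3),
        # 4th standardized moment (kurtosis, distribution flatness)
        "fourth_moment": float(np.mean(centered ** 4) / std ** 4),
    }
```

one such vector per machined workpiece turns millions of raw samples into a short, comparable description of each run.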
4.1. micro-cutting tool wearing stages in contrast to the previous research [8], it was not the changes of the geometry parameters (width, length, depth) that were analysed but the volume changes calculated from the geometries obtained by each toolpath strategy (figure 6). during the analysis of the graphs, three well-separable sections of the taylor curve were observed for the waveform graph (worn-in, normal condition, worn-out). in contrast, for the cycloid path, no significant tool wear was observed until a certain geometry was manufactured (period "a"), followed by steep but uniform tool wear (period "b"). for the chained toolpath, the rapid wear process (period "a") as well as the normal wear condition (period "b") were observed; however, the tool broke at the very beginning of the wear-out phase (period "c"). 4.2. most descriptive, direct vibration signal behaviour with respect to the micro-cutting tool wear-out after running the feature selection method called adaptive, hybrid feature selection (ahfs), developed by some of the authors [22], the variables/features (calculated from the vibration signal) that most accurately describe the change in the three stages of the taylor curve were determined. it has to be mentioned that the feature selection identifies the most informative feature first, while the second one adds the most additional information, and so on; consequently, the features are not independent, which is really important for the evaluation of their meaning and effects. selected features for the waveform strategy: • number of times the signal crosses its mean value • standard deviation • fourth moment (kurtosis) having identified the most informative features calculated from the measured vibration signal, their evolution along the wearing progress can be presented, partly for engineering validation of the results of the mathematical algorithm. figure 6.
changes in the volumes of the manufactured geometries for the wave form (upper), cycloid (middle) and stepover/chained (bottom) toolpaths. the progress in the values of the three selected vibration signal features is presented along the wearing stages of the cutting tool in figure 7. the first identified variable, "number of times the signal crosses its mean value", describes how densely the vibration signal crosses its mean level (the x axis). in the wear-in stage (period "a") the curve is at a high level, but there is a continuous decrease. this means that the tool vibrates at a high frequency in the initial stage. in the normal wearing phase (period "b"), the signal oscillated at a lower frequency compared to the "a" phase. in the wear-out phase (period "c"), there was a further decrease in the number of mean crossings. the second feature identified was the standard deviation, which showed a change similar to the previous variable. in the wear-in period (period "a"), the signal shows a large variance. in the normal period of the tool lifetime (period "b"), there was a decreasing trend of the standard deviation. in the wear-out period (period "c"), the standard deviation of the signal showed a drastic decrease. the third parameter identified is the fourth moment, also called kurtosis, which describes the distribution flatness of the signal. in the case of a sharp tool, the kurtosis follows a flat trend, while in the case of a worn tool, the distribution curve takes on an increasingly sharper shape. selected features for the cycloid strategy: • number of times the signal crosses its mean value • mean value • second moment. the progress in the values of the three selected vibration signal features is presented along the wearing stages of the cutting tool in figure 8.
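the incremental "most informative first" ranking described in section 4.2 can be sketched generically; this is a plain greedy stand-in using a fisher-style between/within variance score, not the ahfs algorithm of [22], which scores candidates conditionally on the features already chosen:

```python
import numpy as np

def fisher_score(x, y):
    """between-class vs. within-class variance of one feature over
    labelled wear-stage samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    overall = x.mean()
    between = within = 0.0
    for c in np.unique(y):
        vals = x[y == c]
        between += len(vals) * (vals.mean() - overall) ** 2
        within += np.sum((vals - vals.mean()) ** 2)
    return between / within if within > 0 else float("inf")

def rank_features(X, y, k):
    """return indices of the k features that best separate the wear
    stages, most informative first (greedy, unconditional scoring)."""
    X = np.asarray(X, dtype=float)
    remaining = list(range(X.shape[1]))
    selected = []
    for _ in range(k):
        best = max(remaining, key=lambda j: fisher_score(X[:, j], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

the same scoring, applied per psd frequency bin instead of per statistical feature, is one way to locate a frequency at which the wear stages separate well.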
selected features for the chained strategy: • mean value • fourth moment • number of times the signal crosses its mean value. figure 7. selected changes in the vibration signal behaviour for the waveform strategy: "number of times the signal crosses its mean value" - "standard deviation" - "fourth moment" along the tool life-cycle. figure 8. selected changes in the vibration signal behaviour for the cycloid strategy: "number of times the signal crosses its mean value" - "mean value" - "second moment" along the tool life-cycle. the progress in the values of the three selected vibration signal features is presented along the wearing stages of the cutting tool in figure 9. 4.3. most descriptive vibration signal behaviour in the frequency domain with respect to the micro-cutting tool wear-out the vibrations are stochastic signals because they represent temporal processes in which parts of the phenomenon are characterized by probability variables (ceramic material inhomogeneity, asymmetric wear of the tool edges, factors of the micromachining process that cannot be described in an exact way, white noise, etc.). in the case of the power spectral density (psd), the power of the signal per unit frequency range is calculated. in the analysis of the psd functions, the goal is to use the feature selection method to find those psd frequency components at which the individual sections of the curve are clearly separated from each other based on the separated stages of the taylor curve. the feature selection results showed that the highest separation of the three ranges occurs at 21758 hz (figure 10) using the waveform strategy. figure 10 shows that the ranges are separated at the analysed frequency applying the waveform strategy. in the wear-out stage (period "c"), two recorded data sets were damaged during the measurements, so they cannot be considered in the analyses.
therefore, only three curves are visible in the range of the wear-out period. at the examined frequency (21758 hz), a continuous decrease in amplitude is observed, which indicates a decrease in the contact surface between the tool and the workpiece (figure 11). figure 9. selected changes in the vibration signal behaviour for the chained strategy: "mean value" - "fourth moment" - "number of times the signal crosses its mean value" along the tool life-cycle. figure 10. separation of the stages of the wear-in (thin curves), normal wear condition (middle thick curves) and wear-out (thick curves) tool according to the waveform strategy. figure 11. variation of the psd amplitude values along the manufactured workpieces at 21758 hz for the waveform strategy. the next toolpath examined was the cycloid strategy. here, the results showed that the separation of the ranges should be sought at 1709 hz, as summarized in figure 12. as with the waveform, a continuous decrease in amplitude is observed for the cycloid strategy. the last toolpath examined was the chained strategy. here, the results showed that the separation of the ranges should be sought at 2796 hz, as presented in figure 14. in figure 13, the three ranges are well separated at the frequency obtained by the feature selection. in the case of the chained toolpath, based on the volume analysis of the pocket, the geometry of the last pocket did not differ significantly from the size of the previous one. however, the tool break occurred in the pocket after the last one presented, so, in the labelling, the last pocket was classified into the range of the wear-out period.
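the frequency-domain analysis above (estimating the psd, then tracking the amplitude of one selected bin across successive workpieces) can be sketched as follows; the sampling frequency matches the cited 100 khz setup, while all signal values are illustrative:

```python
import numpy as np

def psd_estimate(signal, fs):
    """one-sided periodogram estimate of the power spectral density
    (signal power per unit frequency), with a hann window."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    win = np.hanning(n)
    spectrum = np.fft.rfft(x * win)
    psd = (np.abs(spectrum) ** 2) / (fs * np.sum(win ** 2))
    psd[1:-1] *= 2.0  # fold negative frequencies into the one-sided estimate
    return np.fft.rfftfreq(n, d=1.0 / fs), psd

def amplitude_trend(psd_rows, freqs, target_hz):
    """least-squares slope of the psd amplitude at the bin nearest
    target_hz across successive workpieces: a negative slope matches the
    decreasing-contact behaviour reported for the waveform and cycloid
    strategies, a positive one the chained toolpath."""
    j = int(np.argmin(np.abs(np.asarray(freqs) - target_hz)))
    amps = np.asarray([row[j] for row in psd_rows])
    return float(np.polyfit(np.arange(len(amps)), amps, 1)[0])
```

usage would be one psd_estimate(record, fs=100_000) per workpiece, followed by amplitude_trend(all_psds, freqs, 21758) for the selected bin.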
it can be seen in figure 15 that the 8th point jumps out, so the last data point (which alone represents the wear-out stage) overlaps with the normal stage. based on this, it can be concluded that the tool breakage was not caused by tool wear alone. a similar phenomenon can be observed for the waveform path (figure 10), where the amplitude measured at the last geometry of the taylor curve falls into the normal tool wear range. however, determining the cause of the fractures requires further investigation. in contrast to the previous toolpath strategies, an opposite, increasing amplitude trend was observed along the machining when applying the chained toolpath (figure 15). 5. validation and representation of the findings according to the original, measured vibration signal curves for engineering-oriented validation and representation, some original vibration measurements for the selected experiments, marked as yellow circles in figure 7, figure 8 and figure 9 (placed into the different stages on the identified, important feature curves of the waveform, cycloid and chained toolpaths, respectively), are presented in tabular form in figure 16. it clearly shows that the presented methodology works appropriately: the differences between the three tool wear stages mirror the identified behaviours; consequently, the way is open for realizing vibration-based monitoring and supervision of the micro-milling of ceramics. figure 12. separation of the stages of the wear-in (thin curves), normal wear condition (middle thick curves) and wear-out (thick curves) tool according to the cycloid toolpath. figure 13. variation of the psd amplitude values along the manufactured workpieces at 1709 hz for the cycloid strategy. figure 14. separation of the stages of the wear-in (thin curves), normal wear condition (middle thick curves) and wear-out (a thick curve) tool according to the chained toolpath. figure 15.
variation of the amplitude values along the manufactured workpieces at 2796 hz using the chained toolpath. 6. conclusions and outlook in this paper, tool wear monitoring was investigated using direct and indirect methods to compare three cam path strategies (waveform, cycloid and chained). the key conclusions of the paper are: • the results clearly show that the introduced methodology works well; it was found that the most relevant signal features and the original, "pure" signal measurements mirror the identified behaviour; consequently, the way is open for realizing vibration-based monitoring and supervision of the micro-milling of ceramics. • the selected features of the vibration signals describe the three typical stages (worn-in, normal, wear-out) of the tool life cycle according to the identified taylor curve. • in general, the feature "number of times the signal crosses its mean value" has the highest relevance among the vibration signal features, so this measure describes the tool wearing progress in the most accurate way. • using the feature selection method, it was possible to find the frequencies at which the individual regions of the taylor curves are well separated from each other. • applying the introduced method, it is possible to determine the actual wear of the cutting tool by analysing the vibration frequencies in the micro-milling of ceramics. as a research outlook, more detailed, novel tool wearing symptoms will be analysed in the future; the milling process/toolpath will be split up into individual, homogeneous, much smaller sections (layers and curves & linear movements), so a more sensitive and more detailed process monitoring will be possible; moreover, different types of tool wearing will be considered separately. acknowledgement this work was supported by the european commission through the h2020 project epic (https://www.centre-epic.eu/) under grant no.
739592; by the hungarian ed_18-2-2018-0006 grant on a "research on prime exploitation of the potential provided by the industrial digitalisation"; and by the ministry of innovation and technology nrdi office within the framework of the artificial intelligence national laboratory program, hungary. figure 16. vibration signal examples representing the progress at different tool wearing stages (worn-in, normal condition, wear-out) using the three analysed toolpath strategies (waveform, cycloid, chained). references [1] p. jansohn, modern gas turbine systems, high efficiency, low emission, fuel flexible power generation. woodhead publishing series in energy, 20, 2013, isbn 978-1-84569-728-0, pp. 8-38. [2] l. móricz, zs. j. viharos, trends on applications and feature improvements of ceramics, manufacturing 2015 conference, budapest university of technology and economics, 15, no. 2, 2005, pp. 93-98. [3] lingfei ji, yinzhou yan, yong bao, yijian jiang, crack-free cutting of thick and dense ceramics with co2 laser by single-pass process, optics and lasers in engineering, vol. 46, issue 10, 2008, pp. 785-790. doi: 10.1016/j.optlaseng.2008.04.020 [4] jiyue zeng, thomas j. kim, an erosion model for abrasive waterjet milling of polycrystalline ceramics, wear, 199, 1996, pp. 275-282. doi: 10.1016/0043-1648(95)06721-3 [5] l. móricz, zs. j. viharos, optimization of ceramic cutting, and trends of machinability, 17th international conference on energetics-electrical engineering - 26th international conference on computers and educations, hungarian technical scientific society of transylvania, 2016, pp. 105-110. [6] l. móricz, zs. j.
viharos, a. németh, a. szépligeti, efficient ceramics manufacturing through tool path and machining parameter optimisation, 15th imeko tc10 workshop on technical diagnostics, budapest, hungary, 6-7 june 2017, pp. 143-148. online [accessed 2 september 2021] https://www.imeko.org/publications/tc10-2017/imeko-tc10-2017-024.pdf [7] l. móricz, zs. j. viharos, zs. a. németh, a. szépligeti, indirect measurement and diagnostics of the tool wear for ceramics micromilling optimisation, xxii imeko world congress, 3-6 september 2018, belfast, united kingdom, journal of physics: conference series (jpcs), 1065, 2018. doi: 10.1088/1742-6596/1065/10/102003 [8] l. móricz, zs. j. viharos, a. németh, a. szépligeti, m. büki, offline geometrical and microscopic & on-line vibration based cutting tool wear analysis for micro-milling of ceramics, measurement, 163, 2020, 108025. doi: 10.1016/j.measurement.2020.108025 [9] xiaohong lu, zhenyuan jia, xinxin wang, yubo liu, mingyang liu, yixuan feng, steven y. liang, measurement and prediction of vibration displacement in micro-milling of nickel-based superalloy, measurement, 145, 2019, pp. 254-263. doi: 10.1016/j.measurement.2019.05.089 [10] c. k. toh, vibration analysis in high speed rough and finish milling hardened steel, journal of sound and vibration, 278, 2004, pp. 101-115. doi: 10.1016/j.jsv.2003.11.012 [11] d. e. dimla, p. m. lister, on-line metal cutting tool condition monitoring. i: force and vibration analysis, international journal of machine tools and manufacture, 40, 2000, pp. 739-768. doi: 10.1016/s0890-6955(99)00084-x [12] j. w. youn, m. y. yang, a study on the relationships between static/dynamic cutting force components and tool wear, journal of manufacturing science and engineering, transactions of the american society of mechanical engineers, 123, 2001, pp. 196-205. doi: 10.1115/1.1362321 [13] a. sarhan, r. sayed, a. a. nassr,
el-zahry, interrelationships between cutting force variation and tool wear in end milling, journal of materials processing technology, 109, 2001, pp. 229-235. doi: 10.1016/s0924-0136(00)00803-7 [14] m. a. elbestawi, t. a. papazafiriou, r. x. du, in-process monitoring of tool wear in milling using cutting force signature, international journal of machine tools and manufacture, 31, 1991, pp. 55-73. doi: 10.1016/0890-6955(91)90051-4 [15] a. b. sadat, tool wear measurement and monitoring techniques for automated machining cells, in: h. masudi (ed.), tribology symposium, pd-vol. 61, american society of mechanical engineers, new york, 1994, pp. 103-115. [16] a. b. sadat, s. raman, detection of tool flank wear using acoustic signature analysis, wear, 115, 1987, pp. 265-272. doi: 10.1016/0043-1648(87)90216-x [17] h. k. tönshoff, m. jung, s. männel, w. rietz, using acoustic emission signals for monitoring of production processes, ultrasonics, 37, 2000, pp. 681-686. doi: 10.1016/s0041-624x(00)00026-3 [18] i. szalóki, programming trochoidal trajectories in microsoft office excel, óe / bgk / cutting technology computer design task, budapest, 2012, p. 30. [19] w. shao, y. li, c. liu, x. hao, tool path generation method for five-axis flank milling of corner by considering dynamic characteristics of machine tool, proc. of intelligent manufacturing in the knowledge economy, procedia cirp 56, 2016, pp. 155-160. doi: 10.1016/j.procir.2016.10.046 [20] b. warfield, complete guide to cam toolpaths and operations for milling, 2020 edition. online [accessed 1 august 2020] https://www.cnccookbook.com/cam-features-toolpath-cnc-rest-machining [21] m. s. h. bhuiyan, i. a. choudhury, m. dahari, y. nukman, s. z. dawal, application of acoustic emission sensor to investigate the frequency of tool wear and plastic deformation in tool condition monitoring, measurement, 92, 2016, pp. 208-217. doi: 10.1016/j.measurement.2016.06.006 [22] zs. j. viharos, k. b. kis, á. fodor, m. i.
büki, adaptive, hybrid feature selection (ahfs), pattern recognition, 116, 2021, art. 107932. doi: 10.1016/j.patcog.2021.107932 jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-7 pawandeep singh matharu1, akash ashok ghadge1, yara almubarak2, yonas tadesse1,3,4,5 1 humanoids, biorobotics, and smart systems laboratory, mechanical engineering department, the university of texas at dallas, richardson, tx 78705, usa 2 sorobotics laboratory, mechanical engineering department, wayne state university, detroit, mi 48202, usa 3 biomedical engineering department, the university of texas at dallas, richardson, tx 78705, usa 4 electrical and computer engineering department, the university of texas at dallas, richardson, tx 78705, usa 5 alan g.
macdiarmid nanotech institute, the university of texas at dallas, richardson, tx 78705, usa section: research paper keywords: artificial muscles; underwater robots; biomimetics; computer vision; jellyfish; smart materials; tcp citation: pawandeep singh matharu, akash ashok ghadge, yara almubarak, yonas tadesse, jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring, acta imeko, vol. 11, no. 3, article 6, september 2022, identifier: imeko-acta-11 (2022)-03-06 section editor: zafar taqvi, usa received february 27, 2022; in final form august 25, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was partially supported by the office of naval research, usa. corresponding author: pawandeep singh matharu, e-mail: pawandeep.matharu@utdallas.edu 1. introduction in recent times, dealing with inhospitable environments has become inevitable. the advancement of technology and the increase in demand have forced humans to consider exploration, operation, and data collection in places that are nearly impossible for them to operate in unless they are equipped with expensive and, in some cases, heavy protective equipment. human activities are especially restrained in an underwater medium, where they face environmental limitations such as extreme temperatures (4 °c to 1 °c), radiation, and extreme pressure at depths of over 10,000 ft, as well as physical limitations such as the time that can be spent underwater, size, and danger from underwater creatures. adaptability is needed to deal with the numerous obstacles presented to humans in an aquatic environment; thus, the integration of underwater soft robotics is highly critical and essential for the success of human exploration of the ocean.
any soft robot's development requires improvement in its locomotion, size, weight, flexibility, and control. combining soft materials and artificial muscles along with sensors is the steppingstone of the next generation of smart biomimetic robots, making them highly compliant. this part of the development stage is critical and requires many iterations and analyses. the key challenges now are control, movement, and stiffness modulation; control of stiffness is highly essential in soft robots [1]-[4]. abstract silent underwater actuation and object detection are desired for certain applications in environmental monitoring. however, several challenges need to be faced when addressing simultaneously the issues of actuation and object detection using a vision system. this paper presents a swimming underwater soft robot inspired by the moon jellyfish (aurelia aurita) species and other similar robots; however, this robot uniquely utilizes novel artificial muscles and incorporates a camera for visual information processing. the actuation characteristics of the novel artificial muscles in water are presented, which can be used for any other application. the bio-inspired robot, jelly-z, has the following characteristics: (1) the integration of three 60 mm-long twisted and coiled polymer fishing line (tcpfl) muscles in a silicone bell to achieve the contraction and expansion motions for swimming; (2) a jevois camera is mounted on jelly-z to perform object detection while swimming, using a pre-trained neural network; (3) jelly-z weighs a total of 215 g with all its components and is capable of swimming 360 mm in 63 seconds. the present work shows, for the first time, the integration of camera detection and tcpfl actuators in an underwater soft jellyfish robot, and the associated performance characteristics.
this kind of robot can be a good platform for monitoring the aquatic environment, either by detecting objects (estimating the percentage of similarity to the classes of a pre-trained network) or, when fully developed, by mounting sensors to monitor water quality. recently, researchers have also taken advantage of soft materials to explore the mariana trench (a depth of 11,034 m underwater) [5]. many works have been presented which replicate the movement and behaviour of animals using synthetic materials to achieve robots with a high degree of freedom and variability in stiffness, such as 3d printed musculoskeletal joints [6], [7], biomimetic octopus-like tentacles [8]-[10], robotic fish [11]-[13], and, most importantly, jellyfish-like robots [14]-[16]. the jellyfish is considered the most efficient swimmer in the ocean [17]; with its highly flexible and deformable bell, it can propel itself long distances while exerting very little energy. robertson et al. presented a robot with jet propulsion similar to the jellyfish, inspired by scallops [18]. origami-inspired robots have also been investigated [19]. others have presented a "soft growing" robot that can be controlled and actuated with a pneumatic actuation mechanism [20]. reviews have addressed the challenges of maintaining linear assets by effectively utilizing autonomous robots, which have been shown to reduce maintenance costs, human involvement, etc. [21]. however, the research focus in our work is on the jellyfish for its geometrical simplicity and its swimming advantages.
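the "percentage of similarity" mentioned above can be illustrated with a generic softmax post-processing step over a network's raw class scores; the class names and score values here are made up for illustration, and this is not the specific network shipped on the jevois camera:

```python
import numpy as np

def similarity_percentages(logits, class_names):
    """turn a detection network's raw class scores into per-class
    similarity percentages via a numerically stable softmax."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # subtract max for stability
    p = np.exp(z) / np.exp(z).sum()
    return {name: 100.0 * prob for name, prob in zip(class_names, p)}

# e.g. scores for three hypothetical classes:
# similarity_percentages([2.0, 0.5, 0.1], ["fish", "diver", "debris"])
```

the class with the largest score receives the largest percentage, and the percentages always sum to 100, which is what makes them readable as a similarity estimate.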
some include jellyfish-like robots actuated by shape memory alloys (smas) [14], [22], twisted and coiled polymers (tcps) [15], dielectric elastomers (des) [23], [24], pneumatics [25], ionic polymer metal composites (ipmcs) [26], and hydrogen fuel [16]. one aspect that is missing in the literature is the integration of sensors and the lack of object detection capabilities underwater, except for katzschmann et al. [13] and lm-jelly, which utilizes a magnetic field and an electromagnetic actuator [27]. haines et al. showed that twisted and coiled polymer muscles made from fishing line (tcpfl) can be very promising for robotic applications [28]. these muscles can exhibit large displacements in response to heating, which considerably decreases the stiffness of the tcp muscles [29]. the thermally induced untwisting of the fiber in the coiled structure allows for tensile and torsional actuation in tcp muscles [30]. wu et al. [31] developed a novel mandrel-coiled tcpfl muscle for actuating a musculoskeletal system and showed that these types of actuators can be used for other soft robotic applications. however, these mandrel-coiled tcpfl actuators have low blocking force/high strain; hence, taking into account the application considered in this work, we fabricated self-coiled tcpfl actuators [6], [10] with large blocking force and enough displacement to actuate the soft robot. we seek to present a fully functional swimming underwater robot, jelly-z, that can be used for applications such as underwater monitoring and data collection. to this end, in this work, we present jelly-z, as shown in figure 1, inspired by the geometry of the moon jellyfish and other similar robots, but actuated by twisted and coiled polymer fishing line muscles. it is the first jellyfish-like soft robot that is equipped with a camera with object detection capabilities and actuated by artificial muscles at the same time.
these two features make this robot stand out, as it attempts to address the fundamental problems in actuation, design, and the use of vision systems for soft robots to be deployed in eco-friendly underwater missions. the main features of this robot are: • it mimics a bow (jellyfish bell) and string (tcpfl) arrangement for the actuation mechanism. • it swims underwater without noise or vibration. • it is easy to fabricate and lightweight. • it is equipped with a camera for surveillance and object detection underwater. the highlights of the paper can be listed as follows. first, the detailed design and fabrication of jelly-z using tcpfl to mimic the movement of the moon jellyfish is shown. second, the fabrication process and characterization results of twisted and coiled polymer fishing line (tcpfl) muscles in an underwater environment are presented. third, a successful vertical swimming experiment of the soft jelly-z robot is reported, including swimming analysis and object detection results. the contribution of this work in measurement and estimation is that the proposed small bioinspired underwater soft robot, equipped with a small camera, utilizes unique artificial muscles (which are silent in actuation, easily manufacturable, and have sufficient actuation properties) for swimming. it is able to detect different kinds of objects in its surroundings while swimming in water, estimating the percentage of similarity to the objects the camera/robot is trained for. in addition, we provide the characteristics of the artificial muscles (tcpfl) in water, which can be used for other similar applications. 2. design and fabrication of jelly-z the main body of jelly-z is fabricated from a round silicone bell (diameter 130 mm). spring steel strips are added to provide stiffness to the attached artificial muscles [15], [32]. it also contains a jevois smart camera and a piece of foam for buoyancy. a rendered image of the assembled robot is shown in figure 1(a and b). figure 1.
(a) cad design showing the major components of the jelly-z robot, (b) top view. 2.1. assembly of jelly-z robot unlike many rigid and complex underwater robots and rovs, jelly-z can be fully assembled in six simple steps. first, prepare the steel springs: the 120-micron steel spring is cut into 120 mm x 10 mm strips. figure 2(1) shows a snapshot of one of the steel springs used in the robot; a total of six strips is required. second, since the tcpfl muscles are directly attached to the steel strips by crimping, insulating tape is added to prevent any current transfer between the actuating muscles and the steel strips. moreover, electrical wires are placed between the tape and the steel strips to keep them fixed within the robot, as shown in figure 2(2). third, a 3d printed (abs plastic) mould (figure 2(3)) is used to fabricate the silicone bell. the mould is prepared by cleaning its surface and spraying it with a non-stick compound. fourth, align the steel strips in the designed directions, as shown in figure 2(4). fifth, pour the silicone mix (ecoflex 00-10, 1:1 ratio of part a and part b) into the mould (figure 2(5)) and allow it to cure for a minimum of four hours. finally, integrate the tcpfl muscles into the bell by attaching them to the steel springs. the artificial muscles must be stretched (pre-stressed) to create gaps between each pitch, which allow them to contract while heating. then attach both the foam and the camera to the top of the jellyfish robot, as shown in figure 2(6). the foam is used to set the neutral buoyancy of jelly-z, which was calculated based on its experimentally measured volume. the camera is also sealed and waterproofed before attaching. 2.2. fabrication of twisted and coiled polymer fishing line (tcpfl) muscles the fabrication process of tcpfl actuators is simple, scalable, and allows the user to easily manipulate the actuator's properties such as resistance, diameter, and length.
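the neutral-buoyancy condition mentioned above can be sketched numerically: the foam must add just enough displaced volume that the total weight of the assembly equals the buoyant force. the following python sketch illustrates the calculation; the foam density and robot volume are hypothetical placeholder values, and only the 215 g robot mass comes from the paper.

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water
RHO_FOAM = 50.0     # kg/m^3, hypothetical closed-cell foam density

def foam_volume_for_neutral_buoyancy(robot_mass, robot_volume,
                                     rho_water=RHO_WATER, rho_foam=RHO_FOAM):
    """Foam volume (m^3) that makes the assembly neutrally buoyant.

    Condition: robot_mass + rho_foam*V = rho_water*(robot_volume + V),
    solved for V.
    """
    return (robot_mass - rho_water * robot_volume) / (rho_water - rho_foam)

# example with a hypothetical displaced volume of 200 mL for the 215 g robot
v_foam = foam_volume_for_neutral_buoyancy(0.215, 200e-6)  # ~16 mL of foam
```

in practice the foam is trimmed experimentally, but this balance explains why the foam volume was derived from the measured robot volume.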
this fabrication process was presented in detail in our previous work [6], [7], [10]. it is done in-house using a setup that includes two stepper motors, a controller, a power supply, and a computer. the fabrication consists of four major steps: (1) inserting a full twist into the fishing line fibre; (2) incorporating the nichrome wire; (3) coiling the wire and nichrome together; and (4) annealing in the oven to allow the actuator to maintain its coiled structure. to ensure that all the actuators perform and behave the same, two long fishing line actuators were fabricated and later cut into the desired shorter lengths for the jellyfish. in this case, two 130 mm fishing line actuators were made and cut into shorter 60 mm lengths. the precursor fibre is an 80-lb nylon 6,6 monofilament (0.8 mm in diameter) purchased from eagleclaw. the conductive nichrome wire is 160 µm in diameter, purchased from mcmaster-carr. the coiling speed was kept at 150 rpm. figure 3 shows the fabrication setup. stepper motor 1 (sm1), located at the top, is used to insert the twist and coil the fibres. stepper motor 2 (sm2) is used to guide the coiling of the nichrome wire along the length of the fishing line fibre. the speed of sm2 (150 rpm) is critical, as it controls the pitch and the amount of nichrome that is incorporated, which determines the final electrical resistance of the actuator. 3. isotonic testing of tcpfl in an underwater environment 3.1. characterization setup isotonic testing is one of the most important characterization processes for identifying how the muscle behaves when it is heated and then cooled under a constant load. this test can be performed in both air and underwater environments to fully mimic the muscle's true actuation condition.
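since the amount of nichrome wound along the fibre sets the actuator's final resistance, a rough sanity check is possible from the wire geometry alone. the sketch below assumes a typical room-temperature nichrome resistivity of about 1.1 µΩ·m (a handbook value, not stated in the paper) and uses the 160 µm wire diameter reported above.

```python
import math

RHO_NICHROME = 1.1e-6   # ohm*m, assumed handbook value for nichrome 70/30
D_WIRE = 160e-6         # m, nichrome wire diameter from the paper

def wire_resistance(length, diameter=D_WIRE, resistivity=RHO_NICHROME):
    """DC resistance of a round resistance wire: R = rho * L / A."""
    area = math.pi * (diameter / 2.0) ** 2
    return resistivity * length / area

def wire_length_for_resistance(target_r, diameter=D_WIRE,
                               resistivity=RHO_NICHROME):
    """Wire length needed to reach a target resistance (inverse of the above)."""
    area = math.pi * (diameter / 2.0) ** 2
    return target_r * area / resistivity

# nichrome length implied by the 60-ohm, 60 mm actuator: roughly 1.1 m
L_wire = wire_length_for_resistance(60.0)
```

about a metre of nichrome wound into a 60 mm coil is plausible given the tight pitch set by sm2, and it shows why the sm2 speed directly controls the final resistance.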
this setup, shown in figure 4 (left), includes a power supply for joule heating, an ni daq 9219 to record the temperature change using thermocouples, and a camera along with the tracker physics program to measure the displacement in an underwater testing environment. 3.2. characterization results figure 5 presents the results of the characterization experiments conducted on the tcpfl muscles in water. figure 4 (right) shows a zoomed-in snapshot of the tcpfl in its unloaded and loaded states. the length of the muscle is 60 mm (the same as in the robot), the diameter d is 2.5 mm, and the resistance r is 60 ω. the aim of this experiment is to test the effect of different input currents on the tcpfl actuator in an underwater environment. the properties of the tcpfl muscles in water are shown in table 1. figure 2. schematic showing the fabrication steps of the jelly-z robot. figure 3. schematic diagram of the tcpfl muscle fabrication process: (left) twisting process, (middle) nichrome incorporation process and (right) coiling process. the twisting and coiling protocol is similar to wu et al. [7], hamidi et al. [6] and almubarak et al. [10]. figure 4. (left) isotonic test experimental setup, (right) zoomed-in image of the tcp fishing line muscle, without and with pre-stress. a 500 g weight is attached to the free end of the tcpfl muscle while the other (top) end is fixed in a glass cylinder filled with 5.5 gallons of water. copper wires are connected to both ends of the muscle and to a power supply. the thermocouple is directly connected to the actuator on one end and to the ni daq 9219 on the other. lastly, a camera is placed at the free end of the muscle to record the displacement. a labview program collects and saves all the data at a frequency of 10 hz for analysis by the user.
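the quantities reported in the next section follow directly from the logged channels: joule-heating power is the product of the measured voltage and current, and tensile strain is the tracked displacement normalized by the 60 mm muscle length. a minimal post-processing sketch (the sample values are hypothetical, chosen only to match the orders of magnitude reported later):

```python
def electrical_power(voltage, current):
    """Instantaneous joule-heating power, P = V * I, per logged sample."""
    return [v * i for v, i in zip(voltage, current)]

def actuation_strain(displacement_mm, initial_length_mm=60.0):
    """Tensile actuation strain (%) relative to the 60 mm muscle length."""
    return [100.0 * d / initial_length_mm for d in displacement_mm]

# hypothetical samples near steady state at the 0.75 A input level
p = electrical_power([64.8, 65.1], [0.75, 0.75])   # roughly 48-49 W
s = actuation_strain([4.2, 4.2])                   # roughly 7 % strain
```

the same two formulas applied over the full 10 hz log produce the time-domain plots of figure 5.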
the open-source tracking program (tracker) captures the actuation displacement from the recording, and the data are plotted in matlab 2021b as shown in figure 6. the maximum temperature and actuation strain were measured experimentally at three different input currents (0.45 a, 0.55 a, 0.75 a), as shown in figure 5 (a). the maximum actuation strain is ~7 % (figure 5 (d)), while the highest temperature reached is ~62 °c (figure 5 (c)). the highest voltage consumption is ~65 v (figure 5 (b)), which makes the power consumption ~48 w. 4. jelly-z swimming experiment for efficient propulsion of the robot, pressure gradients are generated during each contraction and relaxation cycle across the bell margin of the jelly-z bot, which produces an upward movement. during each relaxation cycle there is a slight sinking of the robot due to the reverse movement of the bell. figure 6 shows the total vertical distance jelly-z swam in a 70-gallon fish tank (92 x 58.5 x 46 cm). the jellyfish robot takes 21 actuation cycles to swim 360 mm vertically in 63 seconds. a camera mounted on a tripod, recording at 60 fps, is used to track jelly-z while it swims. the open-source tracking program (tracker physics) is employed to measure the distance travelled by the robot, and the data are plotted in matlab 2021b. the background stripes are taken as a measurement reference, as each stripe (white and black in figure 6(a)) is 8 mm wide. three muscles of 60 mm length (60 ω resistance each) are used to make the robot swim, with an input current of 1.8 a for 1.5 s of heating (contraction cycle) and 1.5 s of cooling (relaxation cycle, 0 a). the velocity of the robot is slower for the first 10 cycles of operation and increases as the robot approaches the surface of the water. this is due to the air bubbles that form underneath the bell as the water reacts with the high power supplied to the muscles.
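the per-cycle velocity traces of figure 6 can be recovered from the tracked position samples by numerical differentiation. a central-difference sketch (the position data passed in would come from the tracker export; the sanity check below uses only the reported totals):

```python
def finite_diff_velocity(t, x):
    """Central-difference velocity estimate from tracker position samples.

    t: sample times (s), x: vertical positions (mm); returns mm/s for the
    interior samples t[1]..t[-2].
    """
    return [(x[k + 1] - x[k - 1]) / (t[k + 1] - t[k - 1])
            for k in range(1, len(x) - 1)]

# sanity check against the reported average: 360 mm in 63 s is ~5.7 mm/s
avg_speed = 360.0 / 63.0
```

the instantaneous velocity within a cycle (~20 mm/s during contraction) is much higher than this average because of the sinking phase during each relaxation.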
figure 6(a) and figure 6(b) show the full assembly of the jelly-z robot in front and bottom perspective views. figure 6(c) (i, ii, iii) shows zoomed-in graphs of the first three cycles: (i) current, (ii) velocity (the velocity amplitude for each actuation cycle reaches ~20 mm/s), and (iii) displacement, for a 0.33 hz actuation frequency (duty cycle 67 %). the total electrical power used to actuate jelly-z is 109.8 w, whereas in air these muscles generally consume about one third of this power. this is because thermal energy is dissipated into the surrounding environment (water); hence, to reach the actuation temperature for muscle movement, the input power must be higher in water than in air. water as a medium also allows rapid cooling of the muscles. as this is the first application of tcpfl muscles in water, more research must be conducted to reduce the heat lost to the surrounding water medium and thus reduce the input power. 5. underwater object detection the experimental setup used to test the design consists of a large fish tank (920 mm x 585 mm x 460 mm) made of transparent glass, a dc power supply unit, a standard laptop, a fully assembled jelly-z bot prototype mounted with a jevois smart camera, and the appropriate wiring/cables for connections. for this setup, we used the jevois a-33 smart camera by jevois smart machine vision (jsmv), weighing ~17 g (figure 7, bottom (a)). the camera (hardware specifications given in table 2) is mounted on top of the jelly-z bot, as shown in the full assembly in figure 7 (top right). the camera combines a camera sensor, an embedded quad-core computer, and a usb video link in a tiny package. the advantage of using the jsmv camera can be explained with the help of the schematic in figure 7 (bottom, (b)). a standard camera outputs the video without any processing and leaves the analysis of the data to the receiver.
a jsmv camera, in contrast, includes a processing unit that processes the video to interpret its contents and provide instant results to the receiver. the hardware specifications are given in table 2 [34]. figure 5. experimental characterization of the tcp fishing line muscle in an underwater environment with a two-step input sequence and a pre-stress loading of 500 g for 3 different input currents (0.45 a, 0.55 a, 0.75 a); (a) current vs. time of the two-step power input; time domain plots for the two-step power input: (b) output voltage, (c) temperature, (d) strain. table 1. twisted and coiled polymer fishing line actuator mechanical and electrical properties. property characteristic/quantity material nylon (6,6) fishing line type of actuation electrothermal type of resistance wire nichrome (nickel 70 %, chromium 30 %) nichrome temperature coefficient of resistance (1/°c) 579 · 10−6 [33] fishing line diameter (mm) 0.8 nichrome diameter (µm) 160 length of the actuator after coiling (mm) 60 diameter of the actuator after coiling (mm) 2 mass of the actuator (kg) 0.4 · 10−3 resistance (ω) 60 heating time (s) 5 cooling time (s) 25 duty cycle (%) 16.6 actuation frequency (hz) 0.33 free strain in water (%) ~7 blocking force in water (n) 5.88 current in water (a) 0.45-0.75 voltage (v) / power in water (w) 50 v / 30 w the jsmv camera is not waterproof as supplied and is not suitable for underwater applications. therefore, in order to make the camera waterproof, we embedded the camera unit inside a silicone rubber mould, as can be seen in figure 7 (b). this allows the camera to function underwater; however, the camera unit has a cooling fan that had to be removed in the process, which limits the operation of the camera to about 2 minutes due to overheating. the software includes pre-loaded object detection modules such as tensorflow, yolo darknet, a matlab module, and python object recognition.
for this setup, we used the yolo darknet module. the standard yolo module provided by jsmv detects up to 1000 different types of objects using a deep neural network, darknet. darknet is an open-source neural network framework written in c and cuda [35]. figure 7(c) shows the architecture of the yolo framework. yolo's architecture is very similar to an fcnn (fully connected neural network). it is a neural network that applies multiple convolutional layers to extract features from images and build learning models. the max-pool layers sub-sample the image features for each of the smaller segments in the image, and the model is trained accordingly. many such layers are implemented in the yolo neural network. other layers, like the fully connected layer, combine the weights associated with the feature properties of the image [36]. figure 6. (top) underwater swimming analysis for the jelly-z robot at 1.8 a input current / 60 v, duty cycle 50 %, for an actuation frequency of 0.33 hz, at different time intervals. (a) front perspective view. (b) bottom perspective view. (bottom) (c, i-iii) input current, velocity profile and distance vs. time graphs for the first three cycles. table 2. hardware specifications of the jevois a-33 smart camera. parameters specifications weight 17 g size 28 cm³ processor allwinner a33 quad core arm cortex a7 processor @ 1.34 ghz with vfpv4 and neon, and a dual core mali-400 gpu supporting opengl-es 2.0 memory 256 mb ddr3 sdram camera sensor 1.3 mp camera with sxga (1280 x 1024) up to 15 fps (frames/s) hardware serial port 5 v or 3.3 v (selected through vcc-io pin) micro serial port connector to communicate with arduino or other embedded controllers power 3.5 watts maximum from usb port; requires usb 3.0 port or y-cable to two usb 2.0 ports led one two-color led: green: power is good; orange: power is good and camera is streaming video frames figure 7. (a) underwater object detection setup. (b) waterproofing of the jevois camera.
(c) autonomous object detection in the jevois camera vs. a standard camera. (d) the cnn architecture of the yolo deep learning model. (e) experimental results of object detection from the jevois camera: (i) 65 % human, (ii) 63 % laptop, (iii) 76 % human, (iv) multiple humans at 57 % and 35 %. the architecture divides the sample image into a grid of size s x s, known as a residual block. each grid cell is responsible for detecting the object centred within it. each grid cell predicts bounding boxes together with their confidence scores. if there is no object in the grid cell, the confidence score remains zero. each bounding box provides five predictions: (x, y, w, h) and a confidence score. (x, y) represents the centre of the box and (w, h) represents its dimensions. the predicted bounding boxes are matched to the true boxes of the objects using intersection over union. this process discards any unnecessary bounding boxes that do not match the objects' characteristics (such as height and width). the final detections are unique bounding boxes that correctly fit the objects [37]. the results of the object detection experiment can be seen in figure 7(e). the camera mounted on the jelly-z bot, while submerged under water, can detect humans passing in front of the fish tank. the camera was also able to identify a laptop during this experiment. to test the detection of multiple objects, two people walked next to the fish tank and, as seen in figure 7(e), the camera was able to distinguish between the two people and detect them both separately. the potential of underwater detection is enormous, with applications in civil and military domains. 6. conclusions and future work we presented a fully functional underwater jellyfish-like robot, jelly-z.
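the box-matching step described above can be made concrete with a short sketch: intersection over union between centre-format (x, y, w, h) boxes, followed by a simple non-maximum suppression that keeps only the highest-confidence box among heavily overlapping detections. this is a generic illustration of the idea, not the jevois/darknet implementation.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) centre-format boxes."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-confidence box among heavily overlapping ones.

    detections: list of ((x, y, w, h), confidence) tuples.
    """
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in detections:
        if all(iou(box, kb) < iou_threshold for kb, _ in kept):
            kept.append((box, conf))
    return kept
```

applied to the experiment above, this is the mechanism that collapses the many candidate boxes around each person into a single labelled detection per object.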
this work shows the first implementation of actuation solely by self-coiled twisted and coiled polymer fishing line artificial muscles (tcpfl), together with the integration of an object detection system, in a soft underwater robot. we showed the unique design of jelly-z, inspired by the moon jellyfish, and the actuation mechanism using tcps. this design iteration enabled the integration of three tcpfl muscles for the bell contraction and relaxation motion to allow for swimming. jelly-z achieved swimming at a velocity of 5.7 mm/s, traveling 360 mm vertically in 63 s. it could generate an instantaneous velocity of ~20 mm/s per cycle while carrying its own weight of 215 g. it is to be noted that the vertical swimming of this robot can be controlled by turning off the actuation, which allows jelly-z to sink under the action of its own weight. moreover, we presented the fabrication of tcpfl and its underwater isotonic testing. the tcpfl can actuate up to ~7 % in underwater conditions at a power of 109 w while carrying a load of 500 g, almost 1,000 times heavier than its own weight. the tcp muscles are manufactured in-house, and all the muscle integration processes are simple. some of the challenges to improve upon are the life cycle of the tcpfl artificial muscles in water, their high power consumption, and the operation time of the jevois camera underwater. these aspects need further work before the robot can be considered for deployment. regarding the system's behaviour in non-ideal conditions, a control system will need to be developed, equipped with an internal gps and a 3d imu, to help the robot maintain its position when it is pushed or exposed to strong underwater currents. autonomous underwater seakeeping is beyond the scope of the presented work, but it is worth consideration in future projects and applications.
we also equipped jelly-z with the jevois smart camera and tested it for object and human detection in an underwater environment using the yolo darknet object detection algorithm. future work will include theoretical modelling and comparison with the experimental characterization results for the tcpfl actuators. improving the efficiency of the tcpfl actuators is another major task for the future. water tunnel tests have to be conducted to study the generation of vortices while the robot is swimming during the contraction and relaxation motion of each cycle. the thrust force generated has to be characterized, and corresponding simulations of the linear robot motion carried out, to improve the efficiency of the robot design. work also has to be done on enhancing underwater imaging [37] and object detection capabilities for practical applications. the work presented here is an attempt towards developing a better design of a life-like soft robotic jellyfish with no motors or rigid components for actuation. acknowledgement the authors would like to thank dr. ray baughman for his valuable input on the design. we would like to thank the office of naval research for partially supporting this work. supplementary supplementary video is available at the hbs youtube channels; in particular, a combined video showing the structure of the robot, actuation characteristics and object detection: https://youtu.be/2mhchqribjo. references [1] e. g. dos santos, h. richter, design and analysis of novel actuation mechanism with controllable stiffness, actuators 8(1) (2019), art. no. 12, pp. 1-14. doi: 10.3390/act8010012 [2] t. du, j. hughes, s. wah, w. matusik, d. rus, underwater soft robot modeling and control with differentiable simulation, ieee robotics and automation letters 6(3) (2021), pp. 4994-5001. doi: 10.1109/lra.2021.3070305 [3] b. lu, c. zhou, j. wang, y. fu, l. cheng, m.
tan, development and stiffness optimization for a flexible-tail robotic fish, ieee robotics and automation letters 7(2) (2021), pp. 834-841. doi: 10.1109/lra.2021.3134748 [4] y. yang, y. li, y. chen, principles and methods for stiffness modulation in soft robot design and development, bio-design and manufacturing 1(1) (2018), pp. 14-25. doi: 10.1007/s42242-018-0001-6 [5] l. li, x. zheng, r. mao, g. xie, energy saving of schooling robotic fish in three-dimensional formations, ieee robotics and automation letters 6(2) (2021), pp. 1694-1699. doi: 10.1109/lra.2021.3059629 [6] a. hamidi, y. almubarak, y. tadesse, multidirectional 3d-printed functionally graded modular joint actuated by tcpfl muscles for soft robots, bio-design and manufacturing 2(4) (2019), pp. 256-268. doi: 10.1007/s42242-019-00055-6 [7] l. wu, i. chauhan, y. tadesse, a novel soft actuator for the musculoskeletal system, advanced materials technologies 3(5) (2018), art. no. 1700359, pp. 1-8. doi: 10.1002/admt.201700359 [8] m. calisti, m. giorelli, g. levy, b. mazzolai, b. hochner, c. laschi, p. dario, an octopus-bioinspired solution to movement and manipulation for soft robots, bioinspiration & biomimetics 6(3) (2011), art. no. 036002. doi: 10.1088/1748-3182/6/3/036002 [9] j. jiao, sh. liu, h. deng, yu. lai, f. li, t. mei, h. huang, design and fabrication of long soft-robotic elastomeric actuator inspired by octopus arm, in 2019 ieee international conference on robotics and biomimetics (robio), 06-08 december 2019, dali, china, pp. 2826-2832.
doi: 10.1109/robio49542.2019.8961561 [10] y. almubarak, m. schmutz, m. perez, s. shah, y. tadesse, kraken: a wirelessly controlled octopus-like hybrid robot utilizing stepper motors and fishing line artificial muscle for grasping underwater, international journal of intelligent robotics and applications 6 (2022), pp. 543-563. doi: 10.1007/s41315-021-00219-7 [11] z. chen, s. shatara, x. tan, modeling of biomimetic robotic fish propelled by an ionic polymer-metal composite caudal fin, ieee/asme transactions on mechatronics 15(3) (2010), pp. 448-459. doi: 10.1109/tmech.2009.2027812 [12] r. zhang, z. shen, z. wang, ostraciiform underwater robot with segmented caudal fin, ieee robotics and automation letters 3(4) (2018), pp. 2902-2909. doi: 10.1109/lra.2018.2847198 [13] r. k. katzschmann, j. delpreto, r. maccurdy, d. rus, exploration of underwater life with an acoustically controlled soft robotic fish, science robotics 3(16) (2018), pp. 108-116. doi: 10.1126/scirobotics.aar3449 [14] y. almubarak, m. punnoose, n. x. maly, a. hamidi, y. tadesse, kryptojelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy niti actuators, smart materials and structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d [15] a. hamidi, y. almubarak, y. m. rupawat, j. warren, y. tadesse, poly-saora robotic jellyfish: swimming underwater by twisted and coiled polymer actuators, smart materials and structures 29(4) (2020), art. no. 045039. doi: 10.1088/1361-665x/ab7738 [16] y. tadesse, a. villanueva, c. haines, d. novitski, r. baughman, s. priya, hydrogen-fuel-powered bell segments of biomimetic jellyfish, smart materials and structures 21(4) (2012), art. no. 45013. doi: 10.1088/0964-1726/21/4/045013 [17] j. h. costello, s. p. colin, j. o. dabiri, b. j. gemmell, k. n. lucas, k. r. sutherland, the hydrodynamics of jellyfish swimming, annual review of marine science 13 (2021), pp. 375-396. doi: 10.1146/annurev-marine-031120-091442 [18] m.
a. robertson, f. efremov, j. paik, roboscallop: a bivalve inspired swimming robot, ieee robotics and automation letters 4(2) (2019), pp. 2078-2085. doi: 10.1109/lra.2019.2897144 [19] z. yang, d. chen, d. j. levine, c. sung, origami-inspired robot that swims via jet propulsion, ieee robotics and automation letters 6(4) (2021), pp. 7145-7152. doi: 10.1109/lra.2021.3097757 [20] s. grazioso, a. tedesco, m. selvaggio, s. debei, s. chiodini, towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, acta imeko 10(2) (2021), pp. 104-110. doi: 10.21014/acta_imeko.v10i2.1123 [21] d. seneviratne, l. ciani, m. catelani, d. galar, smart maintenance and inspection of linear assets: an industry 4.0 approach, acta imeko 7(1) (2018), pp. 50-56. doi: 10.21014/acta_imeko.v7i1.519 [22] a. villanueva, c. smith, s. priya, a biomimetic robotic jellyfish (robojelly) actuated by shape memory alloy composite actuators, bioinspiration & biomimetics 6(3) (2011), art. no. 036004. doi: 10.1088/1748-3182/6/3/036004 [23] c. christianson, c. bayag, g. li, s. jadhav, a. giri, ch. agba, t. li, m. t. tolley, jellyfish-inspired soft robot driven by fluid electrode dielectric organic robotic actuators, frontiers in robotics and ai 6 (2019), art. no. 126, pp. 1-11. doi: 10.3389/frobt.2019.00126 [24] t. cheng, g. li, y. liang, m. zhang, b. liu, t.-w. wong, j. forman, m. chen, g. wang, y. tao, t. li, untethered soft robotic jellyfish, smart materials and structures 28(1) (2018), art. no. 015019. doi: 10.1088/1361-665x/aaed4f [25] j. frame, n. lopez, o. curet, e. d. engeberg, thrust force characterization of free-swimming soft robotic jellyfish, bioinspiration & biomimetics 13(6) (2018), pp. 64001-64001. doi: 10.1088/1748-3190/aadcb3 [26] s.-w. yeom, i.-k. oh, a biomimetic jellyfish robot based on ionic polymer metal composite actuators, smart materials and structures 18(8) (2009), art.
no. 085002. doi: 10.1088/0964-1726/18/8/085002 [27] j. ye, y.-ch. yao, j.-y. gao, s. chen, p. zhang, l. sheng, j. liu, lm-jelly: liquid metal enabled biomimetic robotic jellyfish, soft robotics, 2022 (in press). doi: 10.1089/soro.2021.0055 [28] c. s. haines, m. d. lima, n. li, g. m. spinks, j. foroughi, j. d. w. madden, s. h. kim, sh. fang, m. jung de andrade, f. göktepe, ö. göktepe, s. m. mirvakili, s. naficy, x. lepró, j. oh, m. e. kozlov, s. j. kim, x. xu, b. j. swedlove, g. g. wallace, r. h. baughman, artificial muscles from fishing line and sewing thread, science 343(6173) (2014), pp. 868-872. doi: 10.1126/science.1246906 [29] y. tadesse, l. wu, f. karami, a. hamidi, biorobotic systems design and development using tcp muscles, proc. spie 10594, electroactive polymer actuators and devices (eapad) xx, 1059417 (2018). doi: 10.1117/12.2300943 [30] j. a. lee, n. li, c. s. haines, k. j. kim, x. lepró, r. ovalle-robles, s. j. kim, r. h. baughman, electrochemically powered, energy-conserving carbon nanotube artificial muscles, advanced materials 29(31) (2017), art. no. 1700870. doi: 10.1002/adma.201700870 [31] l. wu, i. chauhan, y. tadesse, a novel soft actuator for the musculoskeletal system, advanced materials technologies 3(5) (2018), art. no. 1700359, pp. 1-8. doi: 10.1002/admt.201700359 [32] y. almubarak, m. punnoose, n. x. maly, a. hamidi, y. tadesse, kryptojelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy niti actuators, smart materials and structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d [33] matweb, nichrome 70-30 medium temperature resistor material. online [accessed 23 august 2022] http://www.matweb.com [34] laurent itti, jevois smart machine vision camera (2016). online [accessed 23 august 2022] http://jevois.org/ [35] j. du, understanding of object detection based on cnn family and yolo, journal of physics: conference series 1004 (2018), art. no. 012029.
doi: 10.1088/1742-6596/1004/1/012029 [36] m. asyraf, i. isa, m. marzuki, s. sulaiman, c. hung, cnn-based yolov3 comparison for underwater object detection, journal of electrical & electronic systems research 18 (2021), pp. 30-37. doi: 10.24191/jeesr.v18i1.005 [37] w. gai, y. liu, j. zhang, g. jing, an improved tiny yolov3 for real-time object detection, systems science & control engineering 9(1) (2021), pp. 314-321. doi: 10.1080/21642583.2021.1901156
continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions acta imeko issn: 2221-870x december 2021, volume 10, number 4, pp. 132-139
nicola giulietti1, paolo chiariotti2, gloria cosoli1, giovanni giacometti1, luca violini1, alessandra mobili3, giuseppe pandarese1, francesca tittarelli3,4, gian marco revel1 1 department of engineering and mathematical sciences (diism), v. brecce bianche, 60131, ancona, italy 2 department of mechanical engineering, v. la masa 1, 20156, milano, italy 3 department of materials, environmental sciences and urban planning (simau), instm research unit, v. brecce bianche, 60131, ancona, italy 4 institute of atmospheric sciences and climate, national research council (isac-cnr), v. piero gobetti 101, 40129, bologna, italy section: research paper keywords: concrete health monitoring; electrical impedance; remote monitoring; distributed sensor network citation: nicola giulietti, paolo chiariotti, gloria cosoli, giovanni giacometti, luca violini, alessandra mobili, giuseppe pandarese, francesca tittarelli, gian marco revel, continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions, acta imeko, vol. 10, no. 4, article 22, december 2021, identifier: imeko-acta-10 (2021)-04-22 section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy received july 26, 2021; in final form december 5, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: endurcrete (new environmental friendly and durable concrete, integrating industrial by-products and hybrid systems, for civil, industrial, and offshore applications) project, funded by the european union's horizon 2020 research and innovation programme under grant agreement n° 760639.
abstract: the continuous monitoring of cement-based structures and infrastructures is fundamental to optimize their service life and reduce maintenance costs. in the framework of the endurcrete project (ga no. 760639), a remote monitoring system based on electrical impedance measurements was developed. electrical impedance is measured according to the wenner method, using 4-electrode arrays embedded in the concrete during casting and selecting alternating current as excitation, to avoid polarization of both the electrode/material interface and the material itself. with this measurement, it is possible to promptly identify events related to the ingress of contaminants or to damage (e.g. crack formation). conductive additions are included in some elements to enhance the signal-to-noise ratio, as well as the self-sensing properties of the concrete. specifically, a distributed sensor network was implemented, consisting of measurement nodes installed in the elements to be monitored and connected to a central hub (rs-232 protocol). the nodes are realized with an embedded unit for electrical impedance measurements (eval-ad5940bioz board with ad5940 chip, by analog devices) and a digital thermometer (ds18b20 by maxim integrated), enclosed in cabinets filled with an ip68 gel against moisture-related problems. data are available on a cloud through a wi-fi network or an lte modem, and can hence be accessed remotely via a user-friendly multi-platform interface. corresponding author: gloria cosoli, e-mail: g.cosoli@staff.univpm.it 1. introduction with a view to optimizing the costs of the maintenance operations needed to guarantee a service life as long and efficient as possible for cement-based structures, structural health monitoring (shm) is paramount [1]. in fact, inspections alone are not sufficient to enable the timely actions needed to cope with possible damage. monitoring structures and infrastructures remotely is undoubtedly advantageous, since this process provides continuous data, often available in real time, allowing prompt, proactive action to avoid the risk of damage to the target structure. solutions involving iot-based approaches [2] have also been proposed recently, including in the context of smart-city applications [3]. these approaches have proved very promising, given their distributed nature and connectivity characteristics, which make them well suited to cloud analyses [4]. indeed, it is possible to monitor the factors threatening the durability of a structure, intended as “the ability of concrete to resist weathering action, chemical attack, and abrasion while maintaining its desired engineering properties” (american concrete institute, aci [5]). particularly aggressive exposure conditions (e.g. marine environments) and contaminants (e.g. chlorides and sulphates) undermine the durability of structures [6], requiring interventions that become more expensive the longer the time since the damage occurred (the “law of fives” by de sitter states that repair costs increase exponentially after the structure is damaged [7]). these events modify the composition and morphology of the concrete element, resulting in changes detectable through different techniques, such as ultrasound [8], [9], computer vision [10], [11], thermography [12], [13], ground penetrating radar (gpr) [14], [15] and electrical resistivity [16], to name only a few. the use of electrical resistivity/impedance (with/without cell-constant scaling) measurements for shm has grown widely [17] and achieved particular success with the introduction of conductive fillers (e.g. char, carbon black, graphene nanoplatelets, nickel powder, graphite powder, iron oxide, titanium dioxide, etc.) and fibres (e.g. virgin and recycled carbon fibres, steel fibres, carbon nanotubes, etc.) to enhance the self-sensing ability of concrete.
the resulting lower electrical resistivity of the material allows a higher signal-to-noise ratio (snr) to be obtained, as well as the use of low-cost instrumentation providing accuracy levels comparable to those of laboratory equipment. consequently, the possibility of exploiting more affordable electronic equipment paves the way for distributed sensor networks monitoring cement-based structures in the areas most subjected to stress and to the penetration of contaminants, and hence to damage and degradation. indeed, the electrical resistivity/impedance measurement is a local measurement, with a “sensing volume” corresponding to the hemisphere whose radius equals the inter-electrode distance; sensor positioning therefore becomes of the utmost importance to obtain significant data. furthermore, this type of measurement is particularly attractive since it allows a plethora of aspects to be monitored: the ingress of contaminants [18], [19] (electrical resistivity being linked to the material’s ability to transport ions, which are responsible for many degradation processes [20]), the penetration of water (particularly relevant since it carries ions and aggressive substances, such as chlorides and sulphates), changes in temperature and moisture content, the presence of stress, the formation of cracks, the corrosion of reinforcements, etc. therefore, electrical resistivity/impedance can be assumed as an essential parameter for monitoring the health status of cement-based structures. to perform electrical impedance measurements, a proper excitation signal should be applied to the material; this can be done by applying an electric current/potential, measuring the corresponding electric potential/current and finally computing the resulting electrical impedance.
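for the four-electrode (wenner) configuration adopted in this work, the measured potential/current ratio maps to an apparent resistivity through the cell constant 2πa, where a is the electrode spacing. a minimal sketch of this relation (function name and example values are purely illustrative):

```python
import math

def wenner_apparent_resistivity(v_measured, i_injected, spacing_m):
    """apparent resistivity (ohm*m) from the wenner four-electrode method:
    rho = 2 * pi * a * V / I, with a the electrode spacing in metres."""
    return 2.0 * math.pi * spacing_m * v_measured / i_injected

# example: 50 mV across the inner pair for 1 mA injected, 4-cm spacing
rho = wenner_apparent_resistivity(0.050, 0.001, 0.04)  # ~12.57 ohm*m
```

the same cell constant, applied in reverse, is what the "cell constant scaling" mentioned above refers to when converting between impedance and resistivity readings.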
electrodes are necessary to carry out the measurement; different materials and configurations are reported in the literature, and either direct or alternating current (dc and ac, respectively) can be used, although the resulting measurements will differ [6]. indeed, there are no widely accepted standards for this measurement in concrete, even if recommendations are available [21], [22]. in particular, in order to avoid both material polarization and electrode/material interface polarization (the first caused by dc, the second caused by using the same two electrodes for both excitation and measurement, giving the so-called “insertion error”), the wenner method [23] should be adopted, using four electrodes (two for the material excitation, the other two for the measurement) and ac with a frequency greater than 1 khz [21], [24]. in order to monitor all these aspects remotely and continuously, a dedicated monitoring system was developed within the endurcrete project (ga no. 760639); in particular, both the electrical impedance and the temperature of the concrete elements are measured. the developed system was tested in a spanish demo site, as described in detail below. the paper is organised as follows: the architecture of the remote measurement system developed to monitor the health status of cement-based structures is presented in section 2; the results of laboratory tests for the development of the monitoring system and preliminary results of long-term monitoring activities are reported in section 3; in section 4 the authors provide their comments and conclude the work. 2. materials and methods 2.1. single node design the gold-standard measurement systems for electrical impedance measurements are galvanostats/potentiostats; their cost makes their in-field use unfeasible, so alternative sensors should be sought. moreover, given the “local” nature of the measurement, several sensing nodes should be considered to cover the areas prone to damage.
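since the sensing volume is approximately a hemisphere whose radius equals the inter-electrode distance, coverage grows with the cube of the spacing. a quick check for the two spacings used later in this paper (1 cm and 4 cm), under that hemispherical approximation (helper name illustrative):

```python
import math

def sensing_volume_cm3(spacing_cm):
    """volume of a hemisphere whose radius equals the electrode spacing."""
    return (2.0 / 3.0) * math.pi * spacing_cm ** 3

small = sensing_volume_cm3(1.0)  # 1-cm array: ~2.1 cm^3
large = sensing_volume_cm3(4.0)  # 4-cm array: 64 times larger
```

this cubic scaling is one reason why several small-spacing nodes, rather than a single probe, are needed to cover the damage-prone areas.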
the sensing nodes developed go in this direction by exploiting an embedded unit (eval-ad5940bioz board by analog devices) controlling the ad5940 chip for electrical impedance measurements (a similar chip, the ad5933, has already been proved accurate for damage detection through electrical impedance measurements [25]). the unit was set to carry out electrical impedance measurements according to the electrochemical impedance spectroscopy (eis) method in galvanostatic configuration, using ac to excite the material (frequency range: 1 khz – 20 khz; resolution: 1 khz; signal amplitude: 600 mvpp, with the flowing electric current limited through a current-limiting resistor to comply with the iec 60601 standard, given that the acquisition board employed was originally designed for measurements on the human body). moreover, a digital thermometer with stainless-steel housing was included to monitor the internal temperature of the concrete element, specifically the ds18b20 by maxim integrated (temperature range: -55 °c to +125 °c; accuracy: ±0.5 °c from -10 °c to +85 °c; 1-wire bus communication). the electronic board was placed inside an ip66-certified electrical box, further filled with a self-sealing, cold-cross-linking polymer insulating gel to guarantee ip68 performance. the metrological performance of the sensing nodes for the eis measurements was tested at laboratory level by comparison with a potentiostat/galvanostat used in galvanostatic configuration (gamry reference 600 by gamry instruments, considered the gold-standard instrument). in particular, the comparison was made in terms of the real part of the electrical impedance, mainly for two reasons: (i) it is directly related to electrical resistivity in the case of a purely resistive behaviour, according to the second ohm’s law; (ii) it depends on several parameters of interest (e.g.
moisture content and water penetration [26], chlorides penetration [18], [27], porosity [28], carbonation depth, cracks formation [29], curing state [30]), given that it is linked to the ionic movement in the material and to the durability itself [6], [31]. laboratory acquisitions were performed on a concrete block (35 × 35 × 20 cm3) specifically manufactured for this testing activity, using the same mix design adopted for the in-field validation phase. the concrete mix design was as follows: 375 kg/m3 of cement, 908 kg/m3 of calcareous sand 0 mm – 4 mm, 362 kg/m3 of intermediate gravel 5 mm – 10 mm, 618 kg/m3 of coarse gravel 10 mm – 15 mm, 169 kg/m3 of water, 0.55 kg/m3 and 1.27 kg/m3 of pc2 and pc3 superplasticizers, respectively. the concrete block was manufactured embedding both a temperature sensor and two 4-electrode arrays (for performance comparison) for electrical impedance measurement (figure 1). stainless steel was used for manufacturing the electrodes. furthermore, another 35 × 35 × 20 cm3 concrete specimen was manufactured hosting two pairs of electrode arrays with different spacings, namely 1 cm and 4 cm, fixed at a depth of 5 cm and directed downwards and upwards, respectively (figure 2); a temperature sensor was also embedded in this specimen. the results of these laboratory tests allowed the setting parameters of the monitoring system to be fine-tuned, and helped to better understand the factors influencing the results (e.g. panel positioning). it is worth underlining that the tests were performed at 20 ± 1 °c ambient temperature and 50 ± 5 % relative humidity. 2.2. monitoring system design given the targeted distributed nature of the monitoring system, a star-network architecture was selected. each measurement node is connected to a central gateway acting as data collector (figure 3). serial communication (rs-232) was used as the communication protocol.
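a minimal sketch of such a star-network collector, assuming (as described below) a python service that polls each node over the serial link and caches the readings locally in sqlite; the rs-232 exchange is stubbed with a hypothetical query_node(), and the table schema and field names are illustrative:

```python
import sqlite3
import time

# stub for the rs-232 request/response to one node (in practice this would
# use a serial library such as pyserial); fixed values for illustration only
def query_node(node_id):
    return {"node": node_id, "freq_hz": 10_000, "z_real": 150.0,
            "z_imag": -12.0, "temp_c": 12.5, "ts": int(time.time())}

def store(conn, row):
    """append one reading to the local cache (named-parameter insert)."""
    conn.execute(
        "INSERT INTO measurements (node, freq_hz, z_real, z_imag, temp_c, ts) "
        "VALUES (:node, :freq_hz, :z_real, :z_imag, :temp_c, :ts)", row)
    conn.commit()

conn = sqlite3.connect(":memory:")  # the real gateway would use an on-disk file
conn.execute("CREATE TABLE measurements (node TEXT, freq_hz INTEGER, "
             "z_real REAL, z_imag REAL, temp_c REAL, ts INTEGER)")
# one polling pass over two example nodes; the gateway repeats this hourly
for node_id in ("tu_03_1l", "tu_03_3s"):
    store(conn, query_node(node_id))
```

synchronising the local cache to the cloud database can then run as a separate task, so a connectivity outage never loses readings.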
the central gateway also hosts a 16-channel usb to db9 rs-232 serial adapter hub, an intel® nuc mini-pc and an lte industrial router. a python-based back-end service installed on the nuc sends a serial request to each node every hour and then receives the acquired data. the gateway stores data both locally, in an sqlite database, and on the cloud (aws amazon ec2 server), exploiting the postgresql engine. data can be accessed remotely through a dedicated multi-platform application developed within the endurcrete project in the dart language, exploiting the google flutter framework for android, ios, linux and windows environments. figure 1. first concrete panel prototype: configuration scheme (legend: 4-electrode array, 4-cm spacing; temperature sensor). figure 2. second concrete panel prototype: configuration scheme (legend: 4-electrode array, upwards, 4-cm spacing; 4-electrode array, downwards, 1-cm spacing; temperature sensor). figure 3. monitoring system with star-network architecture, with edge devices connected to the central hub, collecting data and sending them to a cloud that can be accessed remotely. figure 4. mobile application for monitoring: home page (left), login and registration (centre, top and bottom) and main page (right). as an example, the application running on an android device is shown in the following. at the first access, registration is required with an e-mail address and password; after the login, it is possible to access the main page with different options (figure 4). it is possible to switch between different demo sites (tunnel, tu, and marine, ma – the second has been activated in italy more recently and will be the object of future analyses), then select the panel and the electrode array of interest (for example, the label tu_03_3s stands for tunnel demo site, panel n. 3, sensor n. 3, short (1-cm spacing), whereas the label tu_03_1l stands for tunnel demo site, panel n. 3, sensor n.
1 long (4-cm spacing)). furthermore, it is possible to choose the measurement frequency for the electrical impedance measurement and to select the desired parameters: temperature, magnitude, phase, real and imaginary parts of the electrical impedance; finally, it is possible to pick the monitoring period to be shown (figure 5). figure 5. mobile application for monitoring: selection of demo site, panel and electrode array, measurement frequency and parameters of interest. figure 6. mobile application for monitoring: data saving and graph making. it is possible to save data, send them to the e-mail address provided in the registration phase (with the option to save the data related to all the panels exposed in the demo site), as well as to plot a graph with the selected options (figure 6). 2.3. the concrete panels monitored in the project demo sites the concrete panels exposed in the demo sites of the endurcrete project were manufactured with a low-clinker cement developed within the same project [33], aimed at reducing the carbon footprint, given that the production of ordinary portland cement (opc) is responsible for approximately 8 % of global co2 emissions [32]. two different mix designs were formulated, with and without conductive additions. the developed monitoring system was tested in a spanish demo site representative of a harsh setting: a tunnel in león, an underground environment rich in sulphates. in particular, 2 small panels of 35 × 35 × 20 cm3 (with and without conductive additions), 1 long panel of 200 × 200 × 15 cm3 and another long panel of 200 × 400 × 15 cm3 were exposed (figure 7). 4-electrode arrays were embedded in each small panel, and six in each of the longer panels, in order to obtain a properly distributed sensor network (figure 8).
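the sensor labels described above (e.g. tu_03_3s, tu_03_1l) follow a site_panel_sensor+spacing-class pattern, which a small parser can decode; a sketch assuming that pattern holds for every label (helper names are illustrative):

```python
import re

# site code, panel number, sensor number, spacing class:
# s = short (1-cm spacing), l = long (4-cm spacing)
LABEL_RE = re.compile(
    r"^(?P<site>[a-z]{2})_(?P<panel>\d+)_(?P<sensor>\d+)(?P<kind>[sl])$")

def parse_label(label):
    """decode a sensor label into its site/panel/sensor/spacing fields."""
    m = LABEL_RE.match(label)
    if not m:
        raise ValueError(f"unrecognised label: {label}")
    return {"site": m["site"], "panel": int(m["panel"]),
            "sensor": int(m["sensor"]),
            "spacing_cm": 1 if m["kind"] == "s" else 4}
```

for example, parse_label("tu_03_1l") yields the tunnel site, panel 3, sensor 1, with the 4-cm electrode spacing.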
arrays with different electrode spacings were used, installed at depths respecting the literature recommendation that a minimum distance of twice the contact spacing from the element edge should be guaranteed: • 4-cm spacing, installed at a depth of 16 cm and 10 cm in small and long panels, respectively; • 1-cm spacing, installed at a depth of 4 cm and 5 cm in small and long panels, respectively. the monitoring system configuration is reported in figure 7, and the electrode configuration in figure 8. it is worth mentioning that temperature sensors were embedded in the monitored concrete specimens, whereas the moisture content was not assessed; indeed, distinguishing among the different factors contributing to the results is beyond the scope of the proposed monitoring system, since the main aim is to identify the ingress of contaminants that could threaten the concrete durability. the positioning of the concrete panels in the tunnel is shown in figure 9. figure 7. monitoring system at the demo site in león (spain). figure 8. concrete panels exposed at the demo site in león (spain): no. 2 panels of 35 × 35 × 20 cm3 (left), no. 1 panel of 200 × 200 × 15 cm3 (centre) and no. 1 panel of 200 × 400 × 15 cm3 (right). legend: 4-electrode arrays with 4-cm spacing are reported in blue, 4-electrode arrays with 1-cm spacing in red, temperature sensors in yellow. 3. results and discussion 3.1. laboratory tests the results concerning the validation of the measurement system, in terms of the real part of the electrical impedance measured at 10 khz, are reported in figure 10 (single measurements are compared on each day of the time interval considered, after the initial concrete curing period); measurements were performed until 15 days after casting.
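the node-versus-reference comparison is expressed as an absolute percentage error; a minimal sketch of that metric (the readings below are illustrative, not measured values):

```python
def abs_percentage_error(measured_ohm, reference_ohm):
    """absolute percentage error of a node reading against the reference."""
    return abs(measured_ohm - reference_ohm) / abs(reference_ohm) * 100.0

# hypothetical daily readings of the real part of impedance at 10 kHz:
# node under test vs. gold-standard galvanostat
node_readings = [148.0, 151.0, 160.0]
reference_readings = [150.0, 150.0, 155.0]
errors = [abs_percentage_error(m, r)
          for m, r in zip(node_readings, reference_readings)]
all_within_spec = all(e < 10.0 for e in errors)
```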
it can be noticed that the measurement node developed provides results compatible with those obtained from the gold standard, with absolute percentage errors lower than 10 %. the advantages of the developed node are multiple: • compactness, hence the possibility of installing it close to the element to be monitored: this minimises the cable lengths, consequently reducing their influence on the measurement results, as well as minimising parasitic electrical components; • reduced cost, approximately 3 % of that of the reference equipment; • modularity of the system, which is particularly useful for the development of a distributed sensor network and allows it to be modified at a later time. a comparison was also made concerning the electrode spacing (figure 11). as expected, the real part of the electrical impedance is higher with the shorter electrode spacing, since the sensing volume is smaller, resulting in a higher opposition to the electric current flow. however, both measurements show a trend increasing with curing time, as expected from the literature [16], because of the continuous hydration of the cement paste and the gradual water evaporation. in this way, it is possible to monitor different sensing volumes within the same element, also considering surfaces facing different exposure conditions. some data pre-processing capabilities were also embedded in the system, namely a moving median filter (sliding window: 9 samples) combined with a wavelet decomposition (discrete meyer wavelet of level 5). data were also normalised with respect to the initial value, in order to observe the variations over time independently of the absolute values (which are less significant for monitoring purposes, where variations due to particular events, such as cracks or contaminant penetration, are sought). 3.2.
results from tunnel demo site some examples of the results related to the data acquired from 8th october 2020 to 28th june 2021 (approximately 9 months) are reported in this section. in particular, three sensors are considered: two electrode arrays with 1-cm spacing embedded in two small panels, with (figure 12) and without (figure 13) conductive additions, and one electrode array with 4-cm spacing embedded in a long panel without conductive additions (figure 14). it can be observed that the monitoring system did not show any faults during the demo test activities, and the data were not affected by any issues attributable to the aggressive conditions of the exposure environment, since there are no anomalous spikes or particular increases/decreases due to external phenomena. figure 9. positioning of the concrete panels in the tunnel at the demo site of león (spain); the concrete panels with embedded sensors for the measurement of electrical impedance are 4 (2 small and 2 long). figure 10. comparison between measurement results obtained with the system exploiting the ad5940 chip (yellow data) and the reference equipment (gamry reference 600, blue data) – measurement frequency: 10 khz. figure 11. comparison between results obtained by electrode arrays with different spacings: 1 cm (orange data) and 4 cm (grey data) – measurement frequency: 10 khz. figure 12. results related to a small panel with conductive additions (1-cm spacing electrode array): temperature, electrical impedance magnitude, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz. as expected, the electrical impedance magnitude increases with decreasing temperature; however, it is worth noting that
the underground environment makes the temperature values quite stable, and this is reflected in the electrical impedance trend (no day-night cycles can be identified, either in the temperature or in the electrical impedance signals). in addition, environmental sensors were employed in the demo site to continuously monitor the environmental temperature and relative humidity, which both proved quite stable in the considered monitoring time interval (t = 10 °c – 15 °c and rh = 70 % – 80 %, respectively). in line with what was obtained in the laboratory tests, the electrical impedance is generally higher when the electrode spacing is shorter (compare figure 13 with figure 14), since the electric current flows less easily in the sensing volume, probably because considering a smaller material volume enhances the effects of particularly non-conductive elements (e.g. aggregates). moreover, it is confirmed that the conductive additions lower the electrical impedance of the concrete panels (compare figure 12 with figure 13), hence allowing monitoring through relatively low-cost instrumentation with limited metrological performance. the small panels (figure 12 and figure 13) show a positive phase before normalisation of the signal (with respect to the initial value); this results in an inductive imaginary part, contrary to what is expected from the literature [34]. this is probably due to a capacitive coupling with ground, as also happened during the laboratory tests; in fact, this does not happen in the long panels (figure 14), which lean directly on the ground (figure 9). however, this does not impede the monitoring of those elements and the identification of possible factors hindering durability. the monitoring phase will continue at least until the project end (december 2021), in order to evaluate possible uncertainty sources linked to the ingress of aggressive substances (e.g.
sulphates), which could hinder the durability of the concrete elements. 4. conclusions the ongoing validation phase of the developed monitoring system at the spanish demo site of león is providing interesting results, even though the small variations recorded so far do not highlight any particular event associated with the ingress of contaminants. the measurement system presented in the paper is continuously generating data from the field application, despite the aggressive exposure conditions. it is worth underlining the importance of properly choosing the sensor locations, this being a local measurement with a defined sensing volume; the use of conductive additions allows low-cost modular equipment to be employed, with the possibility of realising a distributed sensor network with sensing nodes in the areas more prone to damage or contaminant ingress, which could impact the durability and efficiency of the structure during its whole life cycle. in the future, the monitoring system will also be exploited in different harsh environments, such as a typical marine site rich in chlorides. acknowledgement this research activity was carried out within the endurcrete (new environmental friendly and durable concrete, integrating industrial by-products and hybrid systems, for civil, industrial, and offshore applications) project, funded by the european union’s horizon 2020 research and innovation programme under grant agreement n° 760639. the authors would like to thank the acciona group for having prepared the concrete specimens to be tested, and nts and sika for having provided aggregates and admixtures, respectively. references [1] m. p. limongelli, shm for informed management of civil structures and infrastructure, j. civ. struct. heal. monit. 10 (2020) pp. 739–741. doi: 10.1007/s13349-020-00439-8 [2] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l.
carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8 (2019) 2, pp. 45–52. doi: 10.21014/acta_imeko.v8i2.640 [3] a. h. alavi, p. jiao, w. g. buttlar, n. lajnef, internet of things-enabled smart cities: state-of-the-art and future trends, measurement 129 (2018) pp. 589–606. doi: 10.1016/j.measurement.2018.07.067 [4] f. zonzini, c. aguzzi, l. gigli, l. sciullo, n. testoni, l. de marchi, m. di felice, t. s. cinotti, c. mennuti, a. marzani, structural health monitoring and prognostic of industrial plants and civil structures: a sensor to cloud architecture, ieee instrum. meas. mag. 23 (2020) pp. 21–27. doi: 10.1109/mim.2020.9289069 [5] the portland cement association, america’s cement manufacturers, durability of concrete, (n.d.). online [accessed 05 december 2021]. https://www.cement.org/learn/concrete-technology/durability figure 13. results related to a small panel without conductive additions (1-cm spacing electrode array): temperature, electrical impedance magnitude, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz. figure 14. results related to a long panel without conductive additions (4-cm spacing electrode array): temperature, electrical impedance magnitude, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz. [6] g. cosoli, a. mobili, n. giulietti, p. chiariotti, g. pandarese, f. tittarelli, t. bellezze, n. mikanovic, g. m.
revel, performance of concretes manufactured with newly developed low-clinker cements exposed to water and chlorides: characterization by means of electrical impedance measurements, constr. build. mater. 271 (2020) 121546. doi: 10.1016/j.conbuildmat.2020.121546 [7] w. r. de sitter, costs of service life optimization “the law of fives”, ceb-rilem workshop on durability of concrete structures, copenhagen, denmark, 18-20 may 1983. in ceb bulletin d'information, vol. 152, comité euro-international du béton, (1984) pp. 131-134. [8] n. epple, d. f. barroso, e. niederleithinger, towards monitoring of concrete structures with embedded ultrasound sensors and coda waves – first results of dfg for coda, in: lect. notes civ. eng., springer science and business media deutschland gmbh, (2021) pp. 266–275. doi: 10.1007/978-3-030-64594-6_27 [9] m. goueygou, o. abraham, j. f. lataste, a comparative study of two non-destructive testing methods to assess near-surface mechanical damage in concrete structures, ndt e int. 41(6) (2008) pp. 448–456. doi: 10.1016/j.ndteint.2008.03.001 [10] c. z. dong, f. n. catbas, a review of computer vision–based structural health monitoring at local and global levels, struct. heal. monit. 20 (2021) pp. 692–743. doi: 10.1177/1475921720935585 [11] d. feng, m. q. feng, computer vision for shm of civil infrastructure: from dynamic response measurement to damage detection – a review, eng. struct. 156 (2018) pp. 105–117. doi: 10.1016/j.engstruct.2017.11.018 [12] s. pozzer, f. dalla rosa, z.m.c. pravia, e. rezazadeh azar, x. maldague, long-term numerical analysis of subsurface delamination detection in concrete slabs via infrared thermography, appl. sci. 11(10) (2021), art. no. 4323. doi: 10.3390/app11104323 [13] p. cotič, d. kolarič, v. b. bosiljkov, v. bosiljkov, z. jagličić, determination of the applicability and limits of void and delamination detection in concrete structures using infrared thermography, ndt e int. 74 (2015) pp. 87–93. 
doi: 10.1016/j.ndteint.2015.05.003 [14] k. tešić, a. baričević, m. serdar, non-destructive corrosion inspection of reinforced concrete using ground-penetrating radar: a review, materials 14(4) (2021), art. no. 975. doi: 10.3390/ma14040975 [15] x. dérobert, g. villain, effect of water and chloride contents and carbonation on the electromagnetic characterization of concretes on the gpr frequency band through designs of experiment, ndt e int. 92 (2017) pp. 187–198. doi: 10.1016/j.ndteint.2017.09.001 [16] a. belli, a. mobili, t. bellezze, f. tittarelli, p. cachim, evaluating the self-sensing ability of cement mortars manufactured with graphene nanoplatelets, virgin or recycled carbon fibers through piezoresistivity tests, sustainability 10(11) (2018), art. no. 4013. doi: 10.3390/su10114013 [17] b. a. de castro, f. g. baptista, f. ciampa, new signal processing approach for structural health monitoring in noisy environments based on impedance measurements, measurement 137 (2019) pp. 155–167. doi: 10.1016/j.measurement.2019.01.054 [18] m. saleem, m. shameem, s. e. hussain, m. maslehuddin, effect of moisture, chloride and sulphate contamination on the electrical resistivity of portland cement concrete, constr. build. mater. 10(3) (1996), pp. 209–214. doi: 10.1016/0950-0618(95)00078-x [19] i.-s. yoon, c.-h. chang, effect of chloride on electrical resistivity in carbonated and non-carbonated concrete, appl. sci. 10(18) (2020), art. no. 6272. doi: 10.3390/app10186272 [20] p. azarsa, r. gupta, electrical resistivity of concrete for durability evaluation: a review, 2017 (2017), art. no. 8453095. doi: 10.1155/2017/8453095 [21] k. r. gowers, s. g. millard, measurement of concrete resistivity for assessment of corrosion severity of steel using wenner technique, mater. j. 96 (1999) pp. 536–541. doi: 10.14359/655 [22] g. cosoli, a. mobili, f. tittarelli, g. m. revel, p. 
chiariotti, electrical resistivity and electrical impedance measurement in mortar and concrete elements: a systematic review, appl. sci. 10(24) (2020), art. no. 9152. doi: 10.3390/app10249152 [23] f. wenner, a method for measuring earth resistivity, j. washingt. acad. sci. 5 (1915) pp. 561–563. online [accessed 05 december 2021] https://nvlpubs.nist.gov/nistpubs/bulletin/12/nbsbulletinv12n4p469_a2b.pdf [24] t.-c. hou, wireless and electromechanical approaches for strain sensing and crack detection in fiber reinforced cementitious materials, university of michigan, ph.d. thesis, 2008. online [accessed 05 december 2021] https://deepblue.lib.umich.edu/bitstream/handle/2027.42/61606/tschou_1.pdf?sequence=1&isallowed=y [25] t. wandowski, p. h. malinowski, w. m. ostachowicz, improving the emi-based damage detection in composites by calibration of ad5933 chip, measurement 171 (2021) art. no. 108806. doi: 10.1016/j.measurement.2020.108806 [26] a. a. ramezanianpour, a. pilvar, m. mahdikhani, f. moodi, practical evaluation of relationship between concrete resistivity, water penetration, rapid chloride penetration and compressive strength, constr. build. mater. 25(5) (2011) pp. 2472-2479. doi: 10.1016/j.conbuildmat.2010.11.069 [27] x. dérobert, j. f. lataste, j. p. balayssac, s. laurens, evaluation of chloride contamination in concrete using electromagnetic non-destructive testing methods, ndt e int. 89 (2017) pp. 19–29. doi: 10.1016/j.ndteint.2017.03.006 [28] j. zhang, z. li, hydration process of cements with superplasticizer monitored by non-contact resistivity measurement, in: proc. adv. test. fresh cem. mater., ed. by h. w. reinhardt, stuttgart, germany, 3-4 august 2006. [29] j. f. lataste, c. sirieix, d. breysse, m. frappa, electrical resistivity measurement applied to cracking assessment on reinforced concrete structures in civil engineering, ndt e int. 36(6) (2003) pp. 383–394. doi: 10.1016/s0963-8695(03)00013-6 [30] n. wiwattanachang, p. h.
giao, monitoring crack development in fiber concrete beam by using electrical resistivity imaging, j. appl. geophys. 75(2) (2011) pp. 294–304. doi: 10.1016/j.jappgeo.2011.06.009 [31] c. g. berrocal, k. hornbostel, m. r. geiker, i. löfgren, k. lundgren, d. g. bekas, electrical resistivity measurements in steel fibre reinforced cementitious materials, cem. concr. compos. 89 (2018) pp. 216–229. doi: 10.1016/j.cemconcomp.2018.03.015 [32] r. m. andrew, global co2 emissions from cement production, earth syst. sci. data. 10 (2018), pp. 195–217. doi: 10.5194/essd-10-195-2018 [33] g. bolte, m. zajac, j. skocek, m. ben haha, development of composite cements characterized by low environmental footprint, j. clean. prod. 226 (2019) pp. 503–514. doi: 10.1016/j.jclepro.2019.04.050 [34] j. torrents dolz, p. juan garcia, a. aguado de cea, electrical impedance as a technique for civil engineer structures surveillance: considerations on the galvanic insulation of samples. online [accessed 05 december 2021]. 
Twisted and Coiled Polymer Muscle Actuated Soft 3D Printed Robotic Hand with Peltier Cooler for Drug Delivery in Medical Management

ACTA IMEKO
ISSN: 2221-870X
September 2022, Volume 11, Number 3, 1-6

Twisted and Coiled Polymer Muscle Actuated Soft 3D Printed Robotic Hand with Peltier Cooler for Drug Delivery in
Medical Management

Rippudaman Singh 1, Sanjana Mohapatra 1,2, Pawandeep Singh Matharu 1, Yonas Tadesse 1,2,3,4

1 Humanoid, Biorobotics, and Smart Systems Laboratory, Mechanical Engineering Department, The University of Texas at Dallas, Richardson, TX 78705, USA
2 Biomedical Engineering Department, The University of Texas at Dallas, Richardson, TX 78705, USA
3 Electrical and Computer Engineering Department, The University of Texas at Dallas, Richardson, TX 78705, USA
4 Alan G. MacDiarmid NanoTech Institute, The University of Texas at Dallas, Richardson, TX 78705, USA

Section: Research Paper

Keywords: robotic hand; artificial muscle; TCP muscles; fishing line muscles; 3D printed hand; Peltier cooling; biomimetic; grasping; drug delivery

Citation: Rippudaman Singh, Sanjana Mohapatra, Pawandeep Singh Matharu, Yonas Tadesse, Twisted and coiled polymer muscle actuated soft 3D printed robotic hand with Peltier cooler for drug delivery in medical management, Acta IMEKO, vol. 11, no. 3, article 10, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-10

Section Editor: Zafar Taqvi, USA

Received March 25, 2022; in final form September 27, 2022; published September 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work is supported by an internal fund (Research Enhancement).

Corresponding author: Rippudaman Singh, e-mail: rippudaman.singh@utdallas.edu

Abstract

This paper presents experimental studies on a soft 3D-printed robotic hand whose fingers are actuated by twisted and coiled polymer (TCPFL) muscles, driven by resistive heating and cooled by water and a Peltier mechanism (thermoelectric cooling) to increase the actuation frequency. The hand can be utilized for pick-and-place applications of drugs in clinical settings, which may be repetitive for humans. A combination of ABS plastic and thermoplastic polyurethane is used to additively manufacture the robotic hand. The hand, along with a housing tank for the muscles and Peltier coolers, has a length of 380 mm and weighs 560 g. The fabrication process of the TCPFL actuators, coiled with 160 µm diameter nichrome wires, is presented. The actuation frequency in air for TCPFL is around 0.01 Hz; this study shows the effect of water and Peltier cooling on improving the actuation frequency of the muscles to 0.056 Hz. Experiments were performed with a flex sensor integrated at the back of each finger to calculate its bend extent while being actuated by the TCPFL muscles. These experiments were also used to optimize the TCPFL actuation. Overall, a low-cost and lightweight 3D printed robotic hand is presented, whose actuation performance is significantly increased with the help of cooling methods and which can be used in medical management applications.

1. Introduction

Soft actuators such as SMA muscles and their composites are currently being widely studied. However, SMAs are expensive, are not easily manufactured in-house, and exhibit high hysteresis. Actuators such as TCPFL muscles have recently emerged as a new class of actuators with a wide range of possibilities in robotics applications.

In this paper, the focus is on the application of TCPFL muscles in a robotic hand and on a method of improving their actuation frequency. The fabrication of TCPFL muscles has been discussed in a study by Haines et al. [1], where different materials, among them polyethylene, nylon 6, and silver-plated nylon 6,6, were investigated for applications in soft actuators. Twisted and coiled nylon 6,6 polymer artificial muscles were found to be reliable and inexpensive, with a high load-carrying capacity and a large stroke. Another study [2] discusses the use of TCPFL muscles with nichrome wire, where the nichrome acts as a heater wire with quick heating capabilities. The incorporation of this novel soft actuator in robotic arms can help reduce the size and weight of existing motor-actuated prosthetic hand models while producing similar actuation. Also, as shown in a study by Wu et al. [3], the temperature of the muscle when actuated in air can reach values in the range of 80 °C to 140 °C. While these high temperatures were optimal during the heat-induced compression cycle, they did not allow proper relaxation of the muscles. To achieve proper relaxation, the cooling cycle would have to be configured for a long time, which increases the duration of each actuation cycle and reduces the actuation frequency. The use of cooling methods to moderate the high temperatures brought about by Joule heating in the TCPFL muscle, allowing proper relaxation, is discussed in this paper. The main purpose of the paper is to shorten the cooling cycle of the TCPFL muscles, reducing the time gap between flexion and extension and increasing the overall actuation frequency. Water is the most available resource that can act as a natural coolant: having a high specific heat capacity [4], it can extract heat from the TCPFL muscles at a fast pace. It has been used in other applications to optimize the actuation of artificial muscles, including shape memory alloys (SMAs) [5], [6], giant magnetostrictive materials (GMMs) [7], and liquid crystal elastomers (LCEs) [8]. Similar methods have been discussed with a robotic finger attached to TCPFL muscles in previous work by Wu et al. [9], where the TCPFL muscles were actuated with the help of hot and cold water.
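Water's high specific heat capacity [4] bounds how quickly the actuating muscle can warm the bath. A rough energy-balance sketch (not from the paper) illustrates this, assuming all Joule heat enters the water and using the 150 ml bath volume and 9.2-36.8 W actuation power range reported later in this study:

```python
# Rough energy balance: how fast a resistively heated muscle warms the water bath.
# Upper bound: assumes all Joule heat goes into the water, none lost to the tank.

C_WATER = 4.18  # specific heat of water, J/(g*K); 1 ml of water ~ 1 g

def warming_rate(power_w: float, water_ml: float) -> float:
    """Temperature rise rate of the bath in K/s."""
    return power_w / (water_ml * C_WATER)

# At the maximum actuation power of 36.8 W in 150 ml of water:
rate = warming_rate(36.8, 150.0)   # about 0.059 K/s
per_cycle = rate * 5               # rise over one 5 s heating pulse, about 0.3 K
```

Even at full power the bath warms by only a few tenths of a kelvin per heating pulse, which is consistent with the gradual temperature rise over many cycles observed in the experiments.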
The use of water was also inspired by previous studies where NiTi SMA [10] and TCP [11] muscles were used in underwater jellyfish robots; it was noted that the actuation frequency increased drastically in a medium where heat was dissipated faster. This approach was mimicked here by submerging the muscles in a sealed container with water as the heat-dissipating medium. A study by Astrain et al. [12] introduces a type of cooling system that uses a voltage difference to generate a temperature difference between the top and bottom ceramic plates of the device, leveraging alternating semiconductor couples and the Peltier effect to produce thermoelectric cooling. This type of thermoelectric cooler is called a Peltier cooler. Another study on Peltier coolers [13] incorporated one in a closed, insulated container to lower the internal temperature to the freezing point of water. Such Peltier-based coolers have also been used with other artificial SMA muscles [14] to optimize their thermomechanical properties; hence, they were considered for optimizing the actuation of TCPFL muscles in this study. The hypothesis of this paper focuses on implementing these water-based and Peltier cooling methods to optimize the flexion and extension of a prosthetic finger driven by TCPFL muscles. Improving the actuation frequency of the TCPFL muscles for faster prosthetic hand grasping was the major objective of the experiments designed for this study, which involved collecting prosthetic performance data using flex and temperature sensors. Experiments were conducted using special housing tanks to incorporate the muscles, water, Peltier coolers, and sensors. A substantial improvement in actuation frequency, from 0.01 Hz in air to 0.056 Hz with the cooling methods, was observed in the sensor data obtained during the experiments.
Submerging the TCPFL muscles in water inside the container, and further dissipating the heat from the container using Peltier coolers, proved to be very effective. This was especially helpful for improving the speed of a prosthetic hand in applications like pick and place of medical drugs in clinical settings.

2. Methodology

Firstly, the fabrication of the core component of this experimental study, the TCPFL artificial muscle, is discussed. These muscles were made of nylon 6,6 fishing line wound with nichrome wire; the nichrome converts the supplied power into heat, which is conducted into the fishing line to make it contract.

2.1. TCPFL fabrication process

An approximately 1.5 m length of nylon fishing line was cut from a spool. The ends of this cut length were tied to rings/washers to mount on the motors that rotated and coiled the muscle. As shown in Figure 1, the nylon fishing line was attached to a motor at the top, and the other end was suspended with a weight of 500 g. This ensured enough tension to keep the coiling uniform, since the load should not be heavy enough to break the fishing line while coiling. The top motor was started at a speed of 300 rpm while the rotation of the bottom end of the fishing line was restricted. The fishing line was allowed to coil upwards while twisting; the axial rotation was arrested only once coiling had started, whether from the top, bottom, or middle. The motor was then stopped, and the rotatory restriction was removed. At this stage, the winding of nichrome over the uncoiled fishing line was initiated, at a speed of 125 rpm. Once the nichrome was uniformly wound over the fishing line muscle, the rotatory restriction was applied again so that the nylon coiling process could be carried out. At the end of the coiling process, the weight was removed.
Throughout this final step, the bottom end of the coiled muscle must be held firmly before and after removing the weight, with the hand pressure released slightly once the weight is removed so that the muscle can ease some of its torsional tension; the muscle does not uncoil but only untwists a few rotations. After this, the muscle was removed from the coiling setup, and each of its ends was mounted on a small platform for heat treatment. A furnace was heated to 180 °C, and the muscles were placed inside it for 90 min. Once the muscles were heat-treated, their ends were crimped firmly, with the nichrome in contact with gold crimps. The muscles were then placed under a 500 g load and trained under various power cycles supplied to their crimped ends. The various supplied currents brought about different compressions in steps, as shown in Table 1; this training step later supports similar compressions and relaxations when a current is provided across the muscle's ends.

Figure 1. Schematic of the fabrication process of the twisted and coiled polymer (fishing line) muscles. Step 1: the fishing line twisting process. Step 2: the nichrome winding process. Step 3: the self-coiling process.

2.2. TPU hand setup

A single-piece robotic palm was 3D printed with thermoplastic polyurethane (TPU). TPU is a flexible material that provides strength at higher thicknesses and toughness and flexibility at lower thicknesses. This hand was previously utilized in combination with other artificial SMA muscles [15], so it is adequate for testing the TCPFL muscles used in the design presented in this paper. The container was made of transparent acrylic to ensure clear visibility of the movement of the muscles inside. Dedicated slots were accommodated for the Peltier plates on the bottom of the container.
The container was assembled and sealed with the help of M-Seal and silicone; the M-Seal provides strength to the container, and the silicone ensures water sealability. Holes were provided to accommodate ten TCPFL muscles, two for each finger: one for flexion and the other for extension. The openings were sealed with silicone to allow flexibility and to ensure water sealability. One end of each muscle was fixed to the back of the container; the other end was connected to a finger of the single-piece TPU hand using fishing line strings, as shown in Figure 2(a). When power was applied across the muscles, they contracted, pulling the fishing line string connected to the finger.

2.3. Peltier cooler and sensors

Two Peltier coolers were inserted into the dedicated slots inside the muscle housing container. Each Peltier cooler consists of 127 couples of n-type and p-type semiconductor blocks and operates at 12 V, 2 A to create thermoelectric cooling across its plates; the two coolers were therefore connected in series to a 24 V, 2 A battery, as seen in Figure 3(c). The effect of these Peltier coolers on lowering the temperature of the water exposed to the cooling plate was observed using underwater temperature probes. The probe used was the DS18B20 digital single-bus intelligent temperature sensor [16], which has been used in previous studies involving underwater applications [17], [18]. When connected in a circuit as shown in Figure 3(a), this sensor provides a digital reading of the probe temperature to an Arduino microcontroller. The temperature data recorded from the probe attached underwater inside the container during the experiments helped in assessing the benefits of the cooling methods for TCPFL actuation. There are many instances where standard flex sensors have been used in hand prosthetic applications, either to control the bending of the prosthetic fingers [19], [20] or to recognize hand gestures [21], [22], [23].
Also known as stretch sensors, they can be used in wearables [24] to track the bending of a finger. A flex sensor is a resistive sensor that alters its resistance value as it is bent along its length, and this resistive property was exploited to characterize the bending action of the prosthetic hand presented in this paper. The data obtained from the flex sensor helped characterize the actuation frequency of the TCPFL muscles and the optimization of the actuation with the cooling methods. For this design, the resistance was converted to a voltage reading using an amplification circuit, as seen in Figure 3(b), and read through the ADC pins of the Arduino. This voltage reading was recorded with respect to time and linearly interpolated against previously obtained angle-vs-voltage flex sensor calibration data. The calibration data made it possible to calculate the angular position of the finger with respect to the horizontal, as shown in Figure 2(b). The angle-vs-time results obtained from the flex sensors were used to observe the actuation frequency of the muscles under the different cooling conditions.

3. Experimental methods and results

The experiments conducted for the TCPFL muscles included a characterization setup utilized in other studies to understand various properties of NiTi SMA [10], TCP [11], and

Table 1. Power supplied across both ends of the TCPFL muscles during the training procedure.

Current (A)    Deformation (%)
0.18           10
0.20           14
0.24           20
0.26           27

Figure 2. (a) Schematic of the experimental setup. The fingers of a TPU prosthetic hand are attached to TCPFL muscles using fishing line strings; bidirectional flexion and extension of the finger are produced by the actuation of two TCPFL muscles. The muscles are housed in an acrylic tank filled with water, with two Peltier coolers integrated into its base. (b) The angular position of the finger with respect to the horizontal is calculated using data recorded from the flex sensors during flexion or extension.
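The angle-vs-voltage calibration described above amounts to piecewise-linear interpolation over measured pairs. A minimal sketch of that routine (not the authors' code), illustrated here on the Table 1 training data, which relate supply current to muscle deformation:

```python
# Piecewise-linear interpolation over calibration pairs, of the kind used to
# map flex-sensor voltage to finger angle; illustrated on the Table 1
# training data (current in A -> deformation in %).

from bisect import bisect_right

CAL = [(0.18, 10.0), (0.20, 14.0), (0.24, 20.0), (0.26, 27.0)]

def interp(x, table=CAL):
    """Linearly interpolate y at x; clamp outside the calibrated range."""
    xs, ys = zip(*table)
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x)  # index of the right-hand knot
    x0, y0, x1, y1 = xs[i - 1], ys[i - 1], xs[i], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

interp(0.22)  # midway between 0.20 A and 0.24 A -> about 17 % deformation
```

The same function applied to angle-vs-voltage pairs yields the finger angle from the amplified ADC reading; clamping at the range ends avoids extrapolating beyond the calibrated data.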
Figure 3. Schematic of the electronic circuits of the sensors and Peltier coolers. (a) DS18B20 temperature probe used to monitor the heating and cooling of the water during TCPFL actuation and Peltier cooling; data are collected by the digital pins of the Arduino microcontroller. (b) Flex sensor used to monitor the finger bending movements during actuation, integrated with an amplification circuit. (c) Peltier coolers used to cool the water in the TCPFL housing tank, connected in series with a 24 V power supply.

TCA [25] muscles using multiple sensors, as shown in the study by Hamidi et al. [26]. The setup, shown in Figure 4, included a Keyence laser displacement sensor to measure the muscle's actuation strain, thermocouples to measure the muscle's temperature during actuation, and an NI DAQ 9221 to measure the output voltage due to the change of the muscle's resistance during actuation. Using this setup, characterization experiments were conducted with the TCPFL muscles fabricated in this paper. The properties of the muscle obtained from these experiments are listed in Table 2; they include an actuation strain ranging from 10 % to 27 %, and the voltage across the muscle could rise to 115 V due to a change in resistance during actuation. The experimental setup of the TPU hand actuated by two TCPFL muscles, described in the methodology and shown in Figure 5, was used to obtain results to test the hypothesis of this paper. There was a separate power supply for each of the two TCPFL muscles, and each was configured with different heating and cooling cycles to compress and relax the respective muscle. The extension produced by the lower TCPFL muscle usually followed a flexion by the upper TCPFL muscle; hence the lower muscle required more energy to compress, since it additionally had to loosen the upper muscle's compression.
The actuation cycles for flexion (by the upper TCPFL muscle) and extension (by the lower TCPFL muscle) were set separately on the two power supplies. The temperature data obtained from the DS18B20 sensor showed an increase in the temperature of the water when the TCPFL muscles were actuated in it. Figure 6 portrays this behaviour when one muscle was actuated inside different volumes of water with a heating cycle of 5 seconds and a cooling cycle of 10 seconds: larger volumes of water could dissipate more heat from the muscles and showed a lower rise in temperature. Figure 7 portrays the cooling ability of the Peltier coolers with respect to the volume of water. These cooling rates show that Peltier coolers can be instrumental in optimizing the actuation frequency of the muscles. A volume of 150 ml of water was finally chosen as optimal for the experiments of this study, as it is heated more slowly by the muscles than smaller volumes; although its cooling rate is lower than that of smaller volumes, it is sufficient for this application.

Figure 4. Experimental setup for the characterization of TCPFL muscles.

Table 2. Data on the fabricated TCPFL muscle, obtained through characterization experiments performed as in similar studies on TCP muscles [10], [11].
Property                               Value
Material                               Nylon 6,6 fishing line
Type of actuation                      Electrothermal
Type of resistance wire                Nichrome (nickel, chromium)
Resistance wire diameter               dw = 160 µm
Precursor fibre diameter               D = 0.8 mm
Length of precursor fibre              L = 1500 mm
Weight for fabrication                 mf = 500 g
Annealing temperature/time             Ta = 180 °C / 90 min
Diameter after cooling                 d = 2.8 mm
Length after cooling                   l = 120 mm
Resistance                             R = 110 Ω
Current (input)                        I = 0.16-0.26 A (I = 0.26 A during training)
Voltage (output)                       V = 57.6-115.2 V
Actuation power                        P = 9.2-36.8 W
Heating time                           th = 10 s, 15 s
Cooling time                           tc = 90 s, 85 s
Actuation frequency (air-cooled)       f = 0.16-0.1 Hz
Actuation strain (at 500 g load)       ε = 27-10 %
Life cycle                             2400 cycles in air at 9 mHz and 1 % duty cycle

Figure 5. Experimental setup of the TPU hand with TCPFL muscles and the Peltier-cooler-equipped housing tank. Two power supplies are used to actuate the two muscles moving one finger; the flex sensor and temperature probe circuits are integrated into the setup.

Figure 6. Heating effect of TCPFL muscle actuation on the water. The muscle was actuated with a heating cycle of 5 seconds and a cooling cycle of 10 seconds; a steady rise in temperature was observed by the DS18B20 temperature probe over 50 actuation cycles. The heating effect of the muscle was observed for different volumes of water.

Figure 7. Cooling of the water in the housing tank by the Peltier coolers. The reduction of the water temperature over time was observed by the DS18B20 temperature probe for different volumes of water.

Flex sensor data were obtained for different actuation cycles of the TCPFL muscles and different environmental conditions (in air, in water, and in Peltier-cooled water).
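The actuation frequency of each condition follows directly from its heating and cooling times, since one full cycle is their sum. A minimal sketch using the cycle settings reported in this study (10 s + 90 s in air, 5 s + 13 s in water):

```python
# Actuation frequency of a TCPFL muscle from its duty cycle:
# one full cycle = heating time + cooling time, f = 1 / cycle period.
# Cycle times below are the settings reported in this study.

def actuation_frequency(t_heat_s: float, t_cool_s: float) -> float:
    """Frequency in Hz for one heat/cool actuation cycle."""
    return 1.0 / (t_heat_s + t_cool_s)

f_air   = actuation_frequency(10, 90)  # in-air cycle, 100 s  -> 0.01 Hz
f_water = actuation_frequency(5, 13)   # submerged cycle, 18 s -> ~0.056 Hz

speedup = f_water / f_air              # ~5.6x, the "over five times" improvement
```

This makes explicit why shortening the cooling phase dominates the gain: the 90 s in-air relaxation, not the heating pulse, sets the cycle period.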
For actuating the finger in air at 0.26 A, the heating and cooling cycles were set to 10 seconds and 90 seconds, respectively, for the upper muscle (flexion) and 15 seconds and 85 seconds, respectively, for the lower muscle (extension). This setting resulted in an actuation frequency of 0.01 Hz for the TCPFL muscles. When shorter cycles were selected, the upper muscle could not relax from its compressed state during its cooling cycle, which prevented the extension of the finger through the lower muscle's actuation. Therefore, longer cooling cycles of 90 seconds were selected for proper in-air actuation of the two muscles, as confirmed by the flex sensor results (Figure 8). The actuation frequency of the TCPFL muscles improved by about five times, to 0.056 Hz, when the muscles were completely submerged in 150 ml of water. The heating and cooling cycles, operating at 1 A, were set to 5 seconds and 13 seconds, respectively, for the upper muscle and 7 seconds and 11 seconds, respectively, for the lower muscle. As seen from the angles interpolated from the flex sensor results (Figure 9(a)), the actuation frequency was much higher, and the time needed to produce these actuations was greatly reduced compared with the earlier in-air experiments. An even smoother actuation was observed during the extension of the finger when the Peltier coolers were used to remove the heat accumulated in the water, with the same actuation cycles as before. The interpolated flex sensor results in Figure 9(b) show a significantly faster extension: the extension angle-vs-time slope is much steeper in Peltier-cooled water than in water alone. The Peltier coolers help maintain the water at lower temperatures, aiding the cooling/relaxation phase of the muscles.

4. Conclusions

The proposed cooling methods proved effective in optimizing the actuation performance of the TCPFL muscles.
The frequency of the muscles' actuation was increased by over five times, from 0.01 Hz when actuated in air to 0.056 Hz when actuated in water. The major cooling effect was due to the presence of water as a medium to dissipate heat from the actuators; the resulting finger movement was very smooth and fast-paced. The Peltier coolers ensured that the temperature of the water was maintained close to room temperature by dissipating the heat accumulated from the actuating muscles; due to this, the improved actuation frequency of 0.056 Hz was maintained over multiple cycles. The actuation for the extension of the robotic finger by the lower TCPFL muscle took longer than the actuation for the flexion by the upper TCPFL muscle. This was because of the tension remaining in the upper muscle from its compression during flexion, which kept the finger tightly flexed and did not allow it to extend in the other direction. The Peltier coolers shortened the extension phase of the robotic finger: this phase depended on how fast the upper TCPFL muscle cooled down and relaxed, and the faster cooling was achieved thanks to the lower water temperatures produced by the Peltier coolers. Further studies and experimentation are required to improve the performance of these muscles in an enclosed setup; new housing tank designs and materials are being explored to augment prosthetic actuation efficiency.

Acknowledgement

The authors would like to thank Akash Ashok Ghadge for his support in the experimentation and design, and Eric Wenliang Deng for his support with the TPU palm used for the experimentation; both are affiliated with the Humanoid, Biorobotics, and Smart Systems Laboratory, The University of Texas at Dallas.

Figure 8.
The angular position of each finger during bidirectional actuation by two TCPFL muscles. The flexion/extension angle of the finger was calculated from the flex sensor data. The muscles were actuated in air with a heating cycle of 10 seconds (for the flexion action, shown in yellow) and a cooling cycle of 90 seconds for the upper muscle, and a heating cycle of 15 seconds (for the extension action, shown in green) and a cooling cycle of 85 seconds for the lower muscle. Both muscles complete each cycle in 100 seconds, so the actuation frequency is 1/100 = 0.01 Hz.

Figure 9. The angular position of each finger calculated from flex sensor data during bidirectional actuation by two TCPFL muscles. The muscles were actuated with a heating cycle of 5 seconds (for the flexion action, shown in yellow) and a cooling cycle of 13 seconds for the upper muscle, and a heating cycle of 7 seconds (for the extension action, shown in green) and a cooling cycle of 11 seconds for the lower muscle. Both muscles complete each cycle in 18 seconds, so the actuation frequency is 1/18 = 0.056 Hz. The TCPFL muscles were submerged inside (a) 150 ml of water, (b) 150 ml of water cooled by two Peltier coolers.

References

[1] C. S. Haines (+20 authors), Artificial muscles from fishing line and sewing thread, Science 343(6173) (2014), pp. 868-872. doi: 10.1126/science.1246906
[2] A. N. Semochkin, A device for producing artificial muscles from nylon fishing line with a heater wire, 2016 IEEE International Symposium on Assembly and Manufacturing (ISAM), 21-22 August 2016, Fort Worth, TX, USA. doi: 10.1109/isam.2016.7750715
[3] L. Wu, F. Karami, A. Hamidi, Y. Tadesse, Biorobotic systems design and development using TCP muscles, in Electroactive Polymer Actuators and Devices (EAPAD) XX, International Society for Optics and Photonics, 2018, Denver, Colorado, United States. doi: 10.1117/12.2300943
[4] F. G.
Keyes, The thermodynamic properties of water substance 0° to 150 °C, part VI, The Journal of Chemical Physics 15(8) (1947), pp. 602-612.
[5] C. H. Park, K. J. Choi, Y. S. Son, Shape memory alloy-based spring bundle actuator controlled by water temperature, IEEE/ASME Transactions on Mechatronics 24(4) (2019), pp. 1798-1807. doi: 10.1109/tmech.2019.2928881
[6] O. K. Rediniotis, D. C. Lagoudas, H. Y. Jun, R. D. Allen, Fuel-powered compact SMA actuator, Smart Structures and Materials 2001: Industrial and Commercial Applications of Smart Structures Technologies, Proc. of the SPIE 4698 (2002), pp. 441-453. doi: 10.1117/12.475087
[7] Z. Zhao, X. Sui, Temperature compensation design and experiment for a giant magnetostrictive actuator, Scientific Reports 11(1) (2021), pp. 1-14. doi: 10.1038/s41598-020-80460-5
[8] Q. He, Z. Wang, Z. Song, S. Cai, Bioinspired design of vascular artificial muscle, Advanced Materials Technologies 4(1) (2019), art. no. 1800244. doi: 10.1002/admt.201800244
[9] L. Wu, M. Jung de Andrade, R. S. Rome, C. Haines, M. D. Lima, R. H. Baughman, Y. Tadesse, Nylon-muscle-actuated robotic finger, Active and Passive Smart Structures and Integrated Systems 2015, Proc. SPIE 9431. doi: 10.1117/12.2084902
[10] Y. Almubarak, M. Punnoose, N. Xiu Maly, A. Hamidi, Y. Tadesse, KryptoJelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy NiTi actuators, Smart Materials and Structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d
[11] A. Hamidi, Y. Almubarak, Y. Mahendra Rupawat, J. Warren, Y. Tadesse, Poly-Saora robotic jellyfish: swimming underwater by twisted and coiled polymer actuators, Smart Materials and Structures 29(4) (2020), art. no. 045039. doi: 10.1088/1361-665x/ab7738
[12] D. Astrain, J. Vián, J. Albizua, Computational model for refrigerators based on Peltier effect application, Applied Thermal Engineering 25(17-18) (2005), pp.
3149-3162. doi: 10.1016/j.applthermaleng.2005.04.003
[13] Z. Slanina, M. Uhlik, V. Sladecek, Cooling device with Peltier element for medical applications, IFAC-PapersOnLine 51(6) (2018), pp. 54-59. doi: 10.1016/j.ifacol.2018.07.129
[14] Y. Luo, T. Takagi, S. Maruyama, M. Yamada, A shape memory alloy actuator using Peltier modules and R-phase transition, Journal of Intelligent Material Systems and Structures 11(7) (2000), pp. 503-511. doi: 10.1106/92yh-9yu9-hvw4-rvkt
[15] E. Deng, Y. Tadesse, A soft 3D-printed robotic hand actuated by coiled SMA, Actuators 10(1) (2021), pp. 1-24. doi: 10.3390/act10010006
[16] A. Huang, M. Huang, Z. Shao, X. Zhang, D. Wu, C. Cao, A practical marine wireless sensor network monitoring system based on LoRa and MQTT, IEEE 2nd Intern. Conf. on Electronics Technology (ICET), 10-13 May 2019, Chengdu, China. doi: 10.1109/eltech.2019.8839464
[17] Y. Ding, T. Yan, Q. Yao, X. Dong, X. Wang, A new type of temperature-based sensor for monitoring of bridge scour, Measurement 78 (2016), pp. 245-252. doi: 10.1016/j.measurement.2015.10.009
[18] K. Gawas, S. Khanolkar, E. Pereira, M. Rego, M. Naaz, E. Braz, Development of a low cost remotely operated vehicle for monitoring underwater marine environment, Global Oceans 2020: Singapore - US Gulf Coast, 05-30 October 2020, Biloxi, MS, USA. doi: 10.1109/ieeeconf38699.2020.9389277
[19] T. Mori, Y. Tanaka, M. Mito, K. Yoshikawa, D. Katane, H. Torishima, Y. Shimizu, Y. Hara, Proposal of bioinstrumentation using flex sensor for amputated upper limb, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26-30 August 2014. doi: 10.1109/embc.2014.6943816
[20] M. I. Rusydi, M. I. Opera, A. Rusydi, M. Sasaki, Combination of flex sensor and electromyography for hybrid control robot, TELKOMNIKA Telecommunication Computing Electronics and Control 16(5) (2018), pp. 2275-2286. doi: 10.12928/telkomnika.v16i5.7028
[21] J. Jabin, Md. Ehtesham Adnan, S. S. Mahmud, A. M.
chowdhury, m. r. islam, low cost 3d printed prosthetic for congenital amputation using flex sensor, 2019 5th international conference on advances in electrical engineering (icaee), 2628 september 2019, dhaka, bangladesh. doi: 10.1109/icaee48663.2019.8975415 [22] s. harish, s. poonguzhali, design and development of hand gesture recognition system for speech impaired people, 2015 ieee international conference on industrial instrumentation and control (icic). 28-30 may 2015, pune, india. doi: 10.1109/iic.2015.7150917 [23] y. r. garda, w. caesarendra, t. tjahjowidodo, a. turnip, s. wahyudati, l. nurhasanah, d. sutopo, flex sensor based biofeedback monitoring for post-stroke fingers myopathy patients, journal of physics: conference series. 2018. iop publishing. doi: 10.1088/1742-6596/1007/1/012069 [24] m. borghetti, p. bellitti, n. f. lopomo, m. serpelloni, e. sardini, validation of a modular and wearable system for tracking fingers movements. acta imeko 9(4) (2020), pp. 157-164. doi: 10.21014/acta_imeko.v9i4.752 [25] t. luong, s. seo, j. jeon, c. park, m. doh, y. ha, j. c. koo, h. r. choi, h. moon, soft artificial muscle with proprioceptive feedback: design, modeling and control, ieee robotics automation letters 7(2) (2022), pp. 4797-4804. doi: 10.1109/lra.2022.3152326 [26] a. hamidi, y. almubarak, y. tadesse, multidirectional 3dprinted functionally graded modular joint actuated by tcpfl muscles for soft robots, bio-design manufacturing, 2(4) (2019), pp. 256-268. 
Video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization

ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-7

Kasani Pranathi 1, Naga Padmaja Jagini 2, Satish Kumar Ramaraj 3, Deepa Jeyaraman 4

1 Dept. of Information Technology, VR Siddhartha Engineering College, Kanuru, Vijayawada-520007, Andhra Pradesh, India
2 Dept. of Computer Science and Engineering, Vardhaman College of Engineering, Hyderabad-501218, Telangana, India
3 Sengunthar College of Engineering, Tiruchengode, Tamil Nadu, India
4 Dept. of Electronics and Communication Engineering, K.
Ramakrishnan College of Technology, Tiruchirappalli-621112, Tamil Nadu, India

Section: Research Paper

Keywords: artificial intelligence; convolutional neural network; kinetic gas molecule optimization; images; video-based emotion recognition

Citation: Kasani Pranathi, Naga Padmaja Jagini, Satish Kumar Ramaraj, Deepa Jeyaraman, Video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization, Acta IMEKO, vol. 11, no. 2, article 13, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-13

Section Editor: Francesco Lamonaca, University of Calabria, Italy

Received October 2, 2021; in final form June 3, 2022; published June 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Kasani Pranathi, e-mail: pranathivrs@gmail.com

1. Introduction

Emotion recognition has become a major theme in machine learning and artificial intelligence (AI) [1] in recent years. The huge upsurge in the creation of advanced interaction technologies between humans and computers has further encouraged progress in this area [2]. Facial actions convey emotions that, in turn, transmit the character, the mood, and the intentions of a person. Emotions and moods swiftly reveal the state of the human mind. Psychologists say that emotions are mostly short-lived and that mood is milder than emotion [3]. Human emotions can be detected in different ways, such as verbal or voice responses, physical reactions or body language, autonomous responses, and so on [4]. The basic types of emotion in a person are happiness, neutrality, surprise, fear, anger, disgust, and sadness.
Other expressions, such as dislike, amusement, pride, and honesty, are very difficult to identify from the face [5], [6], while emotions like happiness, neutrality, disgust, and fear are easy to detect. As we know, people identify emotion by combining complex multimodal information and tend to pay attention only to the significant information in the various channels. For example, some people always keep a smile while they talk, while others talk loudly but are not angry [7]. Consequently, we consider that human beings do not detect emotions based on modal alignment. The objective of emotion recognition can generally be achieved by means of visual or audio techniques. The field of human-computer interaction has been changed by artificial intelligence, which provides many machine learning techniques to achieve our goal [7].

Abstract: Human facial expressions are thought to be important in interpreting one's emotions. Emotion recognition plays a very important part in the more exact inspection of human feelings and interior thoughts. Over the last several years, emotion identification utilizing pictures, videos, or voice as input has been a popular issue in the field of study. Recently, most emotion recognition research has focused on the extraction of representative modality characteristics and the definition of dynamic interactions between multiple modalities. Deep learning methods have opened the way for the development of artificial intelligence products, and the suggested system employs a convolutional neural network (CNN) for identifying real-time human feelings. The aim of the research study is to create a real-time emotion detection application by utilizing an improved CNN. This research offers information on identifying emotions in videos using deep learning techniques. Kinetic gas molecule optimization is used to optimize the fine-tuning and weights of the CNN. This article describes the technique of the recognition process as well as its experimental validation. Two datasets, a video-based and an image-based dataset, which are employed in many scholarly publications, are also investigated. The results of several emotion recognition simulations are provided, along with their performance factors.

The extraction of representative modal features using deep learning technology has become easier. For example, advanced CNN architectures such as AlexNet, VGG, ResNet and SENets [8] have a tuning process that is extremely useful for capturing fine-grained facial expression features, and long short-term memory units (LSTMs) are another deep learning technology that stores information through the short-term interaction of a time-step memory function [9]. On the basis of these learning technologies, much work remains to be done to define dynamic multimodal interactions by matching the relevance of each LSTM memory unit. Video-based emotion recognition is multidisciplinary, covering areas such as psychology, affective computing and human-computer interaction. The main element of the message is the expression of the face, which makes up 55 % of the overall impression. In order to create an appropriate model for the recognition of video emotions, proper feature frames of facial expression must be provided within the scope. Deep learning offers advantages in terms of accuracy, learning rate and forecasting over standard techniques. Among the deep learning methodologies, the CNN has offered support and a platform for the analysis of visual imagery. Convolution is the basic application of a filter to an input that produces an activation. Reusing the same filter across an input creates an activation map known as a feature map, which shows the locations and the strength of, for instance, a detected element in an image.
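The filter-reuse idea described above can be illustrated with a minimal NumPy sketch; the vertical-edge kernel and the toy image are illustrative assumptions, not from the paper:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one filter over the image ('valid' mode, stride 1) and return
    the resulting feature map: one activation per spatial location."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative vertical-edge filter applied to an 8x8 image whose right half is bright.
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])
image = np.zeros((8, 8))
image[:, 4:] = 1.0

fmap = conv2d_valid(image, kernel)   # shape (6, 6); strong response along the edge
```

Because the same kernel is applied everywhere, the feature map responds wherever the pattern occurs, which is the translation-invariance property the text refers to.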
The strength of convolutional neural networks is thus the ability to learn an enormous number of filters tailored explicitly to a training dataset, according to the needs of, for example, an image classification task. The result is highly specific features that can be recognized anywhere in the input images. Deep learning has achieved major success in emotion recognition, and the CNN is the best-known deep learning method, with exceptional image processing performance. This work aims to develop video-based emotion recognition through an optimized CNN, as detailed in the following sections.

The remainder of the paper is arranged as follows: related work on video emotion recognition is presented in Section 2. Section 3 provides an explanation of the suggested optimized CNN. Section 4 discusses the validation of the suggested methodology against existing techniques. Finally, Section 5 presents the conclusion of this study and its future work.

2. Literature review

Machine learning algorithms are applied in a number of domains, such as spam filtering, audio recognition, face identification, document classification, and natural language processing. Classification is one of the most frequently used areas of machine learning. Video-based facial-motion research has recently attracted notice in the computer vision community. Different kinds of input data, including facial expressions, voice, physiological indicators, and body motions, are utilised in emotion recognition. The work of Michel Healy [10], which describes a real-time video feed system and uses a support vector machine for fast and reliable classification, offers one approach to the detection of emotion from facial expressions. The 68-point facial features used in [10] serve as landmarks. The application was taught to detect six emotions by monitoring changes in facial expressions.
The work of Dennis Maier [11] uses neural networks via TensorFlow to train on image features and then achieves classification through fully connected neural layers. The advantage of image features over facial landmarks is the larger information space, although the spatial configuration of landmarks does give a viable method for analysing facial expressions. However, image features also come with a higher computing power requirement. The structure provides for an outsourced classification service that runs on a server with a GPU. Images of faces are sent to the service in real time, and it can perform a classification within a few milliseconds. In the future, this approach will be extended to include text and audio features and conversation context to boost accuracy. Another approach uses CNNs with TensorFlow: an example using TensorFlow.js with the SENSE-CARE KM-EP, deploying a web browser and a Node server, is discussed in [10].

In terms of human emotional understanding, 93 % relies on non-verbal cues (facial expressions: 55 %, sound: 38 %) and 7 % relies on verbal language. That is why various efforts have been carried out on facial expression recognition (FER) and acoustic emotion recognition (AER) tasks. Most of these works use deep learning (DL) techniques to extract features and obtain high emotion recognition rates. Pramerdorfer et al. [12] used and confirmed contemporary DNN architectures (VGG, ResNet, Inception) to extract facial expression features and enhance FER performance. On the other hand, as far as AER tasks are concerned, the most typically employed features include pitch, log-Mel filter banks, and Mel-frequency cepstral coefficients (MFCCs). Huang et al. [13] used four kinds of log-Mel features to extract more complete emotional characteristics. A multiple spatial-temporal fusion feature framework (MSFF) was proposed by Lu et al. [14].
They improved a pretrained model for facial expression photos to draw on facial expression characteristics and applied VGG-19 and BLSTM models to extract audio emotional aspects. However, the interactions between different modes were not taken into account. Zadeh et al. [15], on the other hand, considered the consistency and complementarity of the diverse modal information, proposing a memory fusion network that models modal and multimodal interactions through time to capture more effective emotional characteristics in the CMU-MOSI dataset. Liang et al. [16] presented the dynamic fusion graph neural model to shape multimodal interactions, capturing one-, two- and three-modal interactions and dynamically adjusting the multimodal dynamics of the individual fusions based on their importance. Although [16] is able to dynamically collect interactions across several modalities, the different modalities must be aligned to the word utterance time interval by averaging. This word-based alignment technique can nonetheless miss the chance to capture more active relationships between modes.

3. Proposed methodology

This section provides a description of the overall architecture for the development of a deep learning algorithm as a video-based emotion recognition model. In addition, the architectural diagrams are briefly described together with the various pre- and post-processing operations. The system overview in Figure 1 displays the suggested training and testing method with the CNN. The video input must pass through a number of procedures before the CNN takes action.

3.1. Pre-processing

This is the first procedure applied to the video sample input. Emotions are typically classified as happiness, sadness, anger, pride, fear and surprise. Frames must therefore be extracted from the video input. The number of frames chosen varies between researchers, depending on complexity and computational time.
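The frame-selection step can be sketched as follows; the even-spacing policy is our assumption for illustration, since the paper only notes that the frame count varies between researchers:

```python
import numpy as np

def sample_frame_indices(total_frames, num_frames):
    """Pick `num_frames` evenly spaced frame indices from a clip with
    `total_frames` frames, so the whole video is covered."""
    if num_frames >= total_frames:
        return list(range(total_frames))
    idx = np.linspace(0, total_frames - 1, num=num_frames)
    return [int(round(i)) for i in idx]

# e.g. a 300-frame clip reduced to 8 representative frames
indices = sample_frame_indices(300, 8)
```

The selected indices would then be read out of the clip (e.g. with OpenCV's `cv2.VideoCapture`) and passed to the grey-scaling and equalization stages described next.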
The pictures are converted to greyscale. After grey-scaling, the frame is monochrome: low intensities appear grey to black and high intensities appear white. This step is followed by histogram equalization of the frames. Histogram equalization is an image processing strategy to improve photographic contrast, achieved by stretching out the most frequent intensity values, i.e. spreading the intensity range of the image. The histogram shows the intensity distribution of a picture; in simple terms, it is the number of pixels at each considered intensity value.

3.2. Face detection

Emotions are usually expressed by the face, so it is important to detect the face for processing and recognition. Many face detection algorithms are used by investigators, such as OpenCV, Dlib, Eigenfaces, local binary pattern histograms (LBPH) and Viola-Jones (VJ). Conventional face recognition procedures distinguish facial keypoints in the face image by extracting features or landmarks. The algorithm may, for example, survey the shape and size of the eyes, the size of the nose and its position relative to the eyes, in order to extract facial features. The cheekbones and jaw may also be analysed. The extracted features are then used to search other images for matching features. Over the years, the industry has moved to deep learning. CNNs have recently been used to improve the accuracy of face recognition algorithms. These methods accept an image as input and extract a very complex arrangement of features, including facial width, facial height, nose width, lips, eyes, width proportion, skin colour, and texture. Basically, a CNN extracts a huge number of features from a picture, which are then matched against the features in the database.

3.3.
Image cropping and resizing

During this phase, the face found by the face detection procedure is cropped so that the facial image appears larger and clearer. Cropping is the removal of unwanted outer parts from a photographic or graphical image. The technique often consists of removing some of the outermost regions of a picture to discard incidental clutter, improve the framing, change the aspect, or highlight and isolate the subject. The sizes of the images vary after the frames have been cropped, so the photographs are resized, for instance to 80 × 80 pixels, in order to achieve homogeneity. A digital picture is only a quantity of data describing red, green and blue pixel values at given locations; we usually perceive these pixels as tiny dots wedged together on the PC screen. The frame size determines how long processing takes, so resizing is very important if processing time is to be reduced. In addition, good resizing techniques should be used to maintain image attributes after resizing. Whether the features represent the expression well or not determines the accuracy of the classification; optimizing the selected features therefore automatically improves classification precision.

3.4. Classification

In this section, the classification, including learning rate optimization for the CNN using kinetic gas molecule optimization (KGMO), is described briefly. First, the CNN is explained. A CNN is a multi-layer feed-forward neural network comprising several sorts of layers, including convolution and ReLU layers, pooling layers and fully connected output layers. Figure 2 shows the architecture of the CNN, which is intended to recognize visual characteristics such as edges and shapes.

3.4.1.
CNN

The CNN employs the vector x of the training samples as the input for the associated target group y to support the back-propagation training technique. Learning is performed by comparing each CNN output with the desired target; the difference between the two is the learning error. The cost function of the CNN is

$$E(\omega) = \frac{1}{2} \sum_{p=1}^{P} \sum_{j=1}^{N_L} \left( o_{j,p}^{L} - y_{j,p} \right)^2 . \quad (1)$$

Our goal is the minimization of the cost function $E(\omega)$, finding a minimizer $\tilde{\omega} = (\tilde{\omega}_1, \tilde{\omega}_2, \ldots, \tilde{\omega}_V) \in \mathbb{R}^V$, where $V = \sum_{k=1}^{L} \mathrm{weightnum}(k)$, i.e. the dimension of the weight space $\mathbb{R}^V$ equals the total number of weights of the CNN over all $L$ layers. Gradient descent then iterates

$$\nabla E_i(\omega_i) = \left( \frac{\partial E_i}{\partial \omega_i^1}, \ldots, \frac{\partial E_i}{\partial \omega_i^V} \right) \quad (2)$$

$$\omega_{i+1} = \omega_i - n\, \nabla E_i(\omega_i) \,, \quad (3)$$

where $n$ is the learning rate (step size). The CNN is adapted to video-based emotion detection, and how fast it adapts is controlled by this learning rate. Smaller learning rates make only small changes to the weights during each update and therefore require more training epochs; larger learning rates require fewer training epochs. Specifically, the learning rate is a configurable hyper-parameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0. To find the optimized learning rate value, this work uses the KGMO algorithm: $n$ is selected with the assistance of the KGMO technique, which is explained in the following.

Figure 1. Proposed workflow (top) and proposed training and testing processes (bottom).

Figure 2. Typical architecture of some of the pre-trained deep CNN networks used in the study.

3.4.2. KGMO

KGMO is based on Boyle's gas laws, which use simplified descriptions to model the macroscopic properties of a gas. The motion of a gas molecule depends on certain characteristics, such as the pressure, temperature and volume of the gas.
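The effect of the learning rate in the update rule (3) can be sketched in a few lines; the quadratic toy cost function is our illustrative assumption:

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.01, epochs=100):
    """Plain gradient descent, w_{i+1} = w_i - lr * grad(w_i), as in eq. (3)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        w = w - lr * grad(w)
    return w

# Toy cost E(w) = 0.5 * ||w - target||^2, whose gradient is (w - target).
target = np.array([1.0, -2.0])
w_opt = gradient_descent(lambda w: w - target, w0=[0.0, 0.0], lr=0.1, epochs=200)
```

A smaller `lr` would need many more epochs to approach `target`, while too large a value makes the iterates diverge, which is exactly the trade-off KGMO is used to tune.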
The nature of the gas molecules is described using five postulates of the kinetic molecular theory of ideal gases, including:

• A gas consists of tiny molecules moving in straight lines; the motion follows Newton's laws.
• The gas molecules themselves have negligible volume; they are treated as point masses.
• No repulsive or attractive force exists between molecules; the average molecular kinetic energy is $3kT/2$, where $T$ is the temperature and $k$ is the Boltzmann constant, $1.38 \times 10^{-23}$ m² kg s⁻² K⁻¹.

Assume that the system contains $M$ particles. The position of the $j$-th agent is given by

$$Y_j = \left( y_j^1, \ldots, y_j^d, \ldots, y_j^m \right) \quad \text{for } j = 1, 2, \ldots, M, \quad (4)$$

where $y_j^d$ defines the position of the $j$-th agent in the $d$-th dimension. The velocity of the $j$-th agent is

$$V_j = \left( v_j^1, \ldots, v_j^d, \ldots, v_j^m \right) \quad \text{for } j = 1, 2, \ldots, M, \quad (5)$$

where $v_j^d$ represents the velocity of the $j$-th agent in the $d$-th dimension. The agents are modified according to the Boltzmann distribution: the velocity of a random agent is related to its kinetic energy. Equation (6) gives the kinetic energy of the molecule:

$$k_j^d(u) = \frac{3}{2} M_b\, T_j^d(u), \qquad K_j = \left( k_j^1, \ldots, k_j^d, \ldots, k_j^m \right) \quad \text{for } j = 1, 2, \ldots, M, \quad (6)$$

where $M_b$ is the Boltzmann constant and $T_j^d(u)$ is the temperature of the $j$-th agent in the $d$-th dimension at time $u$. The velocity of the molecules is updated by

$$v_j^d(u+1) = T_j^d(u)\, w\, v_j^d(u) + D_1\, \mathrm{rand}_j(u) \left( gbest^d - y_j^d(u) \right) + D_2\, \mathrm{rand}_j(u) \left( pbest_j^d(u) - y_j^d(u) \right), \quad (7)$$

where the temperature $T_j^d(u)$ of the converging molecules decreases exponentially over time as

$$T_j^d(u) = 0.95 \times T_j^d(u-1). \quad (8)$$

The mass $m$ of each molecule is a random number in the range $0 < m \le 1$. Once the mass is chosen, it remains unchanged for the whole algorithm, because only one type of gas is considered at any time.
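The velocity and temperature update rules (7)-(8) can be sketched as a vectorised step over all agents; the coefficient values and the toy setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def kgmo_velocity_step(v, y, T, gbest, pbest, w=0.9, D1=0.5, D2=0.5):
    """One KGMO velocity update per eq. (7), plus the exponential
    temperature decay of eq. (8), vectorised over all agents."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = T * w * v + D1 * r1 * (gbest - y) + D2 * r2 * (pbest - y)
    T_new = 0.95 * T  # temperature decay, eq. (8)
    return v_new, T_new

# Toy run: 4 agents in 3 dimensions, global best at the origin.
v = np.zeros((4, 3))
y = rng.random((4, 3))
T = np.ones((4, 3))
gbest = np.zeros(3)
pbest = y.copy()          # each agent starts at its personal best
v1, T1 = kgmo_velocity_step(v, y, T, gbest, pbest)
```

With `pbest == y`, only the attraction toward `gbest` acts, so every velocity component points back toward the origin; repeated steps with the decaying temperature shrink the stochastic motion, which is the convergence mechanism the text describes.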
A random mass is assigned in different runs to produce various executions of the procedure. Based on the equation of motion, the position of a particle is expressed as

$$y_j^d(u+1) = \frac{1}{2} a_j^d(u+1)\, u^2 + v_j^d(u+1)\, u + y_j^d(u), \quad (9)$$

where $a_j^d$ is the acceleration of the $j$-th agent in the $d$-th dimension, obtained from

$$a_j^d = \frac{\mathrm{d}v_j^d}{\mathrm{d}u} . \quad (10)$$

Based on the kinetic energy of the gas molecules,

$$\mathrm{d}k_j^d = \frac{1}{2} m \left( \mathrm{d}v_j^d \right)^2 \;\Rightarrow\; \mathrm{d}v_j^d = \sqrt{\frac{2\, \mathrm{d}k_j^d}{m}} . \quad (11)$$

From (10) and (11), the acceleration is

$$a_j^d = \frac{1}{\mathrm{d}u} \sqrt{\frac{2\, \mathrm{d}k_j^d}{m}} . \quad (12)$$

The acceleration can be rewritten for the duration interval $\Delta u$ as

$$a_j^d = \frac{1}{\Delta u} \sqrt{\frac{2\, \Delta k_j^d}{m}} . \quad (13)$$

In a unit time interval, the acceleration becomes

$$a_j^d = \sqrt{\frac{2\, \Delta k_j^d}{m}} . \quad (14)$$

From (9) and (14), the position of the particle is computed as

$$y_j^d(u+1) = \frac{1}{2} a_j^d(u+1)\, \Delta u^2 + v_j^d(u+1)\, \Delta u + y_j^d(u) \;\Rightarrow\; y_j^d(u+1) = \frac{1}{2} \sqrt{\frac{2\, \Delta k_j^d}{m}}\,(u+1)\, \Delta u^2 + v_j^d(u+1)\, \Delta u + y_j^d(u). \quad (15)$$

As stated above, the molecular mass is a random element in each execution of the algorithm but is the same for all particles. To simplify the approach, the position is updated as expressed in (16):

$$y_j^d(u+1) = \sqrt{\frac{2\, \Delta k_j^d}{m}}\,(u+1) + v_j^d(u+1) + y_j^d(u). \quad (16)$$

The lowest fitness value is determined using (17):

$$pbest_j = f(y_j), \;\text{if } f(y_j) < f(pbest_j); \qquad gbest_j = f(y_j), \;\text{if } f(y_j) < f(gbest_j). \quad (17)$$

The position $y_j^d$ of each particle is then updated considering the distance between its current position and $gbest$. The next section presents the validation of the proposed methodology against existing techniques.

4. Results and discussion

All of our results were created on a single NVIDIA GeForce RTX 2080 Ti GPU. All the code was implemented using PyTorch.

4.1. Datasets description

4.1.1.
Image-based emotion recognition in the wild

In this work, we selected appropriate datasets to train the facial feature extraction model. The datasets must address in-the-wild environments, in which numerous factors, such as occlusion, pose, lighting, etc., are uncontrolled. AffectNet [17] and RAF-DB [18] are by far the most extensive datasets that meet these criteria. The photographs in these datasets are acquired from the internet using emotion keywords, and experts annotate the emotion labels to ensure trustworthiness. AffectNet has two kinds of data, manually and automatically annotated, with over 1,000,000 photos marked with 10 categories of emotions and dimensional emotions (valence and arousal). We only utilized photos in the manually annotated group of the seven fundamental emotion categories; specifically, we used 283,901 training photos and 3,500 validation images. The RAF-DB dataset comprises approximately 30,000 facial photographs in the basic emotion categories, captured with lighting variations, arbitrary poses, and occlusion under in-the-wild conditions. In this study, we selected 12,271 training images and 3,068 evaluation images, all from the basic emotion set.

4.1.2. Video-based emotion recognition in the wild

We used the AFEW dataset [19] to assess our work on determining facial emotions in video clips. The video samples in the collection are obtained from films and TV shows, with uncontrolled occlusion, lighting and head positions. Each video clip was selected based on its label, which includes emotion keywords reflecting the emotion shown by the main subject. This information helped us deal with the challenge of temporality in the wild. From the AFEW dataset, we used 773 training video clips and 383 validation video clips with labels for the seven basic emotion types (anger, happiness, neutrality, disgust, fear, sadness, and surprise).

4.2. Evaluation parameters

As quantitative measurements in this investigation, we employed accuracy (Acc.) and the F1 score.
We also employed the mean accuracy, based on the main diagonal of the normalized confusion matrix $M_{\mathrm{norm}}$, to evaluate the results as in [18]. These measurements are derived as follows:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \quad (18)$$

$$F_1 = \frac{2\, \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (19)$$

$$\mathrm{Mean}_{Acc} = \frac{\sum_{i=1}^{n} g_{i,i}}{n} \quad (20)$$

$$\mathrm{Std}_{Acc} = \sqrt{\frac{\sum_{i=1}^{n} \left( g_{i,i} - \mathrm{Mean}_{Acc} \right)^2}{n}} \,, \quad (21)$$

where $g_{i,i} \in \mathrm{diag}(M_{\mathrm{norm}})$ is the $i$-th diagonal value of the normalized confusion matrix $M_{\mathrm{norm}}$, $n$ is the size of $M_{\mathrm{norm}}$, and TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives, respectively. Table 1 shows the confusion matrix for the proposed methodology.

Table 1. Confusion matrix of the proposed model with KGMO on the image-based emotion recognition evaluation (overall classification with manual ID; columns: ground truth, rows: predicted).

Predicted            anger   happiness  neutrality/disgust  fear    sadness  surprise
anger                93.30   0.93       4.19                2.33    1.40     1.86
happiness            2.79    83.93      5.89                2.65    2.33     2.33
neutrality/disgust   6.51    8.35       87.37               2.65    7.44     6.05
fear                 3.26    2.72       0.93                88.19   4.19     3.72
sadness              2.79    3.14       4.36                1.86    86.28    7.91
surprise             1.26    1.69       72.56               2.65    11.16    84.70

4.3. Evaluation of the proposed model with KGMO on two different datasets

In this section, different CNN models, namely ConvNet, DenseNet and ResNet, are compared with and without the KGMO technique in terms of accuracy, F1-score and mean accuracy with standard deviation on two different datasets: the image-based and the video-based dataset.

Table 2. Validation of the proposed model with KGMO on the image-based emotion recognition evaluation.

CNN model       Acc (%)  F1 (%)  MeanAcc ± Std
ConvNet         56.26    56.38   56.23 ± 11.18
DenseNet        61.51    61.50   61.51 ± 10.40
ResNet          61.57    61.46   61.57 ± 10.79
ConvNet-KGMO    81.23    81.79   77.08 ± 08.10
DenseNet-KGMO   83.64    83.81   76.96 ± 11.12
ResNet-KGMO     87.22    87.38   82.45 ± 09.20

Figure 3. Graphical representation of the proposed model with KGMO in terms of accuracy and F1-score on the image-based dataset.

Table 2 and Figure 3 show the results of the proposed model with KGMO on the image-based dataset. In the accuracy experiments, ConvNet, DenseNet and ResNet without KGMO achieved 56.26 %, 61.51 % and 61.57 %, while the same techniques implemented with KGMO achieved 81.23 %, 83.64 % and 87.22 %. These results prove that ResNet with KGMO achieves better accuracy than the other models. In the F1-score analysis, ConvNet, DenseNet and ResNet without KGMO achieved 56.38 %, 61.50 % and 61.46 %, while with KGMO they achieved 81.79 %, 83.81 % and 87.38 %. The mean accuracy of each technique without KGMO is between roughly 56 % and 61 %, with a standard deviation around 11; when implemented with KGMO, the techniques reach roughly 77 % to 82 % mean accuracy, with a standard deviation of 9.20 for the proposed ResNet-KGMO model.

Table 3 and Figure 4 show the results of the proposed model and the existing techniques, implemented with and without KGMO, on the video-based dataset. In the accuracy experiments, ConvNet, DenseNet and ResNet without KGMO achieved 51.70 %, 52.22 % and 54.05 %, while with KGMO they achieved 55.87 %, 56.14 % and 58.66 %. These results prove that ResNet with KGMO achieves better accuracy than the other models, although the proposed technique performs worse here than on the image-based dataset. In the F1-score analysis, ConvNet, DenseNet and ResNet without KGMO achieved 46.17 %, 48.26 % and 50.78 %, while with KGMO they achieved 52.76 %, 54.61 % and 58.50 %.
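The mean and standard deviation of the per-class accuracies in equations (20)-(21) can be computed directly from the diagonal of a normalized confusion matrix; the 3-class toy matrix below is an illustrative assumption (rows taken as ground truth here):

```python
import numpy as np

def mean_std_acc(conf):
    """Mean and std of per-class accuracy from the diagonal of a
    row-normalized confusion matrix, as in equations (20)-(21)."""
    conf = np.asarray(conf, dtype=float)
    norm = conf / conf.sum(axis=1, keepdims=True)  # normalize each ground-truth row
    diag = np.diag(norm)
    return diag.mean(), diag.std()

# Toy 3-class confusion matrix (rows = ground truth, cols = predicted).
m = [[8, 1, 1],
     [2, 7, 1],
     [0, 2, 8]]
mean_acc, std_acc = mean_std_acc(m)
```

A large spread between the diagonal entries inflates the standard deviation, which is why the ± terms in Tables 2 and 3 shrink as the models become more consistent across emotion classes.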
The mean accuracy of each technique without KGMO is between roughly 46 % and 48 %, with a standard deviation around 32; when implemented with KGMO, the techniques reach roughly 52 % to 56 % mean accuracy, with a standard deviation of 15.63 for the proposed ResNet-KGMO model. The reason for the lower performance is that it is difficult to identify the proper emotion while the video is continuously playing.

4.4. Comparative evaluation of the proposed technique with existing techniques

Table 4 shows the comparative analysis of the proposed technique with various existing techniques in terms of accuracy, and Figure 5 shows the graphical representation of the proposed model on the video-based dataset in terms of accuracy. The existing DSN [21] and CNN-features-with-LSTM [22] techniques achieved nearly 48 % to 49 % accuracy. FAN [24] and CAER-Net [25] achieved only about 51 % accuracy, while the remaining existing techniques achieved nearly 52 % to 54 %. Noisy student training with multi-level attention [26] achieved 55.17 % accuracy, whereas the proposed model achieved 58.66 %. The reason is that the CNN is implemented with the KGMO optimization technique for fine-tuning and weight optimization.

5. Conclusion

Computer vision research on facial expression analysis has been studied extensively in the past decades, and the success of emotion recognition methods has improved greatly over time. Our work has shown a general architectural model for developing a deep learning recognition system, with attention to the pre- and post-processing methods. Using the KGMO algorithm, the fine-tuning and weights of the CNN are optimized. This paper also surveyed the image and video datasets available to researchers in this subject. Across different studies, the advancement in this area is measured by different performance metrics.
The testing was performed on the various datasets, in which ResNet-KGMO achieved an accuracy of 58.66 % on the video dataset and an accuracy of 87.22 % on the image-based dataset. There is a very attractive scope for future developments in this area. Various deep learning multimodal approaches and various architectures can be employed to increase the performance parameters. Besides the recognition of the feeling alone, the intensity level can be addressed further; this can contribute to forecasting the intensity of the feeling. In future work, multimedia data may also be used; for example, a model constructed with multiple datasets can combine video and audio.

Table 3. Validation of the proposed model with KGMO on video-based emotion recognition evaluation.

CNN model      | Acc (%) | F1 (%) | MeanAcc ± Std
ConvNet        | 51.70   | 46.17  | 46.51 ± 34.38
DenseNet       | 52.22   | 48.26  | 47.33 ± 31.73
ResNet         | 54.05   | 50.78  | 48.98 ± 32.28
ConvNet-KGMO   | 55.87   | 52.76  | 51.21 ± 29.87
DenseNet-KGMO  | 56.14   | 54.61  | 52.35 ± 25.53
ResNet-KGMO    | 58.66   | 58.50  | 56.25 ± 15.63

Table 4. Comparative analysis of the proposed model with existing techniques on the video-based dataset.

Author                      | Technique                                           | Accuracy (%)
Vielzeuf et al. [20] (2018) | Max score selection with temporal pooling           | 52.20
Fan et al. [21] (2018)      | Deeply-supervised CNN (DSN) weighted average fusion | 48.04
Duong et al. [22] (2019)    | CNN features with LSTM                              | 49.30
Li et al. [23] (2019)       | VGG-Face features with Bi-LSTM                      | 53.91
Meng et al. [24] (2019)     | Frame attention networks (FAN)                      | 51.18
Lee et al. [25] (2019)      | CAER-Net                                            | 51.68
Kumar et al. [26] (2019)    | Noisy student training with multi-level attention   | 55.17
Proposed (2021)             | CNN-KGMO                                            | 58.66

Figure 4. Graphical representation of the proposed model with KGMO in terms of accuracy and F1-score on the video-based dataset.

Figure 5. Graphical representation of the proposed model with KGMO in terms of accuracy on the video-based dataset.

References

[1] D. L. Carnì, E. Balestrieri, I. Tudosa, F.
Lamonaca, Application of machine learning techniques and empirical mode decomposition for the classification of analog modulated signals, Acta IMEKO, vol. 9, no. 2, 2020, pp. 66-74. DOI: 10.21014/acta_imeko.v9i2.800
[2] P. C. Vasanth, K. R. Nataraj, Facial expression recognition using SVM classifier, Indonesian Journal of Electrical Engineering and Informatics (IJEEI), vol. 3, no. 1, 2015, pp. 16-20. DOI: 10.11591/ijeei.v3i1.126
[3] Anurag De, Ashim Saha, A comparative study on different approaches of real time human emotion recognition based on facial expression detection, 2015 International Conference on Advances in Computer Engineering and Applications (ICACEA), Ghaziabad, India, 19-20 March 2015, pp. 483-487. DOI: 10.1109/icacea.2015.7164792
[4] M. A. Ozdemir, B. Elagoz, A. Alaybeyoglu, R. Sadighzadeh, A. Akan, Real time emotion recognition from facial expressions using CNN architecture, Medical Technologies Congress (TIPTEKNO 2019), Izmir, Turkey, 3-5 October 2019, pp. 1-4. DOI: 10.1109/tiptekno.2019.8895215
[5] D. Sokolov, M. Patkin, Real-time emotion recognition on mobile devices, Proc. 13th IEEE Int. Conf. Autom. Face Gesture Recognition, vol. 787, 2018. DOI: 10.1109/fg.2018.00124
[6] H. Kaya, F. Gürpınar, A. A. Salah, Video-based emotion recognition in the wild using deep transfer learning and score fusion, Image and Vision Computing, vol. 65, 2017, pp. 66-75. DOI: 10.1016/j.imavis.2017.01.012
[7] G. Betta, D. Capriglione, M. Corvino, A. Lavatelli, C. Liguori, P. Sommella, E. Zappa, Metrological characterization of 3D biometric face recognition systems in actual operating conditions, Acta IMEKO, vol. 6, no. 1, 2017, pp. 33-42. DOI: 10.21014/acta_imeko.v6i1.392
[8] A. S. Volosnikov, A. L. Shestakov, Neural network approach to reduce dynamic measurement errors, Acta IMEKO, vol. 5, no. 3, 2016, pp. 24-31. DOI: 10.21014/acta_imeko.v5i3.294
[9] Y. Xie et al., Deception detection with spectral features based on deep belief network, Acta Acustica, vol. 2, 2019, pp.
214-220.
[10] M. Healy, R. Donovan, P. Walsh, H. Zheng, A machine learning emotion detection platform to support affective well being, 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3-6 December 2018, pp. 2694-2700. DOI: 10.1109/bibm.2018.8621562
[11] D. Maier, Analysis of technical drawings by using deep learning, M.Sc. thesis, Department of Computer Science, Hochschule Mannheim, Germany, 2019.
[12] C. Pramerdorfer, M. Kampel, Facial expression recognition using convolutional neural networks: state of the art, arXiv preprint arXiv:1612.02903, 2016. DOI: 10.48550/arxiv.1612.02903
[13] C. Huang, S. Narayanan, Characterizing types of convolutions in deep convolutional recurrent neural networks for robust speech emotion recognition, arXiv preprint arXiv:1706.02901, 2017. DOI: 10.48550/arxiv.1706.02901
[14] C. Lu, W. Zheng, C. Li, Chuangao Tang, S. Liu, S. Yan, Y. Zong, Multiple spatio-temporal feature learning for video-based emotion recognition in the wild, Proc. of the International Conference on Multimodal Interaction, ACM, Boulder, CO, USA, 16-20 October 2018, pp. 646-652. DOI: 10.1145/3242969.3264992
[15] A. Zadeh, P. Pu Liang, N. Mazumder, S. Poria, E. Cambria, L.-P. Morency, Memory fusion network for multi-view sequential learning, Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2-7 February 2018, 9 pp. DOI: 10.48550/arxiv.1802.00927
[16] P. Liang, R. Salakhutdinov, L. P. Morency, Computational modeling of human multimodal language: the MOSEI dataset and interpretable dynamic fusion, 2018.
[17] A. Mollahosseini, B. Hasani, M. H. Mahoor, AffectNet: a database for facial expression, valence, and arousal computing in the wild, IEEE Trans. Affect. Comput., vol. 10, 2019, pp. 18-31. DOI: 10.48550/arxiv.1708.03985
[18] S. Li, W. Deng, J.
Du, Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21-26 July 2017, pp. 2584-2593. DOI: 10.1109/cvpr.2017.277
[19] A. Dhall, R. Goecke, S. Lucey, T. Gedeon, Collecting large, richly annotated facial-expression databases from movies, IEEE MultiMedia, vol. 19, 2012, pp. 34-41. DOI: 10.1109/mmul.2012.26
[20] V. Vielzeuf, C. Kervadec, S. Pateux, A. Lechervy, F. Jurie, An Occam's razor view on learning audiovisual emotion recognition with small training sets, 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16-20 October 2018, pp. 589-593. DOI: 10.48550/arxiv.1808.02668
[21] Y. Fan, J. C. K. Lam, V. O. K. Li, Video-based emotion recognition using deeply-supervised neural networks, 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16-20 October 2018, pp. 584-588. DOI: 10.1145/3242969.3264978
[22] D. H. Nguyen, S. Kim, G. S. Lee, H. J. Yang, I. S. Na, S. H. Kim, Facial expression recognition using a temporal ensemble of multi-level convolutional neural networks, IEEE Trans. Affect. Comput., vol. 33, 2019, p. 1. DOI: 10.1109/taffc.2019.2946540
[23] S. Li, W. Zheng, Y. Zong, C. Lu, C. Tang, Bi-modality fusion for emotion recognition in the wild, International Conference on Multimodal Interaction, Jiangsu, China, 14-18 October 2019, pp. 589-594. DOI: 10.1145/3340555.3355719
[24] D. Meng, D. Peng, Y. Wang, Y. Qiao, Frame attention networks for facial expression recognition in videos, IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September 2019, pp. 3866-3870. DOI: 10.48550/arxiv.1907.00193
[25] J. Lee, S. Kim, S. Kim, J. Park, K. Sohn, Context-aware emotion recognition networks, IEEE International Conference on Computer Vision, Seoul, Korea, 27 October - 2 November 2019, pp. 10142-10151. DOI: 10.48550/arxiv.1908.05913
[26] V. Kumar, S. Rao, L.
Yu, Noisy student training using body language dataset improves facial expression recognition, Computer Vision - ECCV 2020 Workshops, A. Bartoli, A. Fusiello (Eds.), Springer International Publishing, Cham, Switzerland, 2020, pp. 756-773. DOI: 10.48550/arxiv.2008.02655
[27] F. Vurchio, G. Fiori, A. Scorza, S. A. Sciuto, Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study, Acta IMEKO, vol. 10, no. 2, 2021, pp. 119-125. DOI: 10.21014/acta_imeko.v10i2.1047
[28] H. Ingerslev, S. Andresen, J. Holm Winther, Digital signal processing functions for ultra-low frequency calibrations, Acta IMEKO, vol. 9, no. 5, 2020, pp. 374-378. DOI: 10.21014/acta_imeko.v9i5.1004
[29] M. Florkowski, Imaging and simulations of positive surface and airborne streamers adjacent to dielectric material, Measurement, vol. 186, 2021, pp. 1-14. DOI: 10.1016/j.measurement.2021.110170
[30] G. Ke, H. Wang, S. Zhou, H. Zhang, Encryption of medical image with most significant bit and high capacity in piecewise linear chaos graphics, Measurement, vol. 135, 2021, pp. 385-391.
DOI: 10.1016/j.measurement.2018.11.074

Thermoelasticity and ArUco marker-based model validation of polymer structure: application to the San Giorgio's bridge inspection robot

Acta IMEKO | ISSN: 2221-870X | December 2021 | Volume 10 | Number 4 | 177-184

Lorenzo Capponi1, Tommaso Tocci1, Mariapaola D'Imperio2, Syed Haider Jawad Abidi2, Massimiliano Scaccia2, Ferdinando Cannella2, Roberto Marsili1, Gianluca Rossi1

1 Department of Engineering, University of Perugia, Via G.
Duranti 93, 06125 Perugia, Italy
2 Industrial Robotic Unit, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genova, Italy

Section: Research paper

Keywords: thermoelasticity; ArUco markers; structural dynamics; carbon fibre reinforced polymer; robot inspection

Citation: Lorenzo Capponi, Tommaso Tocci, Mariapaola D'Imperio, Syed Haider Jawad Abidi, Massimiliano Scaccia, Ferdinando Cannella, Roberto Marsili, Gianluca Rossi, Thermoelasticity and ArUco marker-based model validation of polymer structure: application to the San Giorgio's bridge inspection robot, Acta IMEKO, vol. 10, no. 4, article 28, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-28

Section editors: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy

Received July 30, 2021; in final form December 9, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Lorenzo Capponi, e-mail: lorenzocapponi@outlook.it

1. Introduction

In design and materials engineering, experimental validation of numerical models is commonly required in order to verify the quality of the simulation [1], [2]. In general, the level of validation is directly tied to the intended use of the model, and the supporting testing experiments are defined accordingly [3], [4]. While indirect validation uses experimental results that cannot be controlled by the user (e.g., from the literature or from previous research), a direct approach performs experiments on the quantities of interest [5], with the aim of reproducing, through the experiments, the actual behaviour of the simulated model [4], [6]. When irregularities in the geometry or in the molecular structure of the material are present, localised stress concentrations can lead to fractures [3].
For this reason, a stress concentration factor is usually considered during the design of a structure and is, moreover, one of the focuses of experimental validation. Local stress and strain measurements have been widely performed by means of established contact techniques (e.g., strain gauges) [7]-[10]. However, in recent decades, non-contact measurement methods for full-field stress and strain distribution estimation have been developed and are commonly employed in experimental validation tests, such as thermoelastic stress analysis (TSA) [11]. According to the thermoelastic effect, for a dynamically excited structure, the surface temperature changes, measured by means of an infrared detector, are proportional to the changes of the stress and strain tensors caused by the input load [12]. Thermoelastic stress analysis has been involved in multiple research works regarding non-destructive testing [13], defect identification [14], and material properties characterization [15], [16]. Thermoelasticity has also been used to determine fatigue limit parameters and crack propagation [17], [18], for modal-damage identification in the frequency domain [19], and for stress intensity factor evaluation in complex structures [20]. However, due to the demands of high-speed operation and the use of light structures in modern machinery, static measurements of stress and strain distributions are no longer sufficient [21], [22]. In fact, when a flexible structure is excited at or close to one of its natural frequencies, significantly increased fatigue damage occurs [23]-[25].

Abstract: Experimental procedures are often involved in the validation of numerical models. To define the behaviour of a structure, its underlying dynamics and stress distributions are generally investigated. In this research, a multi-instrumental and multi-spectral method is proposed in order to validate the numerical model of the inspection robot mounted on the new San Giorgio's bridge on the Polcevera river.
An infrared thermoelasticity-based approach is used to measure stress-concentration factors and, additionally, an innovative methodology, based on the detection of ArUco fiducial markers, is implemented to define the natural frequencies of the robot inspection structure. The established impact-hammer procedure is also performed for the validation of the results.

Due to this, the modal parameters (i.e., modal frequencies, modal damping and mode shapes) of structures and systems in the frequency range of interest are widely researched to properly simulate their behaviour in real operating conditions [26], [27] and, thus, to avoid fatigue damage. The definition of modal parameters is usually achieved experimentally via the impact-hammer procedure [26]. Nevertheless, in recent years, the use of non-contact image-based measurement techniques in structural dynamics applications has grown. In fact, displacements, deformations and mode shapes can be measured with cameras operating in the visible spectrum by applying both digital image correlation and other computer-vision methods [28]-[30]. One of the most promising approaches for displacement and motion detection involves markers, whether physical or virtual. Virtual markers are generated directly through computer-vision algorithms, such as the scale invariant feature transform (SIFT) [31], [32] and speeded up robust features (SURF) [33]. These algorithms are able to detect and describe local characteristics (i.e., features) in images. Moreover, virtual markers are often used as they allow tracking objects in subsequently acquired frames without introducing physical targets, avoiding potentially misleading elements. In recent years, several studies have been developed using virtual markers: Khuc et al. [34] and Dong et al. [35] investigated structural and modal analysis via computer-vision algorithms through virtual markers.
However, in cases where the points of interest cannot be directly identified as markers, physical targets need to be involved. Furthermore, virtual markers are strongly influenced by lighting changes and low contrast, and they cannot be detected in areas of uniform intensity distribution with no gradients. Physical markers, also known as fiducial markers, have also been widely employed in structural monitoring applications [36], [37]. A commonly employed group of fiducial markers are the squared planar markers [38], which are characterized by a binary-coded squared area enclosed by a black border. Several sets of this marker type have been developed over the years [39]-[42]. However, in the case of non-uniform light conditions and desired simultaneous detection of multiple markers, the ArUco marker library was found to be very efficacious and robust to detection errors and occlusion [38], [43]. Additionally, if the camera is calibrated, the relative position of the camera with respect to the markers can be estimated directly and, through a custom configuration process, the system becomes more insensitive to detection errors and false positives [38]. For these reasons, the applicability of ArUco markers has been widely studied in recent years. Sani et al. [44] and Lebedev et al. [45] employed them for quad-rotor drone and UAV autonomous navigation and landing, while Elangovan et al. [46] used them for decoding contact forces exerted by adaptive hands. Structural dynamics applications were also researched by Abdelbarr et al. [47] for structural 3D displacement measurement. Moreover, Tocci et al. investigated the measurement uncertainty of the ArUco marker-based technique for displacements down to the order of 1/100 mm, using laser Doppler vibrometry as a comparison technique and defining the influence of the measurement parameters on the resulting measured displacement [48].
In this research, a non-contact multi-instrumental approach is presented for the validation of numerical model results, involving stress concentration factor evaluation through thermoelasticity measurements and natural frequency identification by means of ArUco marker detection. The proposed method is applied to the San Giorgio's bridge inspection robot structure.

2. Materials and methods

2.1. Inspection robot

The inspection robot is the first platform for the automatic inspection of a bridge. It was developed within a collaboration between the Italian Institute of Technology, Camozzi Group, SDA Engineering, Ubisive and the University of Ancona (patent PT190478) for the new viaduct on the Polcevera river, the so-called San Giorgio's bridge, designed and built after the Morandi's bridge collapse. The structure is a 3-degrees-of-freedom, fully autonomous platform. Its main purpose is to carry three instrumented technological supports with high-performance cameras, lasers, ultrasonic sensors and anemometers that scan the lower surface of the bridge and collect more than 35000 pictures. These pictures are then processed by pattern analysis algorithms, which give the operator information on whether any change has occurred on the investigated surface. The robot weighs around 1.8 t and is shown in Figure 1.

2.2. Thermoelasticity-based stress-concentration factor estimation

Thermoelasticity is a full-field stress-distribution measurement technique based on the thermoelastic effect [11], [12]. According to this effect, in the case of an adiabatic process and linear, homogeneous and isotropic material behaviour, a dynamically excited structure presents surface temperature changes proportional to the changes in the traces of the stress and strain tensors caused by the external load [19], [49].
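To fix orders of magnitude, this proportionality can be evaluated numerically. The following sketch uses illustrative aluminium-like material constants (assumed values, not taken from the paper), with the thermoelastic coefficient and temperature change defined as in Section 2.2:

```python
# Order-of-magnitude sketch of the thermoelastic effect.
# The material constants below are illustrative aluminium-like values,
# NOT values from the paper.
alpha = 2.3e-5      # thermal expansion coefficient, 1/K
rho = 2700.0        # material density, kg/m^3
c_sigma = 900.0     # specific heat at constant pressure (stress), J/(kg K)

k_m = alpha / (rho * c_sigma)   # thermoelastic coefficient, 1/Pa

t0 = 293.0          # ambient temperature, K
delta_sigma = 50e6  # variation of the first stress invariant, Pa

# Expected surface temperature change for a harmonic load of that amplitude.
delta_t = -k_m * delta_sigma * t0
print(k_m, delta_t)
```

With these assumed values, a 50 MPa stress variation at room temperature yields a temperature change on the order of 0.1 K, which illustrates why the raw thermal signal sits close to the detector noise and motivates the lock-in processing discussed next.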
Moreover, if the excitation is harmonic, the thermal fluctuation is expected to be at the same frequency as the input load, and its normalized amplitude variation is given by:

ΔT / T0 = −Km · Δσkk ,  (1)

where T0 is the ambient temperature, Δσkk is the variation of the first stress invariant and Km is the thermoelastic coefficient, defined as [11]:

Km = α / (ρ Cσ) ,  (2)

where α is the thermal expansion coefficient, ρ is the material density and Cσ is the specific heat at constant pressure (or stress) [12]. In general, the temperature variation caused by the thermoelastic effect is within the noise produced by the infrared detector [50]; thus, thermal acquisitions necessarily have to be post-processed in order to obtain readable results [51]. Although general frequency-domain approaches are well established nowadays [19], [52], in this research a classical lock-in analysis was performed in order to single out the thermoelastic signal at a particular frequency (i.e., the load frequency) from the noisy signal acquired through the thermal camera [53], [54].

Figure 1. Robot installed on the new viaduct on the Polcevera river.

Being ωL the input load frequency, the digital lock-in amplifier gives the temperature fluctuation at ωL as a magnitude ΔTωL and a phase ΘωL [55]:

ΔTωL = sqrt( Ix²(ωL) + Iy²(ωL) ) ,  (3)

ΘωL = arctan( Iy(ωL) / Ix(ωL) ) ,  (4)

where Ix(ωL) and Iy(ωL) are the phasorial components of the thermoelastic signal evaluated at ωL. Once the lock-in data processing has been applied, the spatial information of the temperature (i.e., stress) distribution is obtained, and further structural analysis can be performed. In particular, the stress-concentration factor Kf can be estimated in areas where critical behaviour is shown. Kf is defined, on a linear profile, as the ratio between the highest stress max(Δσ) and a reference stress, here chosen as the mean stress Δσ̄ on the same profile [20]:

Kf = max(Δσ) / Δσ̄ .
(5)

By substituting (1) in (5), the Kf factor on a linear profile becomes:

Kf = max(ΔTωL) / ΔT̄ωL .  (6)

2.3. ArUco-based resonant frequency identification

As discussed earlier, the ArUco marker library was used for the measurements, due to theoretical considerations [38]. An example of an ArUco 6×6 marker is presented in Figure 2, which shows its geometrical parameters (i.e., corners and reference system). To identify a marker in each captured frame, multiple steps are required [38], [43]. Firstly, a local-adaptive threshold contour segmentation is performed [56]. Then, contour extraction and polygonal approximation are applied to keep the enclosing rectangular borders and remove the irrelevant information [57]. Potential perspective projections are compensated using a homography transformation. The resulting image is binarized and divided into a regular grid, where each element is assigned 0 or 1 depending on the preponderant value of the corresponding pixels, as shown in Figure 3 [58]. ArUco markers are usually created in groups, to ensure their geometric diversity and avoid misleading detections. For this reason, another filter is generally applied to the image to determine potential matches between the recognized marker and the marker dictionary in use [43], even if manual corrections are normally made available by modifying the threshold algorithm parameters. Finally, through the identification of the four corners, the spatial coordinates of each recognized marker are estimated with respect to the camera [38], [56]. In this study, the implementation of the ArUco library is exploited to measure the spatio-temporal coordinates of the centre of the marker C(x(t), y(t)) recorded in a video. Firstly, the acquired data are pre-processed to improve the marker detection, which can be compromised by different factors (e.g., an excessive distance between the camera and the marker, or blurred images) [43].
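The grid-sampling step of the detection pipeline (rectified, binarized candidate → regular grid → majority vote per cell → black-border check) can be sketched as follows. This is a hypothetical illustration in plain NumPy, not the ArUco library API; in practice, e.g. OpenCV's cv2.aruco module performs the full pipeline, including dictionary matching:

```python
import numpy as np

def read_marker_bits(binary_img, grid=8):
    # Split the rectified, binarized marker image into a grid x grid lattice and
    # assign each cell 0 or 1 by the preponderant pixel value (majority vote).
    h, w = binary_img.shape
    cells = np.zeros((grid, grid), dtype=int)
    for r in range(grid):
        for c in range(grid):
            cell = binary_img[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
            cells[r, c] = int(cell.mean() > 0.5)
    # A valid candidate has an all-black (0) outer border; the inner
    # (grid-2) x (grid-2) bits encode the marker id for dictionary lookup.
    border_ok = (cells[0].sum() == 0 and cells[-1].sum() == 0
                 and cells[:, 0].sum() == 0 and cells[:, -1].sum() == 0)
    return border_ok, cells[1:-1, 1:-1]
```

For a 6×6-bit marker as in Figure 2, grid = 8 (the 6×6 payload plus the one-cell black border on each side).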
Then, each frame is subjected to sharpening and dilation filters. The sharpening filter is used to reduce apparent blurring in each frame by means of a 2D spatial convolution [59]:

(I * k)(x, y) = Σi Σj k(i, j) · I(x − i, y − j) ,  (7)

where I(x, y) is the original frame, k(x, y) is the kernel, (x, y) are the pixel coordinates and (i, j) are the coordinates of the elements in the kernel matrix. A dilation filter is then also applied, as a morphological operation involved in removing noise, isolating individual elements and merging disparate elements in an image [59]; this filter is also based on a convolution-like operation [60]. Being b(x, y) the structuring function, the grey-scale dilation of I by b is given by:

(I ⊕ b)(x, y) = max(i,j)∈b [ I(x + i, y + j) + b(i, j) ] ,  (8)

and the grey-scale erosion of I by b is given by:

(I ⊖ b)(x, y) = min(i,j)∈b [ I(x + i, y + j) − b(i, j) ] .  (9)

The marker detection is based on the identification of its four corners in each captured frame (see Figure 2). From the corners, the spatial coordinates of the centre of the marker (xc, yc) are evaluated frame-by-frame during the acquisition:

C = (xc, yc) = G · ( (1/4) Σr=1..4 |xr| , (1/4) Σr=1..4 |yr| ) ,  (10)

where (xr, yr) are the coordinates of the r-th vertex and G is the calibration factor from pixel units to SI units, defined as the ratio between the side length of the physical marker in SI units and the average of the four side lengths (in pixels) of the captured marker in the FOV.

Figure 2. ArUco 6×6 marker: corners and centre coordinates.

Figure 3. ArUco 6×6 marker: pixel values (example).

Once the coordinates of the centre of the marker (xc, yc) in the time domain are obtained, a frequency-domain analysis is performed using the discrete Fourier transform (DFT) [61]. The DFT of the N-point time series p is defined as:

P(ω) = Σn=0..N−1 pn e^(−i 2π n ω / N) .
(11)

Considering the spatial properties of the image-based analysis, the DFT of each component of the displacement C(xc(t), yc(t)) and of the input force f(t) is obtained:

C(X(ω), Y(ω)) = ( DFT(x(t)), DFT(y(t)) ) ,  (12)

F(ω) = DFT(f(t)) .  (13)

The cross-spectra (14) and the auto-spectrum (15) are computed [61]:

( Sfx(ω), Sfy(ω) ) = ( (1/T) [X*(ω) · F(ω)], (1/T) [Y*(ω) · F(ω)] ) ,  (14)

Sff(ω) = (1/T) [F*(ω) · F(ω)] .  (15)

Finally, the compliance frequency response functions (FRF) along the x-axis and the y-axis, using the H1 estimator, can be obtained [61]:

( H1x(ω), H1y(ω) ) = ( Sfx(ω) / Sff(ω), Sfy(ω) / Sff(ω) ) .  (16)

On the other hand, the accelerance frequency response function obtained through the impact-hammer procedure is given, using the H1 estimator, by [61]:

H1 = Sfa(ω) / Sff(ω) ,  (17)

where Sfa and Sff are the cross- and auto-spectra of the output acceleration and of the input force, respectively.

3. Experimental methodology

As already addressed in Section 2, two different measurement approaches were used and, consequently, two experimental setups were built. The global tested structure is presented in Figure 4, where the analysed areas and markers are shown in detail: thermoelasticity was applied in the T1 and T2 areas, while M1-2 is the ID of the detected marker whose results are presented in this research. In fact, as shown in Figure 4, 13 markers were mounted on the structure, and multiple measurements of single markers and of groups of them were performed; for the sake of clarity, only the results related to the marker at the tip of the structure (i.e., M1-2) are presented. Moreover, the structure was tested in two different boundary configurations, which are schematized in Figure 5. In this manuscript, the authors refer to the configuration with one fixed constraint as the C1 configuration and to the one with two fixed constraints as the C2 configuration.
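The H1 estimation chain of Eqs. (11)-(17) can be sketched numerically as below. Note that this sketch assumes the common convention Sfx = F*(ω)·X(ω), so that H1 equals X/F for a noiseless system (Eq. (14) writes the product as X*·F; the choice only affects the phase sign), and the constant 1/T factor cancels in the ratio:

```python
import numpy as np

def h1_frf(force, response, fs):
    # H1 estimator: cross-spectrum of input and output divided by the
    # auto-spectrum of the input, evaluated via the DFT.
    F = np.fft.rfft(force)
    X = np.fft.rfft(response)
    s_fx = np.conj(F) * X          # cross-spectrum S_fx (1/T factor cancels)
    s_ff = np.conj(F) * F          # auto-spectrum  S_ff
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    with np.errstate(divide='ignore', invalid='ignore'):
        h1 = s_fx / s_ff           # undefined where the input has no energy
    return freqs, h1
```

In practice, the spectra of repeated impact tests are averaged before the division, which is what makes H1 robust to output noise; this single-record sketch omits the averaging.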
For the sake of clarity, the experiments are combined as follows: the T1 area was analysed in the C1 configuration, the T2 area in the C2 configuration, and the M1-2 marker was detected in both the C1 and C2 configurations.

3.1. Thermoelasticity

The thermoelastic stress analysis was performed as explained in Section 2.2. Firstly, the yellow paint was removed and a suitable matt black wrap paint was used for the surface conditioning, due to theoretical considerations [55]: the emissivity of the surface was increased and homogenized, and appreciable results were thus obtained. Then, harmonic loads at 1.1 Hz and 2.7 Hz were applied to the structure in the T1 and T2 analysed areas, respectively, and the temperature changes of the surface were measured, in each test, for 60 seconds with a FLIR A6751sc MWIR cooled thermal camera, operating at a 125 Hz sampling rate and a resolution of 640 × 512 pixels.

3.2. ArUco-based resonant frequency identification

In order to define the natural frequencies of the tested structure, the classic impact-hammer procedure was also performed to further validate the results obtained through the image-based analysis. To this end, a PCB 086D20 hammer was used for the input broadband excitation, together with a uniaxial PCB 352C34 amplified accelerometer and a PicoScope data acquisition system. The accelerometer was positioned on the tip of the structure, along the y-axis of the system. Input and output data were acquired at 1 kHz for 50 seconds. Simultaneously with the impact-hammer tests, a Canon EOS 7D camera, mounting a 24-70 mm (f/2.8) lens, was used to measure the position of the framed markers, with a spatial resolution of 1920 × 1080 pixels and a sampling frequency of 30 Hz as acquisition parameters. In this experiment, a 6×6-bit ArUco marker dictionary was used.

4. Results

4.1.
Stress concentration factor

The stress concentration factor was evaluated in the two critical areas, T1 and T2, whose thermal acquisitions and finite element models are shown in Figure 6 and Figure 7, respectively. The lock-in analysis was performed by means of (3), and the magnitude of the thermoelastic signal is shown in Figure 8 and Figure 9, where it is compared to the data obtained from the numerical simulation in the same corresponding linear region of interest. The stress concentration factors Kf, as described by (6), were evaluated from the profiles in Figure 8 and Figure 9, and they are shown in Table 1, as obtained from the experiments and from the FEM model. The obtained results are in line with the expectations. In fact, thermoelasticity usually slightly underestimates the stress, due to theoretical considerations; principally, however, the actual stresses measured on the structure are presumed to be lower than the numerical values due to design considerations.

Figure 4. Tested CFRP structure: T1) first TSA analysed area; T2) second TSA analysed area; M1-2) detected ArUco marker.

Figure 5. Boundary configurations of the structure: C1) single fixed constraint; C2) double fixed constraint.

4.2. ArUco-based resonant frequency identification

As already addressed in Section 3.2, the marker position in the time domain was first collected using the modal testing procedure (see Figure 10) through (10), and then the FRF was obtained using (16). The frequency response functions were reconstructed using the least-squares complex exponential (LSCE) algorithm [62]. The FRFs obtained by means of the two experimental methods are shown in Figure 11 and Figure 12 for the C1 and C2 boundary configurations, respectively. Although the amplitudes are different (the marker gives a compliance FRF, while the impact hammer gives an accelerance FRF), the natural frequencies identified along the x-axis are fully comparable.
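The lock-in extraction and Kf evaluation applied above can be sketched for a single temperature trace and profile. This is a minimal NumPy sketch (assumed single-pixel signal; the actual analysis applies the demodulation pixel-wise to the whole thermal video):

```python
import numpy as np

def lock_in(signal, fs, f_load):
    # Digital lock-in: correlate the signal with quadrature references at the
    # load frequency; i_x and i_y are the in-phase and quadrature components,
    # from which magnitude and phase follow as in Eqs. (3)-(4).
    t = np.arange(len(signal)) / fs
    i_x = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_load * t))
    i_y = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_load * t))
    return np.hypot(i_x, i_y), np.arctan2(i_y, i_x)   # magnitude, phase

def stress_concentration_factor(profile):
    # Eq. (6): peak-to-mean ratio of the thermoelastic amplitude on a line profile.
    profile = np.asarray(profile, dtype=float)
    return profile.max() / profile.mean()
```

The averaging over many load cycles is what rejects the broadband detector noise: components not at the load frequency correlate to (nearly) zero with the quadrature references.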
Furthermore, the comparison with the numerical simulation was performed; it is shown in Figure 13 and Figure 14, in terms of normalized frequency, for the C1 and C2 boundary configurations, respectively. The results show a well-founded numerical model: the comparison of the experimental resonant frequencies, obtained using the impact hammer and the ArUco markers, with the numerical results establishes high reliability in both boundary condition configurations, for all the modes in the considered frequency range.

5. Conclusions

The validation of numerical models through experimental procedures is a mandatory step for several applications. In this research, a non-contact multi-instrumental approach was proposed to validate the numerical model of the inspection robot for the new San Giorgio bridge on the Polcevera river. In particular, the thermoelastic technique was used to measure stress concentration factors in two areas and, moreover, an innovative methodology involving the detection of ArUco fiducial markers was implemented to define the resonant frequencies of the CFRP structure by estimating its frequency response function. The impact-hammer procedure was also performed to validate the results. The proposed approach gave excellent results and can therefore be used for testing large structures. Further investigations on the material properties and on the dynamics of the inspection robot are planned as extensions of this research.

Figure 6. T1 area stress profile: (a) experimental; (b) numerical.

Figure 7. T2 area stress profile: (a) experimental; (b) numerical.

Figure 8. T1 area stress and temperature profiles.

Figure 9. T2 area stress and temperature profiles.

Table 1. Stress concentration factor results.

              T1 area   T2 area
Experiments   1.19      1.21
Numerical     2.40      1.40
Acknowledgement

The authors acknowledge Camozzi Group and SDA Engineering for allowing and supporting this research through the collaboration with the Italian Institute of Technology. Moreover, Ubisive, Fincantieri and PerGenova participated in this research with the University of Ancona (UNIVPM).

Figure 10. Modal testing procedure: input impulse and damped output signals from the accelerometer and the ArUco marker.

Figure 11. Frequency response function using ArUco markers and impact hammer: C1 boundary configuration.

Figure 12. Frequency response function using ArUco markers and impact hammer: C2 boundary configuration.

Figure 13. Natural frequencies comparison: C1 boundary configuration.

Figure 14. Natural frequencies comparison: C2 boundary configuration.

References

[1] X. D. Li, N. E. Wiberg, Structural dynamic analysis by a time-discontinuous Galerkin finite element method, Int. J. Numer. Methods Eng., vol. 39, no. 12, 1996, pp. 2131-2152. doi: 10.1002/(sici)1097-0207(19960630)39:12<2131::aid-nme947>3.0.co;2-z
[2] F. Cianetti, G. Morettini, M. Palmieri, G. Zucca, Virtual qualification of aircraft parts: test simulation or acceptable evidence?, Procedia Struct. Integr., vol. 24, 2019, pp. 526-540. doi: 10.1016/j.prostr.2020.02.047
[3] R. C. Juvinall, K. M. Marshek, Fundamentals of Machine Component Design, vol. 83, John Wiley & Sons, New York, 2006.
[4] A. Lavatelli, E. Zappa, Uncertainty in vision based modal analysis: probabilistic studies and experimental validation, Acta IMEKO, vol. 5, no. 4, 2016, pp. 37-48. doi: 10.21014/acta_imeko.v5i4.426
[5] A. C. Jones, R. K. Wilcox, Finite element analysis of the spine: towards a framework of verification, validation and sensitivity analysis, Med. Eng. Phys., vol. 30, no. 10, 2008, pp. 1287-1304. doi: 10.1016/j.medengphy.2008.09.006
[6] G. Morettini, C. Braccesi, F. Cianetti, Experimental multiaxial fatigue tests realized with newly developed geometry specimens, Fatigue Fract. Eng. Mater. Struct., vol. 42, no. 4, 2019, pp. 827-837. doi: 10.1111/ffe.12954
[7] E. O. Doebelin, D. N. Manik, Measurement Systems: Application and Design, McGraw-Hill College, 2007, ISBN 978-0072922011.
[8] A. Schäfer, High-precision amplifiers for strain gauge based transducers: first time realized in compact size, Acta IMEKO, vol. 6, no. 4, 2017, pp. 31-36. doi: 10.21014/acta_imeko.v6i4.477
[9] Z. Lai, Y. Xiaoxiang, Y. Jinhui, Vibration analysis of the oscillation support of column load cells in low speed axle-group weigh-in-motion system, Acta IMEKO, vol. 9, no. 5, 2020, pp. 63-69. doi: 10.21014/acta_imeko.v9i5.940
[10] L. Capponi, M. Česnik, J. Slavič, F. Cianetti, M. Boltežar, Non-stationarity index in vibration fatigue: theoretical and experimental research, Int. J. Fatigue, vol. 104, 2017, pp. 221-230. doi: 10.1016/j.ijfatigue.2017.07.020
[11] W. Thomson, On the dynamical theory of heat, Transactions of the Royal Society of Edinburgh, vol. 20, no. 2, 1853, pp. 261-288. doi: 10.1017/s0080456800033172
[12] W. Weber, Über die specifische Wärme fester Körper, insbesondere der Metalle, Ann. Phys., vol. 96, no. 10, 1830, pp. 177-213.
[13] J. Qiu, C. Pei, H. Liu, Z. Chen, Quantitative evaluation of surface crack depth with laser spot thermography, Int. J. Fatigue, vol. 101, 2017, pp. 80-85. doi: 10.1016/j.ijfatigue.2017.02.027
[14] X. Guo, Y. Mao, Defect identification based on parameter estimation of histogram in ultrasonic IR thermography, Mech. Syst. Signal Process., vol. 58, 2015, pp. 218-227. doi: 10.1016/j.ymssp.2014.12.011
[15] G. Allevi, L. Capponi, P. Castellini, P. Chiariotti, F. Docchio, F. Freni, R. Marsili, M. Martarelli, R. Montanini, S. Pasinetti, A. Quattrocchi, R. Rossetti, G. Rossi, G. Sansoni, E. P. Tomasini, Investigating additive manufactured lattice structures: a multi-instrument approach, IEEE Trans. Instrum. Meas., 2019. doi: 10.1109/tim.2019.2959293
[16] F. Cannella, A. Garinei, M. D'Imperio, G. Rossi, A novel method for the design of prostheses based on thermoelastic stress analysis and finite element analysis, J. Mech. Med. Biol., vol. 14, no. 05, 2014, p. 1450064. doi: 10.1142/s021951941450064x
[17] G. Fargione, A. Geraci, G. La Rosa, A. Risitano, Rapid determination of the fatigue curve by the thermographic method, Int. J. Fatigue, vol. 24, no. 1, 2002, pp. 11-19. doi: 10.1016/s0142-1123(01)00107-4
[18] X. D. Li, H. Zhang, D. L. Wu, X. Liu, J. Y. Liu, Adopting lock-in infrared thermography technique for rapid determination of fatigue limit of aluminum alloy riveted component and affection to determined result caused by initial stress, Int. J. Fatigue, vol. 36, no. 1, 2012, pp. 18-23. doi: 10.1016/j.ijfatigue.2011.09.005
[19] L. Capponi, J. Slavič, G. Rossi, M. Boltežar, Thermoelasticity-based modal damage identification, Int. J. Fatigue, vol. 137, 2020, p. 105661. doi: 10.1016/j.ijfatigue.2020.105661
[20] R. Marsili, G. Rossi, TSA infrared measurements for stress distribution on car elements, J. Sensors Sens. Syst., vol. 6, no. 2, 2017, p. 361. doi: 10.5194/jsss-6-361-2017
[21] M. D'Imperio, D. Ludovico, C. Pizzamiglio, C. Canali, D. Caldwell, F. Cannella, FLEGX: a bioinspired design for a jumping humanoid leg, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 3977-3982. doi: 10.1109/iros.2017.8206251
[22] J. Schijve, Fatigue of Structures and Materials, Springer Science & Business Media, 2001, ISBN 978-1402068072.
[23] D. Benasciutti, F. Sherratt, A. Cristofori, Basic principles of spectral multi-axial fatigue analysis, Procedia Eng., vol. 101, 2015, pp. 34-42. doi: 10.1016/j.proeng.2015.02.006
[24] P. Wolfsteiner, A. Trapp, Fatigue life due to non-Gaussian excitation: an analysis of the fatigue damage spectrum using higher order spectra, Int. J. Fatigue, vol. 127, 2019, pp. 203-216. doi: 10.1016/j.ijfatigue.2019.06.005
[25] G. Morettini, C. Braccesi, F. Cianetti, S. M. J. Razavi, K. Solberg, L. Capponi, Collection of experimental data for multiaxial fatigue criteria verification, Fatigue Fract. Eng. Mater. Struct., vol. 43, no. 1, 2020, pp. 162-174. doi: 10.1111/ffe.13101
[26] D. J. Ewins, Modal Testing: Theory and Practice, Hertfordshire, UK, 1986, ISBN 978-0863802188.
[27] M. Mršnik, J. Slavič, M. Boltežar, Vibration fatigue using modal decomposition, Mech. Syst. Signal Process., vol. 98, 2018, pp. 548-556. doi: 10.1016/j.ymssp.2017.03.052
[28] B. D. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. DARPA Image Underst. Work., 1981, pp. 121-130.
[29] J. Javh, J. Slavič, M. Boltežar, Experimental modal analysis on full-field DSLR camera footage using spectral optical flow imaging, J. Sound Vib., vol. 434, 2018, pp. 213-220. doi: 10.1016/j.jsv.2018.07.046
[30] D. Gorjup, J. Slavič, M. Boltežar, Frequency domain triangulation for full-field 3D operating-deflection-shape identification, Mech. Syst. Signal Process., vol. 133, 2019, p. 106287. doi: 10.1016/j.ymssp.2019.106287
[31] T. Tocci, L. Capponi, R. Marsili, G. Rossi, J. Pirisinu, Suction system vapour velocity map estimation through SIFT-based alghoritm, Journal of Physics: Conference Series, vol. 1589, no. 1, 2020, p. 12004. doi: 10.1088/1742-6596/1589/1/012004
[32] G. Allevi, L. Casacanditella, L. Capponi, R. Marsili, G. Rossi, Census transform based optical flow for motion detection during different sinusoidal brightness variations, Journal of Physics: Conference Series, vol. 1149, no. 1, 2018, p. 12032. doi: 10.1088/1742-6596/1149/1/012032
[33] H. Bay, T. Tuytelaars, L. Van Gool, SURF: speeded up robust features, European Conference on Computer Vision, 2006, pp. 404-417. doi: 10.1007/11744023_32
[34] T. Khuc, F. Catbas, Computer vision-based displacement and vibration monitoring without using physical target on structures, Struct. Infrastruct. Eng., vol. 13, no. 4, 2017, pp. 505-516. doi: 10.1080/15732479.2016.1164729
[35] C. Dong, O. Celik, F. Catbas, Marker-free monitoring of the grandstand structures and modal identification using computer vision methods, Struct. Heal. Monit., vol. 18, no. 5-6, 2019, pp. 1491-1509. doi: 10.1177/1475921718806895
[36] F. Lunghi, A. Pavese, S. Peloso, I. Lanese, D. Silvestri, Computer vision system for monitoring in dynamic structural testing, in Role of Seismic Testing Facilities in Performance-Based Earthquake Engineering, Springer, 2012, pp. 159-176. doi: 10.1007/978-94-007-1977-4
[37] S. W. Park, H. S. Park, J. H. Kim, H. Adeli, 3D displacement measurement model for health monitoring of structures using a motion capture system, Measurement, vol. 59, 2015, pp. 352-362. doi: 10.1016/j.measurement.2014.09.063
[38] F. J. Romero-Ramirez, R. Muñoz-Salinas, R. Medina-Carnicer, Speeded up detection of squared fiducial markers, Image Vis. Comput., vol. 76, 2018, pp. 38-47. doi: 10.1016/j.imavis.2018.05.004
[39] H. Kato, M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, Proceedings 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR'99), 1999, pp. 85-94. doi: 10.1109/iwar.1999.803809
[40] M. Fiala, Designing highly reliable fiducial markers, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 7, 2009, pp. 1317-1324. doi: 10.1109/tpami.2009.146
[41] D. Flohr, J. Fischer, A lightweight ID-based extension for marker tracking systems, The Eurographics Association, 2007. doi: 10.2312/pe/ve2007short/059-064
[42] E. Olson, AprilTag: a robust and flexible visual fiducial system, 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 3400-3407. doi: 10.1109/icra.2011.5979561
[43] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, M. J. Marín-Jiménez, Automatic generation and detection of highly reliable fiducial markers under occlusion, Pattern Recognit., vol. 47, no. 6, 2014, pp. 2280-2292. doi: 10.1016/j.patcog.2014.01.005
[44] M. F. Sani, G. Karimian, Automatic navigation and landing of an indoor AR.Drone quadrotor using ArUco marker and inertial sensors, 2017 International Conference on Computer and Drone Applications (IConDA), 2017, pp. 102-107. doi: 10.1109/iconda.2017.8270408
[45] I. Lebedev, A. Erashov, A. Shabanova, Accurate autonomous UAV landing using vision-based detection of ArUco-marker, International Conference on Interactive Collaborative Robotics, 2020, pp. 179-188. doi: 10.1007/978-3-030-60337-3_18
[46] N. Elangovan, A. Dwivedi, L. Gerez, C. Chang, M. Liarokapis, Employing IMU and ArUco marker based tracking to decode the contact forces exerted by adaptive hands, 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), 2019, pp. 525-530. doi: 10.1109/humanoids43949.2019.9035051
[47] M. Abdelbarr, Y. L. Chen, M. R. Jahanshahi, S. F. Masri, W. Shen, U. Qidwai, 3D dynamic displacement-field measurement for structural health monitoring using inexpensive RGB-D based sensor, Smart Mater. Struct., vol. 26, no. 12, 2017, p. 125016. doi: 10.1088/1361-665x/aa9450
[48] T. Tocci, L. Capponi, G. Rossi, ArUco marker-based displacement measurement technique: uncertainty analysis, Eng. Res. Express, 2021. doi: 10.1088/2631-8695/ac1fc7
[49] W. N. Sharpe, Springer Handbook of Experimental Solid Mechanics, Springer Science & Business Media, 2008. doi: 10.1007/978-0-387-30877-7
[50] G. M. Carlomagno, P. G. Berardi, Unsteady thermotopography in non-destructive testing, Proc. 3rd Biannual Exchange, St. Louis, USA, 1976, vol. 24, p. 26.
[51] J. M. Dulieu-Barton, P. Stanley, Development and applications of thermoelastic stress analysis, J. Strain Anal. Eng. Des., vol. 33, no. 2, 1998, pp. 93-104.
[52] N. Harwood, W. M. Cummings, Calibration of the thermoelastic stress analysis technique under sinusoidal and random loading conditions, Strain, vol. 25, no. 3, 1989, pp. 101-108. doi: 10.1111/j.1475-1305.1989.tb00701.x
[53] L. Capponi, Thermoelasticity-based analysis: collection of Python packages, 2020. doi: 10.5281/zenodo.4043102
[54] R. Montanini, G. Rossi, D. Alizzio, L. Capponi, R. Marsili, A. Di Giacomo, T. Tocci, Structural characterization of complex lattice parts by means of optical non-contact measurements, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128771
[55] N. Harwood, W. M. Cummings, Applications of thermoelastic stress analysis, Strain, vol. 22, no. 1, 1986, pp. 7-12. doi: 10.1111/j.1475-1305.1986.tb00014.x
[56] S. Suzuki, Topological structural analysis of digitized binary images by border following, Comput. Vision, Graph. Image Process., vol. 30, no. 1, 1985, pp. 32-46. doi: 10.1016/0734-189x(85)90016-7
[57] D. H. Douglas, T. K. Peucker, Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, Cartogr. Int. J. Geogr. Inf. Geovisualization, vol. 10, no. 2, 1973, pp. 112-122. doi: 10.1002/9780470669488.ch2
[58] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, 1979, pp. 62-66. doi: 10.1109/tsmc.1979.4310076
[59] G. Bradski, A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, Inc., 2008, ISBN 978-0596516130.
[60] F. Yu, V. Koltun, Multi-scale context aggregation by dilated convolutions, arXiv preprint arXiv:1511.07122, 2015.
[61] K. Shin, J. Hammond, Fundamentals of Signal Processing for Sound and Vibration Engineers, John Wiley & Sons, 2008, ISBN 978-0470511886.
[62] P. Mohanty, D. J. Rixen, Operational modal analysis in the presence of harmonic excitation, J. Sound Vib., vol. 270, no. 1-2, 2004, pp. 93-109.
doi: 10.1016/s0022-460x(03)00485-1

Building Information Modelling and Digital Fabrication for the Valorization of Archival Heritage

Giulia Bertola¹
¹ Politecnico di Torino, DAD, ModLab Arch, viale Mattioli 39, 10125, Torino, Italy

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, pp. 1-8 (www.imeko.org)

Section: Research Paper

Keywords: architectural archives; archives; BIM modelling; digital fabrication; rapid prototyping

Citation: Giulia Bertola, Building information modelling and digital fabrication for the valorization of archival heritage, Acta IMEKO, vol. 11, no.
1, article 18, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-18

Section Editor: Fabio Santaniello, University of Trento, Italy

Received March 7, 2021; in final form March 29, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Giulia Bertola, e-mail: giulia.bertola@polito.it

1. Introduction

Through this contribution, the author deepens some aspects and themes already addressed in two previous papers, both focused on the "Due case a Capri" project by Aldo Morbelli (1942): the first focused on the role of redesign and on the use of traditional, manual techniques for understanding the project (Figure 1) [1]; the second on the realization of a physical model through digital fabrication techniques, starting from a digital reconstructive model built with the BIM software Revit® [2]. Archives of 20th-century architecture represent today an important source for scholars seeking a profound understanding of architectural designs. The interpretation, use, and sharing of archive materials are activities aimed at deepening the knowledge of the contemporary masters and of the different architectural movements [3], [4]. Building Information Modelling (BIM) is applied here as a tool to rediscover, analyse, interpret and highlight architectural design, thus contributing to the recognition of architectural archives as cultural heritage. After a careful analysis of the project drawings of the villas, the author obtained the 3D model by working directly in Revit® on the archive drawings, and then focused on the different methods of rapid prototyping and on the realization of a physical scale model.
The realization of 3D digital models and physical models of architectures that were never built represents an important contribution to the study of archival drawings and morphological aspects. If the virtual space facilitates and increases our perceptual knowledge of architecture, allowing a different way of understanding space, the physical model produced by a 3D printer allows an easier reading of the architectural morphology [5].

2. Drawing and architectural language of Aldo Morbelli and the project for the "Due case a Capri"

Aldo Morbelli (1903-1963) was an Italian architect, born in Orsara Bormida, Piedmont. After graduating from the Faculty of Architecture in Rome, he founded his professional studio in Turin in the 1930s. During his professional activity, Aldo Morbelli produced several architectural projects concerning single-family houses, social housing for the INA-Casa plan, entertainment buildings, company representative offices, and post-war reconstruction works. In addition to these projects, he also worked on interior design and furniture design studies.

Abstract: Archives of the 20th century are today the focus of many scholars in the disciplines of conservation, valorization and communication. The enhancement of the archival heritage could benefit from the methodologies, techniques and tools offered by the current digital revolution, as in the work presented here. A parametric modelling experience was developed for the "Due case a Capri" project by architect Aldo Morbelli (1942), starting from the archival documentation and from a previous graphic, manual and critical reading of the project. The aim of this research is to build a methodology able to reproduce 3D objects through Building Information Modelling technology, integrating geometry with semantic information, up to the realization of scale models through the application of different prototyping techniques.
Aldo Morbelli's archive is kept in the Biblioteca di Architettura "Roberto Gabetti" of the Politecnico di Torino and contains several documents relating to his projects, both completed and unfinished. Although in the past many of his projects obtained recognition in internationally renowned magazines, and critics dedicated a monographic issue of L'Architettura Italiana to his single-family houses, to date the figure of Aldo Morbelli is still little studied [6]. He affirmed himself through a poetics tending towards formal simplification and a careful search for balance between the project and its insertion in the context (Figure 2). This is particularly evident in the "Due case a Capri" project. The buildings, never realized and called by Morbelli the large house (located lower down) and the little house (higher up), were to be situated on a plot among the olive groves at the foot of Mount Tuoro, in the "La Cercola" region. The two buildings face west because of the steep slope of the land. Morbelli breaks the compactness of the volumes through references to the local architecture: the insertion of segmental arches and the choice of colours and materials for walls and coatings, such as white. In the interior spaces, too, there is a clear intention to give the rooms a plastic sense: the walls and the distributive elements, such as the "S" staircase of the large house, adapt to the plan (Figure 3). The little house is on three levels: a basement floor for servants and storage; a ground floor with living room and kitchen, accessed through a sloping wall with an arched entrance; and a first floor with two bedrooms.
The large house, instead, is on two levels only: the lower level hosts the living area, with the living and dining area at double height occupying the whole sea-view front while the services develop towards the mountain; the upper level hosts the sleeping area, with four bedrooms. Looking at the drawings, it can be seen that the architect wanted to experiment with different types of roofing: horizontal flat and sloping flat for the little house, barrel and net vaulted for the large house.

3. The BIM model generated by archive drawings

This contribution aims to show a methodology for the management, preservation, and communication of archival material by integrating data with 3D modelling techniques. This type of documentation, if inserted into a virtual database, can become an active component of the archive itself, contributing to the general knowledge of lost or never-built architectural artefacts. This theme opens a series of reflections regarding the philological interpretation of the unrepresented parts of an architectural project and the translation of the drawings of an unrealized architecture into a three-dimensional model. Digital modelling from archival sources involves investigative work starting with hypotheses for the reconstruction of the sketches, checking the consistency of the scale drawings, and proposals for integrating missing data. The research source, in this case, is the analogue documentary heritage, consisting of graphic, iconographic, and textual sources. For each reconstructive model, it is necessary to identify the phase of the project to which it refers, according to the cognitive values that one wishes to emphasize through the research work [7].

Figure 1. Interpretative drawings for the "Due case a Capri" project (drawings by Giulia Bertola).

Figure 2. A. Morbelli, real-life drawings of Mediterranean architecture (Archives of the Biblioteca di Architettura "Roberto Gabetti", Politecnico di Torino; in the following, Archivi BCA, Fondo Aldo Morbelli).

Figure 3. A. Morbelli, study sketches for the two houses in Capri (Archivi BCA, Fondo Aldo Morbelli).

Archives could therefore become a place in which to build, communicate and share knowledge within a complex system of relationships made up of different actors (institutions, curators, scholars, the public), types of heritage (material, immaterial, real three-dimensional artefacts), and digital technologies (interaction, immersion, virtual and augmented reality) [8]. The construction of the model makes it possible to produce a variety of outputs, including real three-dimensional models to be made available to different users. As underlined by the London Charter (2008) [9], it is the responsibility of the scientific community to ensure the sustainability of digital heritage, for example by promoting the use of open formats and favouring as much as possible the access to data by the community of users, experts or not. These operations are well suited to BIM tools: they provide a good level of interactivity and consider the stages in the temporal evolution of the project. A BIM information system can represent the "sustainable container" and the appropriate means for the transmission of knowledge (for example, through advanced building information exchange systems). In an HBIM environment, as in this case study, the operator must be able to manage complex and heterogeneous survey data (photographs, documents, sketches), master geometrical constructions, and know construction techniques. The potential of Historic Building Information Modelling systems, despite the methodological and operational difficulties, allows building a database capable of managing large amounts of multidisciplinary data [10], [11], [12].
BIM also offers the possibility of managing the life cycle of an architecture, from the first hypotheses to the design and construction phases, in a single model, and can facilitate the creation of models from the archival heritage. Significant examples of the use of BIM technology as a tool for interoperability and advanced information management have emerged in the studies by Saygi and Remondino (2013) [13], which emphasize the semantic enrichment of three-dimensional digital models through the integration of heterogeneous data sets, and in the Arches (2018) [14] and INCEPTION (2016) [15] projects; the latter two aim to make interoperable information available through multiple services such as websites, digital maps, applications, and 3D models. While the former is focused on the creation of an international community of computer scientists and heritage professionals to share experiences, knowledge, and skills for the management of digital inventories, the latter aims at the development of advanced 3D modelling for the access and understanding of European cultural heritage. In particular, its HBIM modelling process starts with the documentation of user needs, the identification of a semantic ontology for cultural heritage buildings, and a data structure for the information catalogue integrated with the 3D geometric model. A last interesting study is the CULT project (2018) [16], in which a software toolkit was developed to store data from architectural heritage research projects and share them with websites, tourism applications, and BIM and GIS interfaces. For the "Due case a Capri", it was decided to use the Building Information Modelling methodology and build a three-dimensional model using Revit 2021®. After scanning and digitizing the original drawings of the project, the documents were inserted into the Revit 2021® software.
To optimize the modelling process, the files in .jpg format were not imported into the model but only linked to it. Initially, the plans were linked at 1:200 and 1:500 scale. Thanks to these, it was possible to correctly set the geolocation data, create a topographic surface and import contour data. After having identified the correct positioning of the buildings, we proceeded with the insertion of the technical drawings at 1:100 scale containing plans, elevations and sections of each villa, together with territorial sections. Each drawing was scaled with reference to the dimensions reported on it and used as a base on which to set up the model. Following the scaling of the images, the most significant work planes were identified and a grid was built for the alignment of plans, elevations and sections (Figure 4). The axes of the grid, besides being a useful trace for the construction of all the elements composing the model, also represent a tool to facilitate the reading and verification of the original drawings. To be done correctly, this phase of work requires prior knowledge of the main volumes making up the buildings. Once the volumes had been identified, we proceeded with the identification of the types of walls and floors, attributing to each a specific thickness: perimeter walls (60 cm), partition walls (10 cm, 20 cm), and floors (40 cm) (Figure 5). Where it was not viable to refer to a system family, in-place masses were used to create the roofs and the curved elements present in some portions of the external walls and on the railing of the internal staircase of the Casa Grande. All the masses were then transformed into surfaces. During the modelling of these elements, some problems emerged related to the rigidity of the BIM method in the modelling of unconventional shapes.
the in-place masses were created using the swept blend command, which, however, did not make it possible to faithfully reproduce some of the most characteristic architectural elements of the project, typical of aldo morbelli's poetics. during the final phase of elaboration of the bim model, we chose to refer to the original physical models, which are not preserved but were documented photographically. in order to maintain formal coherence, monochrome grayscale rendered views were created (figure 6).

figure 4. inserting archive images (archivi bca. fondo aldo morbelli) on revit 2021®.

acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 4

also, in the elaboration of plans and sections, it was chosen to reach a level of detail corresponding to a scale of 1:200, in order to avoid distorting the original project by providing incorrect interpretations and additional information. moreover, since it is a preliminary project, without detailed information about structures, stratigraphies, and construction details, generic model families were used, avoiding the customization of the library of parametric objects. in the future, this work aims to use a philological approach based on the classification of archive documents. the different sources (sketches, final and executive drawings, documents, articles, photographs, and physical models) can be linked to the digital model and referred to at different levels of analysis. each level corresponds to a quantity of information that, added to the previous level, increases the reliability of the reconstructive model with respect to the original drawings. the bim methodology provides a gradual definition of the 3d model, in both geometric accuracy and data content.
in some countries the levels of detail coincide with the levels of the 3d model (level of detail, lod), and in others with the levels of information (level of information, loi) transmitted where graphic data are missing, thus suggesting a different relationship between model and real object [17]. these levels can be declined within the framework of historic bim (h-bim), even with partial documentary sources such as those available for unbuilt architecture. the concept of the lod development level applied to bim management is based on a linear process that progressively enriches both the model and the information through the different phases: lod 100 represents an a-dimensional conceptual model, lod 300 a three-dimensional model in the executive design phase, lod 350-400 the implemented model for the construction phase, and lod 500 the as-built update after the construction phase. in this case, the lod will correspond to the design level at which the architectural design was interrupted [18], [19], [20].

4. digital fabrication and rapid prototyping technologies

this paper aims to focus on the effectiveness of digital fabrication processes and rapid prototyping techniques for the valorization of the archival heritage. in this context, digital fabrication is considered a working method supporting the entire design process for the study of form and the built environment. digital fabrication is a process by which solid objects can be created from digital drawings, a process capable of exploiting different manufacturing techniques.

figure 5. plans, section and elevation of “casa grande” created through revit 2021®.

figure 6. a. morbelli, photograph of the original model (archivi bca. fondo aldo morbelli) and view rendered through revit 2021®.

the choice of a technique is usually made following
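the lod ladder described above can be summarised as a small lookup table. a minimal sketch, with descriptions paraphrased from the text; intermediate levels the paper does not discuss (e.g. lod 200) fall back to a default:

```python
# lod levels as described in the text; levels not discussed by the paper
# are treated here as undefined rather than invented.
LOD = {
    100: "a-dimensional conceptual model",
    300: "three-dimensional model, executive design phase",
    350: "model implemented for the construction phase",
    400: "model implemented for the construction phase",
    500: "as-built update after the construction phase",
}

def lod_description(level: int) -> str:
    """Return the meaning of a LOD level, if the text defines one."""
    return LOD.get(level, "not defined at this design stage")
```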
some considerations on the speed of processing, the cost, the material, and the final aesthetic performance [21]. below, we compare the two main categories of prototyping techniques: subtractive and additive manufacturing methods. the former is based on the idea of reproducing an object by sculpting a block, removing material along a predetermined path. this operation is feasible through two types of machines: the computer numerical controlled (cnc) machine and the laser beam machine. whereas the cnc machine is a milling tool, the laser beam machine involves a thermal separation process. the laser beam hits the surface of the material and heats it up to the point where it is melted or completely vaporized. once the laser beam has completely penetrated the material at a certain point, the cutting process begins. the laser system follows the selected geometry, and during this process the material is separated. the additive process, usually called 3d printing, is instead a process by which solid shapes (usually of small size) are constructed by building one layer at a time. nowadays there are several additive manufacturing processes that differ from each other in the materials that can be used and in how they are deposited to create the various objects. all 3d printing processes involve the simultaneous collaboration of software, hardware, and materials and have the great advantage, compared to subtractive processes, of being independent of the geometric complexity of the digital model. in addition to the fused deposition modelling (fdm) printing technique, explored in the next paragraph, there are three other main types of 3d printing [22]. two prototyping techniques were used for this work: fdm for the buildings and laser beam machining (lbm) for the ground.

4.1. fused deposition modelling

fdm is the most common 3d printing technology.
this method uses a filament (a string of solid material) which melts under the thrust of a heated nozzle. the printer continuously moves this nozzle around, laying down the melted material at a precise location, where it instantly cools down and solidifies. this builds up the model layer by layer. during the construction of these solid shapes it is often necessary to use vertical supports to sustain overhanging parts. vertical supports can be made with water-soluble filaments that, when immersed in water, can be removed easily without leaving hard-to-remove residue. the filament used in the fdm process must usually meet requirements on diameter, strength, and other properties. during the extrusion of a polymer, the diameter of the filament must be uniform. to achieve this, the machine must have adjustable screw speed, pressure, and temperature; all these parameters are examined and adjusted until an optimal filament diameter is reached. for a smooth filament, a different calibration nozzle is used. for low-temperature materials (polylactic acid, pla; polyethylene, pe) the calibration nozzle is made of copper and the thermal seal of polytetrafluoroethylene, while for high-temperature materials (acrylonitrile butadiene styrene, abs; polyamide, pa) the calibration nozzle is made of aluminium [23]. following the printing process, it is often necessary to sand and polish the surfaces to hide the layers. for this case study, the buildings were made in pla and printed using the delta wasp 2040 industrial line 4.0® printer, whose characteristics are shown in table 1 [24]. once the standard triangle language (stl) file was obtained, it was verified through the software cura®, an open-source slicing application for 3d printers, which carried out an analysis of the model: thickness, stability, positioning, and orientation of the model on the build surface. the stl file was also automatically divided by the software into sections (slices).
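the slicing step can be illustrated with a back-of-the-envelope count of the layers the slicer produces. the 0.1 mm layer height is the value later reported in table 2; the 40 mm model height is a hypothetical example, not a dimension from the project.

```python
import math

# sketch of the slicing step: the slicer divides the model height into
# layers of fixed thickness (0.1 mm in table 2 of this paper).
def layer_count(model_height_mm: float, layer_height_mm: float = 0.1) -> int:
    """Number of layers needed to cover the model height."""
    return math.ceil(model_height_mm / layer_height_mm)

# e.g. a hypothetical 40 mm tall building block at 0.1 mm layers
n = layer_count(40.0)  # -> 400 layers
```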
the software also automatically generates the support structures. the plastic filament is wound on a reel, pushed, and melted through the extrusion nozzle. when the loose filament comes into contact with the construction plane it hardens, and the rest of the material is gradually released [25].

4.2. laser beam machine

for the realization of the ground we chose instead another rapid prototyping tool: lbm. the soil was made of 2 mm thick cardboard using the trotec speedy 400® machine, a type of cnc machine. the user prepares an object in design software, sends it to the laser cutting machine and has it cut automatically. once the design is sent to the machine, the device uses a laser beam to cut or engrave the material on the cutting plane. this type of processing allows wide versatility in materials, high precision, and no need for subsequent machining. from the 2d file generated by revit 2021®, one can proceed with the print layout operations, defining the cutting power values using the job control® software.

5. from bim model to prototype

some of the studies regarding the relationship between bim objects and digital fabrication processes reflect on how such objects can incorporate the semantics of fabrication and how they can then be used to support the workflow between designers and fabricators; a reflection applied in particular to cnc fabrication, proposing specific process maps for the conventional workflow between the design and fabrication disciplines in the domain of custom cabinetry [26]. bim also supports digital workflow design for all building disciplines, including the use of structural information models for the digital fabrication of steel structures [27].
another area of reference is the construction industry, explored in depth in the studies by sakin and kiroglu (2017) regarding new 3d printing technologies for sustainable buildings, which point to contour crafting as a promising technique that may be able to revolutionize the construction industry in the near future [28].

table 1. general characteristics of delta wasp 2040 industrial line 4.0®.
system: fused deposition modelling
printer model: delta wasp 2040 industrial line 4.0®
max. build size: 200 × 200 × 40 mm³
accuracy: 0.1-0.2 mm
materials: pla (polylactic acid)
advantages: good accuracy; functional materials; office friendly
disadvantages: medium range of materials; support structure needed

for the current case study, following the choice of prototyping techniques, attention was paid to the preparation of the files for printing. for the construction of the terrain, we started with the 2d file generated by revit 2021® and proceeded with the print layout operations, defining the cutting power values with the job control® software (figure 7). to proceed instead with the 3d printing, since an stl exporter for revit 2021® is not yet available, the file was exported at 1:100 scale in fbx format, imported in rhinoceros® and then exported in stl. during the import phase in rhinoceros®, the 3d model was scaled to 1:200 and the unit of measurement was changed to millimetres. for dimensional issues related to the printing dimensions of the machine, the models were divided into parts: building blocks and outer walls. in addition, for the realization of the volumes, neutral colours were deliberately chosen so as to focus the viewer's attention on the three-dimensional geometries and the composition of the volumes. the level of complexity of the model determines the number of triangles needed and their size; in turn, the number of triangles determines the size of the file.
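for the binary stl variant, the relationship between triangle count and file size mentioned above is exact: an 80-byte header, a 4-byte triangle count, then 50 bytes per triangle. applied, for illustration, to the 13316-triangle mesh of the small house discussed below:

```python
# binary stl layout: 80-byte header + 4-byte (uint32) triangle count
# + 50 bytes per triangle (12 floats plus a 2-byte attribute word).
def binary_stl_size(n_triangles: int) -> int:
    """File size in bytes of a binary STL with the given triangle count."""
    return 84 + 50 * n_triangles

size = binary_stl_size(13316)  # the small-house mesh before repair
# 84 + 50 * 13316 = 665884 bytes, roughly 650 KiB
```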
as happened in this case, during the conversion of the revit file to stl critical issues may emerge, and the exported file may contain errors of various types: holes or gaps, inverted or intersecting triangles (figure 8). during the printing phases, the goal is to obtain objects characterised by continuous and well-finished surfaces. for additive manufacturing, the surfaces of the 3d model are converted into a mesh composed of triangular faces and vertices connected to each other. during the conversion, one may obtain a model with mismatched edges, holes, and triangles that intersect each other in incorrect positions. in particular, a mesh for 3d printing must have the following characteristics: the surfaces must be closed, and all triangles must connect with other triangles along the edges, without intersecting, and must be correctly oriented. netfabb® software was used for the file repair operations. netfabb® is a software for editing stl files and offers a rich set of tools that optimizes workflows and minimizes building errors. the procedure used for the small house was as follows: open the stl file in netfabb® and verify the quality of the 3d model by checking the number of triangles that make up the model (13316). when the software indicates that the file has errors, click on automatic part repair to repair the part automatically; this operation shows what is wrong with the model. this was followed by checking for incorrectly oriented triangles and then clicking on the commands prepare and repair part. following these operations, the table on the right of the screen shows the new values for edges (3588), triangles (2392), inverse orientation (615), holes (0), border edges (0). at this point the repair process of the file can be started with the commands run repair script and extended repair.
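the watertightness requirement stated above (closed surfaces, every edge shared by exactly two triangles) can be checked directly on the triangle list. a minimal sketch in the spirit of the netfabb border-edge count, with triangles given as vertex-index triples:

```python
from collections import Counter

# in a printable (watertight) mesh every edge must belong to exactly two
# triangles: edges seen once are border edges (holes), edges seen more
# than twice are non-manifold.
def bad_edges(triangles):
    """Return the edges not shared by exactly two triangles."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return [e for e, n in edges.items() if n != 2]

# a closed tetrahedron has no bad edges; removing one face opens
# three border edges around the resulting hole.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
closed_ok = bad_edges(tet)        # -> []
open_hole = bad_edges(tet[:3])    # -> three border edges
```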
this procedure produced a new model with new values of inverse orientation (0), holes (0), border edges (0), edges (4968) and triangles (3312) (figure 9). at this point it was possible to export the stl file, open it with cura® and print it (figure 10). the final result has numerous imperfections. the surfaces, despite the high level of accuracy (table 2), are not smooth and the material has not been deposited evenly. this is due to the poor quality of the mesh, a problem that can hardly be completely solved with mesh repair software.

figure 7. organization of the model for the printing process through revit 2021® and rhinoceros®.

table 2. values and print settings used on the slicer cura® for the "due case a capri" 3d printing project.
quality: layer height 0.1 mm
infill: density 20 %, pattern grid
support: enabled
time: 12 h
material: pla (polylactic acid), gray
quantity: 40 g

figure 8. the external wall of the "casa grande", imperfections during the printing phase due to errors on the model after the export from revit®.

to obtain higher quality printable 3d files that can still be computed in revit®, an alternative method could rely on rhino.inside.revit®. this would involve modelling the object in rhinoceros® (one of the most suitable software packages for transferring 3d geometries to rapid prototyping tools) [29], importing it into the revit® file, and assigning families and types to the surfaces of the mesh.

6. conclusion

in the scenario of the digitization of archives, the 3d modelling phase allows extending the consultation of archival material, placing drawings and photographs alongside three-dimensional models that can be explored through virtual reality and augmented reality experiences [30], with the application of different digital interfaces, machine learning techniques and computer supports.
these technologies allow archives to catalogue and publicly show their content at any time through interactive supports, allowing curators and scholars to greatly enrich the narrative experience. archives can place visitors virtually within the experience, leaving them free to explore the content actively and helping to build a deeper connection between visitors and the archive [31]. the prototyping phase could, in addition to becoming an ideal context in which to experiment with the flexibility of the different printing techniques, give users the possibility, during the visit to the archive, to consult not only the original drawings but also physical models (possibly decomposable) that allow a better understanding of the three-dimensionality of the artefact. this could be particularly useful and significant in the case of unrealized architectures, such as those treated in this case study, for which the model becomes the first and only physical representation of the artefact. the aim is to demonstrate that the conjectural reconstruction through digital models is an act of clarification of aspects of architecture often left only to the written word; through the construction of new representations, the words take shape in a new figurative corpus; the digital model is not only the virtual image of the building but becomes a possible image, its only existential reality [32]. given the problems that emerged during the preparation of the file for printing and the medium-to-low quality of the final printed model, the application of bim technology to achieve high-quality 3d printing operations still needs to be further investigated.

references

[1] r. spallone, g. bertola, design drawings as cultural heritage. intertwining between drawing and architectural language in the work of aldo morbelli, in: the graphics of heritage. l. agustín (editor). springer, cham, 2020, pp. 73-85.
doi: 10.1007/978-3-030-47983-1_7
[2] g.
bertola, archives enhancement through design drawings survey, bim modeling and prototyping, proc. of the imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, 14-16 september 2020, pp. 67-71. online [accessed 20 march 2022]
https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-013.pdf
[3] r. spallone, g. bertola, f. ronco, sfm and digital modelling for enhancing architectural archives heritage, proc. of the imeko tc-4 international conference on metrology for archaeology and cultural heritage, firenze, italy, 4-6 december 2019, pp. 142-147. online [accessed 20 march 2022]
https://www.imeko.org/publications/tc4-archaeo2019/imeko-tc4-metroarchaeo-2019-27.pdf

figure 9. the stl file before repair operation in netfabb® and the stl file after repair operation.
figure 10. 3d printing process and the final real model.

[4] r. spallone, g. bertola, f. ronco, sfm and digital modelling for enhancing architectural archives heritage, acta imeko, vol. 10, 1 (2021), pp. 224-233.
doi: 10.21014/acta_imeko.v10i1.883
[5] m. incerti, g. mele, u. velo, the productive role of model from a virtual to a physical entity. the communication of 36 projects of never constructed villas, in: mo.di.phy. modeling from digital to physical. innovation in design languages and project procedures. m. pignataro (editor). maggioli editore, santarcangelo di romagna, 2013, pp.
128-141, isbn 978-88-3876-274-1.
doi: 10.1007/978-3-030-33570-0
[6] a. melis, architetti italiani. aldo morbelli, l'architettura italiana, 3 (1942), pp. 49-72. [in italian]
[7] r. spallone, f. natta, h-bim modelling for enhancing modernism architectural archives. reliability of reconstructive modelling for “on paper” architecture, in: digital modernism heritage lexicon, springer tracts in civil engineering. c. bartolomei et al. (editors). springer nature, cham, 2021, pp. 809-829.
doi: 10.1007/978-3-030-76239-1_34
[8] m. lo turco, the digitization of museum collections for the research, management and enhancement of cultural heritage, in: digital & documentation. database and models for the enhancement of heritage, s. parrinello (editor), pavia university press, pavia, 2019, pp. 92-103.
doi: 10.1109/digitalheritage.2018.8810128
[9] london charter (carta di londra). londoncharter.org. online [accessed 25 january 2021]
http://www.londoncharter.org/fileadmin/templates/main/docs/london_charter_2_1_it.pdf
[10] s. nicastro, l'applicazione del bim come sistema informativo localizzato nel processo di conoscenza del patrimonio culturale, in: 3d modeling & bim. applicazioni e possibili futuri sviluppi. t. empler (editor). dei tipografia del genio civile, roma, 2016, pp. 172-183, isbn 978-88-4961-931-7. [in italian]
[11] c. bianchini, survey, modeling, interpretation as multidisciplinary components of a knowledge system, scires-it - scientific research and information technology, 4 (2014), pp. 15-24.
doi: 10.2423/i22394303v4n1p15
[12] a. r. m. cuperschmid, m. m. fabricio, j. franco, hbim development of a brazilian modern architecture icon: glass house by lina bo bardi, heritage, 2 (2019), pp. 1927-1940.
doi: 10.3390/heritage2030117
[13] g. saygi, f. remondino, management of architectural heritage information in bim and gis: state-of-the-art and future perspectives, international journal of heritage in the digital era, 2 (2013), pp. 695-713.
doi: 10.1260/2047-4970.2.4.695
[14] d.
myers, a. dalgity, i. avramides, the arches heritage inventory and management system: a platform for the heritage field, journal of cultural heritage management and sustainable development, 6 (2016), pp. 213-224.
doi: 10.1108/jchmsd-02-2016-0010
[15] f. maietti, r. di giulio, e. piaia, m. medici, f. ferrari, enhancing heritage fruition through 3d semantic modelling and digital tools: the inception project, iop conference series: materials science and engineering, 364 (2018), pp. 1-8.
doi: 10.1088/1757-899x/364/1/012089
[16] cult project. online [accessed 25 february 2021]
http://cult.dicea.unipd.it
[17] v. croce, g. caroti, a. piemonte, m. g. bevilacqua, from survey to semantic representation for cultural heritage: the 3d modeling of recurring architectural elements, acta imeko, vol. 10, 1 (2021), pp. 98-108.
doi: 10.21014/acta_imeko.v10i1.842
[18] l. carnevali, f. lanfranchi, m. russo, built information modeling for the 3d reconstruction of modern railway stations, heritage, 2 (2019), pp. 2298-2310.
doi: 10.3390/heritage2030141
[19] r. brumana, s. della torre, m. previtali, l. barazzetti, l. cantini, d. oreni, f. banfi, generative hbim modelling to embody complexity (lod, log, loa, loi): surveying, preservation, site intervention - the basilica di collemaggio (l'aquila), applied geomatics, 10 (2018), pp. 545-567.
doi: 10.1007/s12518-018-0233-3
[20] p. parisi, m. lo turco, e. c. giovannini, the value of knowledge through h-bim models: historic documentation with a semantic approach, the international archives of the photogrammetry, remote sensing and spatial information sciences, volume xlii-2/w9, 8th intl. workshop 3d-arch “3d virtual reconstruction and visualization of complex architectures”, bergamo, italy, 6-8 february 2019, pp. 581-588.
doi: 10.5194/isprs-archives-xlii-2-w9-581-2019
[21] l. sass, r. oxman, materializing design: the implications of rapid prototyping in digital design, design studies, 27(3) (2006), pp. 325-355.
doi: 10.1016/j.destud.2005.11.009
[22] r. scopigno, p. cignoni, n. pietroni, m. callieri, m. dellepiane, digital fabrication techniques for cultural heritage: a survey, computer graphics forum, 36(1) (2017), pp. 6-21.
doi: 10.1111/cgf.12781
[23] p. dudek, fdm 3d printing technology in manufacturing composite elements, archives of metallurgy and materials, 58(4) (2013), pp. 1415-1418.
doi: 10.2478/amm-2013-0186
[24] g. ryder, b. ion, g. green, d. harrison, b. wood, rapid design and manufacture tools in architecture, automation in construction, 11 (2002), pp. 279-290.
doi: 10.1016/s0926-5805(00)00111-4
[25] l. sass, r. oxman, materializing design: the implications of rapid prototyping in digital design, design studies, 27(3) (2006), pp. 325-355.
doi: 10.1016/j.destud.2005.11.009
[26] m. hamid, o. tolba, a. el antably, bim semantics for digital fabrication: a knowledge-based approach, automation in construction, 91 (2018), pp. 62-82.
doi: 10.1016/j.autcon.2018.02.031
[27] autodesk, bim and digital fabrication. online [accessed 28 february 2021]
https://images.autodesk.com/latin_am_main/files/revit_bim_and_digital_fabrication_mar08.pdf
[28] m. sakin, y. c. kiroglu, 3d printing of buildings: construction of the sustainable houses of the future by bim, proc. of the 9th international conference on sustainability in energy and buildings, chania, crete, greece, 5-7 july 2017, pp. 702-711.
doi: 10.1016/j.egypro.2017.09.562
[29] m. stavrić, p. šiđanin, b. tepavčević, digital technology software used for architectural modelling, in: architectural scale models in the digital age, springer, vienna, 2013, pp. 161-183, isbn 978-3-7091-1447-6.
[30] m. lo turco, a. marotta, modellazione 3d, ambienti bim, modellazione solida per l'architettura e il design, in: uno (nessuno) centomila | prototipi in movimento. trasformazioni dinamiche del disegno e nuove tecnologie per il design. m. rossi, a. casale (editors). maggioli editore, sant'arcangelo di romagna, 2014, pp.
17-24, isbn 978-88-9160-449-1. [in italian]
[31] v. palma, tra spazio reale e realtà virtuale, in: progetto e data mining, l. siviero (editor), lettera ventidue, siracusa, 2019, pp. 88-99, isbn 978-88-6242-390-8. [in italian]
[32] f. maggio, architetture nel cassetto, in: territori e frontiere della rappresentazione. a. di luggo, p. giordano, r. florio, l. m. papa, a. rossi, o. zerlenga (editors), gangemi editore, roma, 2017, pp. 451-458, isbn 978-88-4923-507-4. [in italian]

acta imeko, september 2014, volume 3, number 3, 63-67, www.imeko.org

experimental evaluation of the air trapped during the water entry of flexible structures

riccardo panciroli
1, giangiacomo minak 2

1 università degli studi niccolò cusano, via don carlo gnocchi 3, 00166 roma, italy
2 din - alma mater studiorum, viale del risorgimento 2, 40136 bologna, italy

section: research paper
keywords: hull slamming; hydro-elasticity; air trapping; flexible structures
citation: riccardo panciroli, giangiacomo minak, experimental evaluation of the air trapped during the water entry of flexible structures, acta imeko, vol. 3, no. 3, article 13, september 2014, identifier: imeko-acta-03 (2014)-03-13
editor: paolo carbone, university of perugia
received june 25, 2013; in final form july 13, 2014; published september 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the office of naval research through the grant n00014-12-1-0260
corresponding author: giangiacomo minak, e-mail: giangiacomo.minak@unibo.it

1. introduction

predicting the impact-induced stresses during the water entry of flexible structures is of major interest for the design of marine structures. during the water entry of flexible structures, several fluid-structure interaction (fsi) phenomena might appear [1]-[9]; the most important are cavitation, air trapping, and repeated impact and separation between the fluid and the structure. the occurrence of such fsi phenomena might strongly influence the impact dynamics. although air trapping is a well-known phenomenon [10], to the authors' knowledge none of the previous works in the literature has presented a methodology to quantify it and to evaluate the effect of the structural deformation on it. the present work faces this challenge, proposing the use of an optical technique to achieve this aim.
optical techniques have recently been utilized to measure the structural deformation of compliant wedges entering the water [11]. in this work, we first develop a digital imaging technique for the post-processing of high-speed images to isolate the regions of the fluid where air is trapped. this methodology is then used to study the evolution of the air trapping in time and to dissect the roles of impact velocity and structural deformation. although there are no previous results in the literature against which to validate the proposed method, the present results are found to be in good agreement with the expectations.

2. experimental setup

experiments are conducted on a drop-weight machine for water impacts with a maximum impact height of 4 m. wedges are composed of two panels joined together, and to the falling sledge along one edge, so as to assume a cantilever boundary condition, where the boundary corresponds to the keel of the wedge. panels of various materials and thicknesses can be mounted on the

abstract

deformable structures entering the water might experience several fluid-structure interaction (fsi) phenomena; air trapping is one of these. by definition, it consists of air bubbles trapped between the structure and the fluid during the initial stage of the impact. these bubbles might reduce the peak impact force. this phenomenon is characteristic of the water entry of flat-bottom structures; above a deadrise angle of 10°, air trapping is negligible. in this work, we propose a methodology to evaluate the amount of air trapped in the fluid during the water entry. experiments are performed on wedges with varying stiffness, entry velocity, and deadrise angle. a digital image post-processing technique is developed and utilized to track the air trapping mechanism and its evolution in time.
interesting results are found on the effect of the impact velocity and the structural deformation on the amount of air trapped during the slamming event.

sledge at any deadrise angle (β in figure 1) ranging smoothly from 0° to 50°, where 0° and 90° are the extreme cases of a flat panel and a vertical blade, respectively. teflon insets minimize friction between the sledge and the prismatic rails. the sledge holds wedges 300 mm long and 250 mm wide. the falling body hits the fluid at the centre of a tank 1.2 m wide, 1.8 m long and 1.1 m deep. the tank was filled with water only up to 0.6 m to prevent the water waves generated during the impact from overflowing. the drop height, defined as the distance between the keel and the water surface, ranged from 0.5 m to 3 m in 0.25 m increments. impact acceleration is measured by a v-link microstrain wireless accelerometer (±100 g) located at the tip of the wedge. all reported accelerations are referenced to 0 g for the free-falling phase. the sampling frequency is set to its maximum of 4 khz. the entry velocity is recorded by a laser sensor (με ils 1402) capturing the sledge position over 350 mm of ride at a frequency of 1.5 khz with a resolution of 0.2 mm; the entry velocity is obtained by numerical differentiation of the position. a high-speed camera is utilized to capture images during the water entry. the camera is located to view the wedge from the side, as shown in figure 2. the capturing frequency is set to 1.5 khz with a resolution of 1200 × 1024 pixels. a vertical transparent screen is located inside the water tank just before the wedge (clearance is ≈ 2 mm) to prevent fluid spray in the y-direction, which would have made it impossible to see the evolution of the fluid jet (figure 3) generated during the water entry.
as the water on the front side of the screen remains still during the impact, the pictures show both the still water surface (on the front side of the screen) and the fluid jet (on the back side of the screen), as shown in figure 3.

aluminium (a), e-glass (mat)/vinyl ester (v) and e-glass (woven)/epoxy (w) panels 2 mm thick were used. the composite panels were produced by vartm through infusion of vinyl ester resin into an e-glass fibre mat, while the e-glass (woven 0°/90°)/epoxy panels were produced in an autoclave. the assumed material properties and the measured fundamental frequency of the panels are listed in table 1. for a given material and panel thickness, the impact variables are the deadrise angle β (ranging from 4° to 35°) and the falling height (ranging from 0.25 m to 2.5 m). during the experiments, structural deformations are recorded by strain gauges located at various positions, while an accelerometer and a laser position sensor record the impact dynamics.

the wedges are built as open structures: the sides of the panels are open and the water is free to flow in from the sides during the impact (this setup is necessary to allow higher flexibility), which makes the entire structure theoretically negatively buoyant. however, the impact is very short, typically lasting less than 40 ms (the structure enters the water with an entry velocity in the range of 4 m/s to 6 m/s). in such a short time the water does not have enough time to flow into the wedge from the sides, so the wedge behaves like a closed shape, with a positively buoyant behaviour (the acceleration range is approximately 20 g to 100 g, increasing with the entry speed). the wedges are manually lifted to the desired height and released. the laser sensor, strain gauges, and accelerometer signals are triggered together in a single manual start.
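the laser-position record drives both the entry-velocity estimate and the impact-time calibration described in the text. the sketch below is a minimal illustration and not the authors' code: it assumes the position samples arrive as a numpy array in mm at the 1.5 khz sensor rate, and the smoothing-window length and free-surface coordinate are illustrative parameters.

```python
import numpy as np

FS = 1.5e3  # laser sampling frequency in Hz

def entry_velocity(position_mm, window=9):
    """Entry velocity (m/s) by numerical differentiation of the laser
    position record; a moving average tames the quantisation noise
    (0.2 mm resolution) that differentiation would otherwise amplify."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(position_mm, kernel, mode="same")
    return np.gradient(smoothed) * FS / 1000.0  # mm/sample -> m/s

def impact_index(position_mm, surface_mm):
    """Index of the first sample at which the keel reaches the known
    free-surface coordinate (readings assumed to grow as the sledge
    falls); used to calibrate the initial impact time."""
    reached = np.asarray(position_mm) >= surface_mm
    if not reached.any():
        raise ValueError("keel never reaches the free surface")
    return int(np.argmax(reached))
```

for a free fall from 1 m, for example, the velocity estimated at the detected impact index should approach √(2 · 9.81 · 1) ≈ 4.4 m/s.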
since the position of the sledge relative to the free surface is known, the initial impact time is calibrated during post-processing on the basis of the position recorded by the laser sensor.

3. preliminary experimental results

in the following, we display some images captured during the water entry of wedges with deadrise angles higher than 10°. the examples evaluate the trapped air for deadrise angles decreasing from high to low; two impact velocities are shown for each deadrise angle. figure 4 shows the water entry of a wedge with a deadrise angle of 30° entering the water at 4.2 m/s and 6 m/s. in both cases no air is trapped in the fluid, as indicated by the smooth, uniform colour of the fluid region.

figure 1. conceptual scheme of the wedge used for experiments. l = panel length, β = deadrise angle. solid line: undeformed panels; dashed line: expected deformation during impact.

figure 2. sketch of the experimental set-up. the wedge is hinged to the sledge, which enters the water with purely vertical velocity. the high-speed camera is located on the side of the water tank.

figure 3. sample image captured by the high-speed camera. the still water above the transparent screen is clearly visible. the lighter dots visible in the fluid are air bubbles, used as tracers in the piv analysis.

table 1. estimated material properties and measured fundamental frequency of the panels.

material | e1 = e2 [gpa] | poisson ratio | density [kg/m³] | fundamental frequency [hz]
6068 t6 | 68 | 0.32 | 2700 | 18.01
e-glass/vinyl ester | 20.4 | 0.28 | 2050 | 9.77
e-glass/epoxy | 30.3 | 0.28 | 2015 | 19.69

figure 5 shows the water entry of a wedge with a deadrise angle of 20° entering the water at 4.2 m/s and 6 m/s.
while in the first image there is no evidence of air trapped in the fluid, the wedge impacting at the higher speed (right picture) traps some air in the form of small bubbles that appear at the middle of the wedge. although present, air trapping is still negligible for this deadrise angle. wedges with a deadrise angle of 15° (shown in figure 6) give results similar to the previous example: even at this deadrise angle only a negligible amount of air bubbles is trapped in the fluid. the air bubbles are spread over a wider region in the case of a deadrise angle of 12°, as shown in figure 7.

down to a deadrise angle of 12°, no air cushions are formed: air is trapped in the form of small bubbles dispersed in the fluid, and its effect can be neglected. air cushions are instead formed for deadrise angles lower than 12°, indicating that our results are in line with the literature. in the following sections, the research therefore focuses on the water entry of wedges with deadrise angles lower than 12°, with particular effort devoted to evaluating the amount of air trapped in the fluid, its evolution in time, and the effect of the structural deformation on it. as a first step, a digital image post-processing methodology capable of evaluating the amount of trapped air has been developed; it is presented in the next section.

4. the use of an optical method to account for the air trapped during the water entry

to track the evolution of the air trapped during the water entry, a digital image technique has been developed to post-process the high-speed images. the technique relies on the property of the water surface of diffracting light: when air bubbles entrapped in the fluid are illuminated by a light source, their surface diffracts the light, making them brighter than the surrounding fluid. images with a resolution of 1200×1024 pixels are captured by the camera at a rate of 1.5 khz. an example of a typical image where air is trapped in the fluid is shown in figure 8.
the images (originally in colour) are converted to greyscale, i.e. each image is represented by a matrix whose cells assume a value between 0 and 255, where 0 corresponds to a fully black pixel and 255 to a fully white pixel; all the values in between define the grey levels. the grey-level intensity vs. pixel count is plotted as a histogram (figure 9). we assume the amount of air trapped below the structure to be related to the number of pixels exceeding a certain grey level. each pixel corresponds to an area of 0.23×0.23 mm², since the calibration showed that 1305 pixels correspond to a length of 300 mm.

on the basis of an independent study of the role of the threshold level in the computed amount of trapped air, we choose 200 as the threshold above which a pixel is considered air. this threshold level was found to be strongly affected by the lighting parameters used during the image acquisition, namely the diaphragm aperture and the exposure time (inversely proportional to the capturing frequency). however, we note that, for a given diaphragm aperture and exposure time, variations of the threshold level from the reference value by 20% have negligible effects on the results.

a binary image is then built: pixels below the threshold are set to black (0), while pixels exceeding the threshold are set to white (255), yielding a black-and-white picture. to smooth the images and clear possible lone black pixels isolated in wide white regions, an algorithm based on the morphological reconstruction described in [12] is applied. the general procedure outlined in [13] is then applied to compute the white regions; as output, the white regions are listed together with their perimeter and area.

small air bubbles are dispersed in the water even before the impact. a threshold on the minimum region area is therefore used to exclude these small bubbles from the evaluation of the total amount of air trapped during the water entry: white regions with an area smaller than 12 pixels are discarded, as this value was found to correspond to the area of the air bubbles already present in the fluid before the impact. the total area of air trapped below the structure is then evaluated, as it is assumed to be proportional to the number of pixels counted with the proposed method.

figure 4. image of a wedge with a deadrise angle of 30° entering the water at 4.2 m/s (left) and 6 m/s (right). the fluid shows a uniform colour, meaning that no air has been trapped during the water entry.

figure 5. image of a wedge with a deadrise angle of 20° entering the water at 4.2 m/s (left) and 6 m/s (right). there is no air trapped at the lower entry velocity, while some very small air bubbles appear at the larger velocity.

figure 6. image of a wedge with a deadrise angle of 15° entering the water at 4.2 m/s (left) and 6.7 m/s (right). in both cases some air is trapped in the fluid in the form of small air bubbles.

figure 7. image of a wedge with a deadrise angle of 12° entering the water at 5.2 m/s (left) and 6.7 m/s (right). air trapping is much more visible than in the previous cases, since a concentrated light is used to highlight the air bubbles.

figure 8. example of an image where some air is trapped in the fluid during the impact. air appears as a bright region due to the light diffracted by the surface of the air bubbles.

figure 9. example of a histogram of grey level (0 to 255) vs. pixel count for the greyscale image.

5. on the effect of the entry velocity on the trapped air

this section investigates the effect of the entry velocity on the amount of air trapped during the initial stage of the impact.
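the thresholding, clean-up, labelling and area-filtering steps described above can be sketched as follows. this is a minimal illustration, not the authors' implementation: scipy's binary opening stands in for the morphological reconstruction of [12], and `ndimage.label`/`ndimage.sum` stand in for the region-listing procedure of [13].

```python
import numpy as np
from scipy import ndimage

MM_PER_PIXEL = 300.0 / 1305.0  # calibration: 1305 pixels span 300 mm
THRESHOLD = 200                # grey level above which a pixel counts as air
MIN_AREA_PX = 12               # area of the bubbles already present before impact

def trapped_air_area(grey):
    """Estimate the trapped-air area (mm^2) in one greyscale frame
    (2-D uint8 array, 0 = black, 255 = white)."""
    binary = grey > THRESHOLD                  # binarise at the grey threshold
    cleaned = ndimage.binary_opening(binary)   # stand-in for the morphological step
    labels, n = ndimage.label(cleaned)         # connected white regions
    areas = ndimage.sum(cleaned, labels, index=np.arange(1, n + 1))
    # discard pre-existing small bubbles, then convert pixels to mm^2
    return areas[areas >= MIN_AREA_PX].sum() * MM_PER_PIXEL ** 2
```

applying this frame by frame and plotting the returned area against the entry depth v0·t gives the kind of time traces shown in figures 10 to 12.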
using the technique presented in the previous section, it is possible to observe the time trace of the trapped air. the analysis is performed on wedges with a deadrise angle of 4° entering the water in free fall from several impact heights, namely 50, 100, 150, and 200 cm. the wedges are all 2 mm thick and are made of three different materials: aluminium, woven glass/epoxy and mat e-glass/vinyl ester. in this way the impact conditions were similar, but the bodies presented different flexibility owing to the different elasticity moduli of the three materials (namely 68 gpa, 30.3 gpa and 20.4 gpa). a detailed characterization of the specimens can be found in [14]. it was thus possible to study the effect of the structural deformation on the air trapped during the impact. figures 10 to 12 show the evaluated trapped air versus the entry depth v0t, where v0 is the velocity at the beginning of the impact.

6. effect of the structural deformation on the air trapped during the water entry

in the case of the water entry of flexible structures, the structural deformation may alter the air trapping mechanism. in particular, considering simple wedges, the deadrise angle is locally modified during the impact with the water; this could lead air bubbles to coalesce and form a cushion or, conversely, let the air escape from an already present cushion. an example of the evolution of the wedge deflection during the water impact can be found in [8]. a collection of the experimental results for the different wedges impacting from the same heights is presented in figure 13. the experimental findings suggest that the flexibility of the wedge has negligible effects on the air trapping, as the wedges undergo large deformations only once the air has already been entrapped in the fluid.
even if the amount of trapped air is quite similar in all three cases, it may be noted that, at the beginning of the water entry and for high impact energies, stiffer wedges show a sharper peak of entrapped air than the more flexible ones. the more flexible wedges, instead, apparently lose some air from the cushions in the final part of the entry. further investigation is needed to explore and confirm these findings.

figure 10. experimental evaluation of the trapped air in time. aluminium wedge (a), 2 mm thick, deadrise angle = 4°, for variable impact height. the impact heights in the legend are in cm; the product v0t is in mm.

figure 11. experimental evaluation of the trapped air in time. composite wedge (w), 2 mm thick, deadrise angle = 4°, for variable impact height. the impact heights in the legend are in cm; the product v0t is in mm.

figure 12. experimental evaluation of the trapped air in time. composite wedge (v), 2 mm thick, deadrise angle = 4°, for variable impact heights. the impact heights in the legend are in cm; the product v0t is in mm.

7. conclusions

in this work we propose a technique to quantify the amount and the evolution in time of the air trapped during the water entry of flexible structures. first, a methodology based on the analysis of high-speed images is proposed and discussed. the results are found to be in agreement with expectations, although only a qualitative comparison can be made, as there are no other experimental or numerical results in the literature to compare against. on the basis of the experimental findings, air trapping seems to attain its maximum at the beginning of the impact, when the velocities are higher.
however, bodies need time to deform: by the time the wedge deformations are large enough to modify the deadrise angle, all of the entrapped air has already been trapped in the fluid. on the basis of these preliminary results, there is no remarkable experimental evidence of an influence of the structural deformation on the amount of air trapped during the impact. further studies are needed to confirm these observations. the analysis of the images showed that wedges with deadrise angles greater than 10° entrap air in the form of small bubbles spread over a region that shrinks as the deadrise angle increases. in the cases investigated, the structural deformation was not capable of lowering the deadrise angle enough to switch the air trapping mechanism from small air bubbles (with negligible effect on the hydrodynamic pressure) to an air cushion, which might have a strong effect on the hydrodynamic pressure. further investigations in this direction are needed to fully understand this phenomenon.

the experimental results show a saturating effect of the impact energy on the amount of entrapped air. furthermore, the role of the stiffness is very limited, as air trapping is found to relate mainly to the initial deadrise angle. further investigations are needed for cases with very low deadrise angles and different geometries. we note that the recently developed methodologies to reconstruct the hydrodynamic pressure in water entry problems from the flow kinematic components [15]-[17] can be adopted in the future to quantify the influence of air trapping on the hydrodynamic pressure.

acknowledgement

support from the office of naval research (grant n0001412-1-0260) and the advice of dr. y. rajapakse are gratefully acknowledged.

references

[1] s. abrate, hull slamming, appl. mech. rev. 64 (2013) p. 060803.
[2] a. korobkin, e. i. părău, j. m. vanden-broeck, the mathematical challenges and modelling of hydroelasticity, philos. trans. a math. phys. eng. sci. 369 (2011) pp. 2803-2812.
[3] s. e. hirdaris, p. temarel, hydroelasticity of ships: recent advances and future trends, proc. inst. mech. eng. part m j. eng. marit. environ. 223 (2009) pp. 305-330.
[4] x. chen, y. wu, w. cui, j. j. jensen, review of hydroelasticity theories for global response of marine structures, ocean eng. 33 (2006) pp. 439-457.
[5] o. m. faltinsen, the effect of hydroelasticity on ship slamming, philos. trans. r. soc. a math. phys. eng. sci. 355 (1997) pp. 575-591.
[6] o. m. faltinsen, hydroelastic slamming, j. mar. sci. technol. 5 (2000) pp. 49-65.
[7] r. panciroli, water entry of flexible wedges: some issues on the fsi phenomena, appl. ocean res. 39 (2012) pp. 72-74.
[8] r. panciroli, s. abrate, g. minak, a. zucchelli, hydroelasticity in water-entry problems: comparison between experimental and sph results, compos. struct. 94 (2012) pp. 532-539.
[9] r. panciroli, s. abrate, g. minak, dynamic response of flexible wedges entering the water, compos. struct. 99 (2013) pp. 163-171.
[10] a. korobkin, a. s. ellis, f. t. smith, trapping of air in impact between a body and shallow water, j. fluid mech. 611 (2008) pp. 365-394.
[11] m. cooper, l. mccue, experimental study on deformation of flexible wedge upon water entry, in: 9th symposium on high speed marine vehicles.
[12] p. soille, morphological image analysis: principles and applications, springer-verlag, 1999, pp. 173-174.
[13] r. m. haralick, l. g. shapiro, computer and robot vision, 1st ed., addison-wesley longman, boston, ma, usa, 1992.
[14] r. panciroli, dynamic failure of composite and sandwich structures, vol. 192, springer netherlands, dordrecht, 2013.
[15] a. nila, s. vanlanduit, s. vepa, w. van paepegem, a piv-based method for estimating slamming loads during water entry of rigid bodies, meas. sci. technol. 24 (2013) p. 045303.
[16] b. w. van oudheusden, piv-based pressure measurement, meas. sci. technol. 24 (2013) p. 032001.
[17] r. panciroli, m.
porfiri, evaluation of the pressure field on a rigid body entering a quiescent fluid through particle image velocimetry, exp. fluids 54 (2013) p. 1630.

figure 13. evolution of the entrapped air in time for the water entry of wedges with various flexural stiffness (a = aluminium, w = woven glass/epoxy, v = mat glass/epoxy) impacting from two different impact heights (100 cm, top, and 150 cm, bottom). the product v0t is in mm.

collaborative systems for telemedicine diagnosis accuracy
acta imeko issn: 2221-870x september 2021, volume 10, number 3, 192-197

jacques tene koyazo1, moise avoci ugwiri2, aimé lay-ekuakille3, maria fazio1, massimo villari1, consolatina liguori2
1 department of mathematics and computer science, physical sciences and earth science, university of messina, italy
2 department of industrial engineering, university of salerno, fisciano 84084, italy
3 department of innovation engineering, university of salento, lecce 73100, italy

section: research paper

keywords: signal processing; biomedical; collaborative edge; cloud computing; accuracy; theranostics; measurement

citation: jacques tene koyazo, moise avoci ugwiri, aimé lay-ekuakille, maria fazio, massimo villari, consolatina liguori, collaborative systems for telemedicine diagnosis accuracy, acta imeko, vol. 10, no. 3, article 26, september 2021, identifier: imeko-acta-10 (2021)-03-26

section editor: francesco lamonaca, university of calabria, italy

received june 12, 2021; in final form august 5, 2021; published september 2021

corresponding author: jacques tene koyazo, email: jacquestene2013@gmail.com

1. introduction

the spectacular progress in communication technology has boosted telemedicine, which is emerging as a preferred practice in many medical areas [1].
one of the benefits brought by this new medical discipline concerns, for instance, stomatological diagnosis: thanks to communication and computer technology, stomatological diagnosis can provide safe and reliable remote diagnosis, counselling care, distance education and other information services for medical activities [2]. in recent years, various standards for the regulation of healthcare research have been developed. evidence in the literature has shown that studies based on ieee and iso standards [3] enable good compliance in terms of collaboration with healthcare industries, government agencies and research institutes towards developing novel approaches and methods for handling and controlling diseases.

implementing a reliable collaboration system in telemedicine has tremendous advantages: it breaks distance restrictions, so that different medical institutions can provide diagnoses; it improves the accuracy of exchanged medical advice; and it provides a cooperative working environment suitable for sharing the data and information that help in dealing with emergencies. telemedicine has thus demonstrated huge prospects thanks to the continuous development of information and telecommunication technology [4].

apart from privacy and security concerns, medical data exchange still faces transmission problems. according to statistics provided by the second xiangya hospital [5], most medical data sets exceed 1 gb, and these massively generated data often grow at a rate exceeding the speed of the expansion of mobile iot bandwidth. this aspect is confirmed by cisco's yearbook report [6], which shows that they can account for more than 85% of data traffic.

this paper is built on the assumption that sensors are used to collect comprehensive physiological information from targeted patients and that the cloud is used to store and analyse this information.
the data from the latter process are sent to the service provider for deeper investigation. at the same time, this information can be used to remotely monitor the health condition of the patient. various sensors are nowadays used in clinical care; storing the data from these sensors in the cloud for more complex analysis and sharing the results with the medical professional for further examination is the core idea behind this paper.

abstract
the transmission of medical data and the possibility for distant healthcare structures to share experiments about a given medical case raise several conceptual and technical questions. good remote healthcare monitoring deals with more problems in personalized health data processing than the traditional methods nowadays used in several hospitals around the world. the adoption of telemedicine in the healthcare sector has significantly changed medical collaboration. however, to provide good telemedicine services through new technologies such as cloud computing, cloud storage, and so on, a suitable and adaptable framework should be designed. moreover, in the chain of medical information exchange between requesting agencies, including physicians, a secure and collaborative platform enhances the decision-making process. this paper provides an in-depth literature review of the interaction between telemedicine and cloud-based computing. furthermore, the paper proposes a framework that can allow various research organizations, healthcare sectors, and government agencies to log data, develop collaborative analysis, and support decision-making. the electrocardiogram (ecg) and electroencephalogram (eeg) case studies demonstrate the benefit of the proposed approach in data reduction and high-fidelity signal processing at a local level; this makes it possible for the extracted characteristic features to be communicated to the cloud database.

2. diagnosis in telemedicine through a collaborative framework: a literature survey

2.1. general overview

the internet of things (iot) is considered the most fundamental aspect of a collaborative medical framework, because it allows healthcare applications to fully exploit the iot and cloud computing [7]-[9]. the framework also provides protocols to support the communication and broadcast of raw medical signals from different sensors and smart devices to a network of fog nodes. a good insight into a collaborative framework has been introduced by yang et al. [10] and almotiri et al. [11], who suggested an architecture able to collect data on the patient's health through several sensors, transfer them to a remote server for processing, and display the results. figure 1 shows the essential components of a collaborative system for telemedicine diagnosis.

in an ideal situation, sensors constantly collect the patient's health condition and vital information. the collected data are sent to hand-held devices via an edge router and are then analysed and stored on a cloud computing platform for further evaluation, as presented in figure 2. it is worth mentioning that the sensors continuously send the patient's vital signs as raw information, such as electromyography (emg), electrocardiogram (ecg), electroencephalogram (eeg), body temperature, blood glucose (bg) and so on. a good data-exchange platform architecture ensures that all sensors operate smoothly so that users can interact with them easily. ecg and eeg are demonstrated in section 3 of this paper as show cases.

2.2.
cloud computing for telemedicine

cloud computing is currently one of the most discussed subjects in information technology. the computing resources it allocates are on-demand, scalable and secure for users. in work by sultan et al. [12], cloud computing is described as the backbone of iot health systems. cloud computing has the advantage of enabling the sharing of information among health professionals, research institutes and patients in a more structured and organized manner, which minimizes the risk of losing medical records [13]. figure 3 presents the platform of a remote healthcare monitoring system based on cloud computing. each layer of the platform is designed to handle a specific task and can be implemented so as to serve various healthcare queries.

the cloud storage and multiple-tenant access control form the "master layer" of the platform: it collects the healthcare data from sensors, as presented in figure 2 (layer 1). the healthcare annotation layer addresses data heterogeneity, a significant challenge in signal processing: because the sensors used for telemedicine generate data of various types, automating data sharing among agencies is complex. one approach is to create open linked life data sets to annotate personal healthcare data and integrate dispersed data in a patient-centric pattern for cloud applications [14]. the data analysis layer processes the data stored in the cloud to assist in clinical decision making; as can be seen in figure 4, mining algorithms are constructed to induce clinical paths from personal healthcare data.

it is important to point out that most cloud data centres are geographically centralized and located far from end users [15]. for telemedicine applications, which often require immediate real-time feedback, communication between users and remote cloud servers causes major issues such as round-trip delays and network congestion. these observations led to new evolutions in cloud computing, such as fog computing and big data; cloud computing was thereby extended to support highly scalable computing platforms [16]. as reported in [6], cisco was the first to introduce the fog computing concept as a solution to extend the computing power and storage capacity of the cloud to the network edge. fog computing is closer to the devices and possesses a dense geographical distribution, so applications and services can be placed at the edge of the local network, which reduces bandwidth usage and latency. in this way the cloud gets closer to the user and the processing is done locally, minimizing network latency and bandwidth usage. the main fog computing architecture is illustrated in figure 4. the interest behind the use of a fog computing layer is discussed in depth by many authors, who analysed the role of fog computing in implementing healthcare monitoring frameworks and proposed a mediator layer to receive raw information from sensor devices and then store it in the cloud.

figure 1. basic subsets of a collaborative framework for healthcare in telemedicine.

figure 2. a typology for remote patient monitoring using body sensors.

figure 3. cloud-computing-based remote health monitoring: functional platform.

3. case study and discussions

embracing collaborative edge computing helps healthcare organizations gain visibility over patient care cycles. sharing information among practitioners (physicians or other health professionals) can give organizations a clear big-picture view of what is going on and how to deal with it.
moreover, this technology effectively contributes to the organization of the health system (hospital or health centre) from the point of view of reliable data exchange, in particular for the urgent critical cases presented by patients, such as neurological or cardiological problems. figure 5 represents the proposed collaboration architecture of two healthcare centres, a and b. we consider two patients under ecg and eeg diagnosis, as shown in the diagram. this part constitutes the first block of the diagnosis based on the designed architecture, specifically for the ecg and eeg study cases. the diagnosis is made for two patients, each in one of these healthcare centres, where their biosignals are captured. the second block of the process is composed of three compartments, each playing a specific role: the acquisition in the local interface together with the data received from the other health centre, the data management with local storage, and finally the analysis and interpretation of the data in the third compartment (data analysis).

figure 4. main attributes of a fog computing architecture: an illustration.

figure 5. the proposed collaborative architecture.

this collaboration between the network edges is distributed over the two healthcare centres, named edge a and edge b, to ensure the reliability, storage, and processing of geographically distributed data. one of the main aims of this architectural configuration is the exchange of diagnoses between two (or more) healthcare centres. this exchange is necessary to carry out specific comparisons amongst patients of the same age range and with the same characteristics. given the large file sizes involved, the proposed architecture also allows remote access without moving files, which is possible thanks to the cloud.

3.1.
ecg monitoring

in medical applications, ecg sensors record the electrical activity of the heart at rest and deliver information about heart rate (hr) and rhythm. the recorded information is crucial for the early prediction of heart enlargement due to hypertension, or of a heart attack. the integration of the iot into ecg for telemedicine practice has tremendous benefits and a high potential to warn users about heart-rate abnormalities, which are vital signs for early heart disease detection.

in this paper, two patients have been considered. figure 6 presents the output ecg signal of the two patients, where the filtering sequence uses the pan-tompkins qrs detector. for comparison purposes, the signals have been downsampled to 250 hz. each window represents 5.5 s of data, and each stage introduces a delay, with a cumulative delay of 40 samples. for optimal filtering, a butterworth filter of order 4 with a passband between 4 hz and 20 hz is used in this study. the proposed processing algorithm is implemented in matlab.

figure 7 shows the respiratory rate, which is critical and one of the vital signs used in telemedicine. the r-r interval has been used for its detection: owing to the change in heart rate synchronized with respiration, the r-r interval of the ecg is short during inspiration and long during expiration [18]. however, the ecg morphology can vary greatly between patients, as shown in figure 7; a normal heartbeat for one patient can resemble an abnormal beat for another. the hamilton and tompkins algorithms [19] are therefore used for peak energy amplitude detection rather than for the detailed morphology of the ecg.

3.2. eeg monitoring

eeg signals are known to be an interesting source of information for remote healthcare and can be associated with ecg to enhance the diagnosis task [12].
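before turning to the eeg case, the ecg chain of section 3.1 — a 4th-order butterworth band-pass between 4 hz and 20 hz followed by amplitude-based r-peak picking and r-r interval extraction — can be sketched as below. this is a minimal illustration, not the paper's matlab implementation: a zero-phase `filtfilt` pass replaces the causal stages (so the cumulative 40-sample delay does not appear), and a plain amplitude/refractory peak picker stands in for the hamilton-tompkins detector of [19].

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250.0  # Hz, the downsampled rate used for the comparison

def bandpass_ecg(x, low=4.0, high=20.0, order=4, fs=FS):
    """Zero-phase Butterworth band-pass (4-20 Hz, order 4)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rr_intervals(ecg, fs=FS):
    """Detect R peaks on the filtered trace and return R-R intervals in s.
    Peaks must exceed half the maximum amplitude and be at least 0.3 s
    apart (a crude refractory period)."""
    filtered = bandpass_ecg(np.asarray(ecg, dtype=float))
    peaks, _ = find_peaks(filtered, height=0.5 * filtered.max(),
                          distance=int(0.3 * fs))
    return np.diff(peaks) / fs
```

the mean of `60 / rr_intervals(ecg)` then gives the heart rate in beats per minute, and the slow modulation of the r-r series carries the respiratory information discussed above.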
the issue, though, is to apply appropriate techniques to extract information prior to uploading to the cloud, so that medical agencies can take advantage of it for proper and accurate diagnosis. this paper adopted the filter diagonalization method (fdm) exploited by a. lay-ekuakille et al. [20] to extract relevant parameters, such as complex frequencies, from a given window. eeg signals can be seen as sums of damped exponentials, which makes it suitable to apply the fdm and/or decimated signal diagonalization (dsd). the band-limited decimated signal can be modeled as

c_n^bld = ∑_{k=1}^{K} d_k e^{-j ω_k n τ_D} , im ω_k < 0 , (1)

where ω_k and d_k are the complex frequencies and amplitudes, respectively. if m denotes the times at which the signal is sampled, then for each of the m signals the diagonalization for the fdm algorithm can be implemented as follows:

c_n^bld = ∑_{k=1}^{K} d_k e^{-j ω_k n τ_D} ⇒ U_1 B_1k = u_1k U_0 B_1k (2)

c_2n^bld = ∑_{k=1}^{K} d_2k e^{-j ω_k n τ_D} ⇒ U_1 B_2k = u_2k U_0 B_2k . (3)

figure 6. processing sequences for patient 1 and patient 2. figure 7. filtered, smoothed and processed ecg for patient 1 and patient 2. in (2) and (3), the complex frequencies are extracted from the eigenvalues u_1k and u_2k. the reader is encouraged to find more details on the fdm algorithm structure in [21]. as for the ecg case, the eeg is considered for patient 1 and patient 2, where patient 1 is the unsuspected child and the latter is the suspected child. figure 8 presents the bispectrum of the signal ranging from sample 10801 up to sample 12000. the bispectrum built for the second interval for patient 2 can be seen in figure 9. an important clinical feature for the suspected and unsuspected cases (epilepsy in this case) is presented in figure 10 and figure 11 for patient 1 and patient 2, respectively. as stated above, cloud computing provides a secure platform for two-way sharing of research data across different agencies or institutions.
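equations (2) and (3) reduce frequency extraction to a generalized eigenvalue problem. a minimal python sketch of this idea, using a standard hankel-matrix (matrix-pencil style) construction on a synthetic sum of damped exponentials, is shown below; it illustrates the principle rather than the authors' exact fdm implementation.

```python
import numpy as np

# synthetic signal: c_n = sum_k d_k * exp(-1j * w_k * n * tau), im(w_k) < 0
tau = 1.0
w_true = np.array([0.5 - 0.02j, 1.2 - 0.05j])   # complex frequencies
d_true = np.array([1.0, 0.7])                   # complex amplitudes
n = np.arange(64)
c = (d_true[None, :] * np.exp(-1j * np.outer(n, w_true) * tau)).sum(axis=1)

# hankel matrices: U0 from c_0.., U1 shifted by one sample
K, M = 2, 16                                    # model order, window length
U0 = np.array([[c[i + j] for j in range(M)] for i in range(M)])
U1 = np.array([[c[i + j + 1] for j in range(M)] for i in range(M)])

# generalized eigenvalue problem U1 B = u U0 B; eigenvalues u_k = exp(-1j*w_k*tau)
u = np.linalg.eigvals(np.linalg.pinv(U0) @ U1)
u_top = u[np.argsort(-np.abs(u))][:K]           # keep the K signal eigenvalues
w_est = 1j * np.log(u_top) / tau
print(np.round(np.sort(w_est.real), 3))         # real parts ≈ [0.5, 1.2]
```

the recovered eigenvalues carry both the oscillation frequency (real part of ω) and the damping (imaginary part), which is exactly the feature set the paper proposes to upload instead of the raw eeg window.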
platforms such as google cloud, gift-cloud, etc. are designed to meet the needs of collaborative research projects by simplifying data transfer. the eeg and ecg characteristic features obtained in this elaboration are easy to integrate with the local it infrastructure of the institutions that provided clinical data and expertise, and with the end-user within the routine clinical workflow. the results are presented in a form that supports varied collaboration agreements between institutions and the related access-control restrictions. the improved scheme proposed in figure 5 makes configuration as well as updating possible, and can allow new modalities to be added via the server without requiring software updates. extracted features such as the bispectrum presented in figure 12 simplified the development of the research software. the idea is straightforward: the development allows the data to be fetched automatically and directly from the server, given that the data has been uploaded from the research center (in our case). these features are not just accurate, but also useful both for medical research projects, where data sharing is required between researchers and clinical institutions, and for medical professionals for decision making. 4. conclusions telemedicine aims to provide high-quality medical services, given that healthcare facilities nowadays hardly satisfy populations' needs due to limitations of public medical resources and infrastructures. this research proposed a cloud-computing-based architecture for decentralized and collaborative diagnosis by highlighting patients' data storage after meaningful feature extraction. in this way, a medical professional can rapidly grasp the state of the patient in question and easily make an accurate decision.
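the upload flow described above (features extracted locally, then shared via the cloud) can be sketched as a small packaging step; the record layout, field names and compression choice below are illustrative assumptions, not part of the paper or of any particular cloud platform's api.

```python
import json
import zlib
from datetime import datetime, timezone

def package_features(patient_id, signal_type, features):
    """bundle extracted features (not raw signals) into a compact record
    ready for transfer to a cloud store; all field names are illustrative."""
    record = {
        "patient": patient_id,            # pseudonymised identifier
        "signal": signal_type,            # "ecg" or "eeg"
        "features": features,             # e.g. complex frequencies, r-r stats
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record).encode("utf-8")
    return zlib.compress(payload)         # shrink the payload before transfer

blob = package_features(
    "edge-a/p1", "eeg",
    {"omega_re": [0.5, 1.2], "omega_im": [-0.02, -0.05]},
)
print(len(blob))  # a few hundred bytes instead of a multi-kilobyte raw window
```

shipping only such records is what makes the data reduction claimed in the conclusions possible: the raw window stays at the edge, and only the diagnostic features travel.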
as reported at the beginning of section 3, collaborative edge computing certainly helps healthcare organizations in the spirit of connecting hardware and software issues in a single platform for decision making in the interest of patients [22], [23]. figure 8. the bispectrum and associated representation for patient 1. figure 9. the bispectrum and associated representation for patient 2 (here the samples are 7201-8400). figure 10. the bispectrum and associated representation for patient 1 (here the samples are 7201-8400). figure 11. the bispectrum and associated representation for patient 2 (here the samples are 7201-8400). figure 12. superposition of the bispectra associated with patient 1 and patient 2. personalized care is another field for the unifying platform. the contributions of this research include data reduction and high-fidelity signal processing at a local level to extract characteristic features and communicate them to the cloud database. a demonstration of eeg and ecg feature extraction was carried out, and details on how the obtained processing results are deployed on a cloud-based application have been presented. to obtain the desired outcome, the study suggested a deployment using a system consisting of at least a 900 mhz 32-bit quad-core arm cortex-a7 cpu and 2 gb of ram. the dataset exploited is from the mit-bih arrhythmia database, with records ranging from 16.24 kb to 36.45 kb. the suggested architecture makes a data reduction of around 97 % possible, while most architectures suggested in the literature present an accuracy of about 89 %. moreover, the time taken by the transfer is estimated at 15 seconds, which validates the efficacy of the proposed architecture in monitoring vital signs such as eeg and ecg, even in real time. on the other hand, the study provides a high-level understanding of cloud-based iot systems and remote healthcare monitoring.
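the data-reduction figure quoted in the conclusions can be reproduced with simple arithmetic; the record size is taken from the text, while the resulting per-record transfer size is a derived estimate, not a number stated in the paper.

```python
# reduction ratio from the conclusions, as plain arithmetic
original_kb = 36.45            # largest mit-bih record used in the study
reduction = 0.97               # claimed data reduction at the edge
sent_kb = original_kb * (1 - reduction)
print(round(sent_kb, 2))       # only ~1.09 kb per record actually leaves the edge
```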
references
[1] boyi xu, lida xu, hongming cai, lihong jiang, yang luo, yizhi gu, the design of an m-health monitoring system based on a cloud computing platform, enterprise information systems, vol. 11, issue 1, 2017, pp. 17-36. doi: 10.1080/17517575.2015.1053416
[2] t. han, l. zhang, s. pirbhulal, w. wu, v. h. de albuquerque, a novel cluster head selection technique for edge-computing based iomt systems, computer networks, vol. 158, issue 2, april 2019, pp. 114-122. doi: 10.1016/j.comnet.2019.04.021
[3] b. kamsu-foguem, p. f. tiako, l. p. fosto, c. foguem, modeling for effective collaboration in telemedicine, telematics and informatics, vol. 32, issue 4, november 2015, pp. 776-786. doi: 10.1016/j.tele.2015.03.009
[4] a. lay-ekuakille, p. vergallo, g. griffo, f. conversano, s. casciaro, s. urooj, v. bhateja, a. trabacca, entropy index in quantitative eeg measurement for diagnosis accuracy, ieee transactions on instrumentation & measurement, vol. 63, no. 6, 2014, pp. 1440-1450. doi: 10.1109/tim.2013.2287803
[5] weisong shi, jie cao, quan zhang, youhuizi li, lanyu xu, edge computing: vision and challenges, ieee internet of things journal, vol. 3, issue 5, oct. 2016, pp. 637-646. doi: 10.1109/jiot.2016.2579198
[6] cisco, fog computing and the internet of things: extend the cloud to where the things are, white paper, 2015. online [accessed 13 september 2021] https://www.cisco.com/go/iot
[7] deepak puthal, saraju p. mohanty, uma choppali, collaborative edge computing for smart villages, ieee consumer electronics magazine, vol. 10, issue 3, may 2021, pp. 68-71. doi: 10.1109/mce.2021.3051813
[8] kai wang, hao yin, wei quan, geyong min, enabling collaborative edge computing for software defined vehicular networks, ieee network, vol. 32, issue 5, september/october 2018, pp. 112-117. doi: 10.1109/mnet.2018.1700364
[9] x. chen, l. jiao, w. li, x. fu, efficient multi-user computation offloading for mobile-edge cloud computing, ieee/acm transactions on networking, vol.
24, issue 5, october 2016, pp. 2795-2808. doi: 10.1109/tnet.2015.2487344
[10] a. saeed, m. ammar, k. a. harras, e. zegura, vision: the case for symbiosis in the internet of things, proc. 6th international workshop on mobile cloud computing and services, paris, france, 11 september 2015, pp. 23-27. doi: 10.1145/2802130.2802133
[11] t. x. tran, a. hajisami, p. pandey, d. pompili, collaborative mobile edge computing in 5g networks: new paradigms, scenarios, and challenges, ieee communications magazine, vol. 55, issue 4, april 2017, pp. 54-61. doi: 10.1109/mcom.2017.1600863
[12] a. lay-ekuakille, p. vergallo, a. trabacca, m. de rinaldis, f. angelillo, f. conversano, s. casciaro, low-frequency detection in ecg signals and joint eeg-ergospirometric measurements for precautionary diagnosis, measurement, vol. 46, issue 1, 2012, pp. 97-107. doi: 10.1016/j.measurement.2012.05.024
[13] k. wang, h. yin, w. quan, g. min, enabling collaborative edge computing for software defined vehicular networks, ieee network, vol. 32, issue 5, sep./oct. 2018, pp. 112-117. doi: 10.1109/mnet.2018.1700364
[14] h. zhang, p. dong, w. quan, b. hu, promoting efficient communications for high-speed railway using smart collaborative networking, ieee wireless communications, vol. 22, issue 6, dec. 2015, pp. 92-97. doi: 10.1109/mwc.2015.7368829
[15] l. chen, j. xu, socially trusted collaborative edge computing in ultra dense networks, proc. of the 2nd acm/ieee symposium on edge computing, san jose/fremont, ca, usa, 12-14 october 2017, 11 pp. doi: 10.1145/3132211.3134451
[16] yuvraj sahni, jiannong cao, lei yang, data-aware task allocation for achieving low latency in collaborative edge computing, ieee internet of things journal, vol. 6, issue 2, april 2019, pp. 3512-3524. doi: 10.1109/jiot.2018.2886757
[17] a. lay-ekuakille, s. ikezawa, m. mugnaini, r. morello, detection of specific macro and micropollutants in air monitoring, review of methods and techniques, measurement, vol. 98, issue 1, 2017, pp. 49-59.
doi: 10.1016/j.measurement.2016.10.055
[18] a. p. plonski, j. vander hook, v. isler, environment and solar map construction for solar-powered mobile systems, ieee transactions on robotics, vol. 32, issue 1, feb. 2016, pp. 70-82. doi: 10.1109/tro.2015.2501924
[19] world health organization, coronavirus disease (covid-19) pandemic website. online [accessed 9 september 2021] https://www.who.int/emergencies/diseases/novel-coronavirus-2019
[20] a. lay-ekuakille, m. a. ugwiri, c. liguori, p. k. mvemba, proceedings of the medical measurements and applications (memea) symposium, istanbul, turkey, 26-28 june 2019, art. no. 8802127. doi: 10.1109/memea.2019.8802127
[21] a. lay-ekuakille, g. griffo, p. visconti, p. primiceri, r. velazquez, leaks detection in waterworks: comparison between stft and fft with an overcoming of limitations, metrology and measurement systems, vol. 24, issue 4, 2017, pp. 631-644. doi: 10.1515/mms-2017-0049
[22] b. qureshi, towards a digital ecosystem for predictive healthcare analytics, proc. of medes 2014, 6th international conference on management of emergent digital ecosystems, buraidah al qassim, saudi arabia, 15-17 september 2014, pp. 34-41. doi: 10.1145/2668260.2668286
[23] h. patel, t. m. damush, e. j. miech, n. a. rattray, h. a. martin, a. savoy, l. plue, j. anderson, s. martini, g. d. graham, l. s. williams, building cohesion in distributed telemedicine teams: findings from the department of veterans affairs national telestroke program, bmc health services research, vol. 21, issue 1, 2021, art. no. 124.
doi: 10.1186/s12913-021-06123-x
integrating maintenance strategies in autonomous production control using a cost-based model acta imeko issn: 2221-870x september 2021, volume 10, number 3, 156-166 acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 156 integrating maintenance strategies in autonomous production control using a cost-based model robert glawar1, fazel ansari1,2, zsolt jános viharos3,4, kurt matyas2, wilfried sihn1,2 1 fraunhofer austria research gmbh, theresianumgasse 7, a-1040, vienna, austria, 2 tu wien, institute of management science, theresianumgasse 27, a-1040, vienna, austria, 3 institute for computer science & control (sztaki), kende str. 13-17, h-1111, budapest, hungary, 4 john von neumann university, izsáki u.
10, h-6000, kecskemét, hungary section: research paper keywords: maintenance; autonomous production control; production planning; cyber physical systems; industry 4.0 citation: robert glawar, fazel ansari, zsolt jános viharos, kurt matyas, wilfried sihn, integrating maintenance strategies in autonomous production control using a cost-based model, acta imeko, vol. 10, no. 3, article 22, september 2021, identifier: imeko-acta-10 (2021)-03-22 section editor: lorenzo ciani, university of florence, italy received february 9, 2021; in final form may 2, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work has been supported by the european commission through the h2020 project epic (grant no. 739592) corresponding author: robert glawar, e-mail: robert.glawar@fraunhofer.at 1. introduction in today's competitive market, manufacturing enterprises are faced with the challenge of achieving high productivity, short delivery times and a high level of delivery capability despite ever-shorter planning horizons, a large number of external planning changes and increasing planning complexity [1], [2]. this high degree of complexity in planning is no longer effectively and affordably manageable for humans [3]. on the one hand, there are high demands on flexibility and reaction times in planning and, on the other hand, high requirements regarding the availability of production facilities, equipment and machines [4]. however, current systems for production planning and control (ppc) incorporate neither technical innovations nor social requirements and are therefore not able to meet the current challenges [5]. likewise, current maintenance processes and strategies are not sufficiently prepared for these challenges [6].
considering the advancement towards industry 4.0, new opportunities arise due to innovative technologies and approaches such as industrial internet of things (iiot) applications [7], horizontal and vertical communication within a production system by means of open platform communications unified architecture (opc ua) [8], or the use of artificial intelligence (ai) methods for data analysis, forecasting, optimization and planning [9]. the degree of autonomy of such a cyber physical production system (cpps) describes its ability to plan, control and initiate actions autonomously [10]. approaches to autonomous production control (apc) represent suitable ways to increase the degree of autonomy of a cpps [11], [12]; therefore, apc represents a suitable possibility to deal with the aforementioned requirements [13]. however, these approaches are currently limited to lab research and are not ready for industrial applications [14]. most of the current approaches are based on idealised assumptions such as maximum availability (i.e. 95-98 %) or do not take many decisive factors, such as maintenance strategies, into account. for example, a question such as "how can the current state of a production plant or machine affect production control?" is not taken into consideration [15]. exactly these factors, as exemplified, are decisive for the acceptance and implementation maturity of autonomous approaches in industrial companies. hence, the aim of the present work is to take a further step towards implementation maturity by integrating different maintenance strategies in apc. abstract: autonomous production control (apc) is able to deal with challenges including, inter alia, high delivery accuracy, shorter planning horizons, increasing product and process complexity, and frequent changes. however, several state-of-the-art approaches do not consider maintenance factors contributing to operational and tactical decisions in production planning and control. the lack of comprehensiveness of the decision models and related decision-support tools causes inefficiency in production planning and thus leads to low acceptance in manufacturing enterprises. to overcome this challenge, this paper presents a conceptual cost-based model for integrating different maintenance strategies in autonomous production control. the model provides relevant decision aspects and a cost function for different maintenance strategies using a market-based approach. the present work thus makes a positive contribution to coping with the high demands on flexibility and response times in planning while at the same time ensuring high plant productivity. 2. maintenance in apc autonomous production control (apc) has the potential to deliver optimal and resource-efficient processes as well as higher quality and more product variants than conventional, centralized decision-making systems [16]. adaptive, decentralised production control can reduce planning efforts [17], enable shorter reaction times in planning [18] and create greater planning flexibility [19]. since in most cases not all decisions in a production system are made autonomously, a cpps typically includes a combination of hierarchical and heterarchical mechanisms for control [20]. since approaches to autonomous production control are able to deal quickly and flexibly with unplanned changes within the production system, they are used in the context of cpps to represent decision-making processes that require a high degree of responsiveness [21]. to ensure a high level of acceptance among operational planning staff, it is particularly important that the underlying models comprehensively take relevant factors of the production system into account and thus make robust decisions [22].
current studies show that a large number of research activities are concerned with the development of approaches to autonomous production control [23]. current approaches focus on the description of the interactions between different parts of a production system from different perspectives. a typical task is to assign a waiting workpiece, which is to be processed in the course of a production job, to a machine or a workstation, taking into account available resources, logistical parameters and the smoothing of the job load. existing apc approaches usually perform either event-driven sequencing [18] or agent-based sequencing [24]. first examples showing that agent-based simulation is a suitable way to realize apc by using multi-agent systems are given by pantförder et al. (2017) [25]. the integration of such an approach into a production system on the basis of the opc ua standard is shown by hoffmann et al. (2016) [26]. a key success factor for autonomous interaction in this context is the design of a robust system [27]. in order to reach such a design, different algorithms may be used to schedule the orders. in particular, genetic and evolutionary algorithms [28], swarm-based algorithms [29], [30], and market models [14] have been successfully used for apc. in addition, many current approaches to ppc rely on the application of artificial intelligence methods. often, machine learning (ml) is applied, for example, to predict lead times and optimize resource utilization [31]. in addition, reinforcement learning is used to enable, for example, an autonomous order scheduling system [32]. however, as shown in table 1, few approaches deal with the integration of maintenance strategies in apc systems. for instance, erol and sihn (2017) presented a cloud-based architecture for intelligent production planning and control considering maintenance [33]. vallhagen et al.
(2017) also presented a system and information infrastructure to enable optimized adaptive production control [34]. nevertheless, neither of these approaches explains which aspects of maintenance should be considered and how they should be integrated. in the approach presented by wang et al. (2018), the condition of production plants is automatically evaluated and thus the production sequence is intelligently adapted. table 1. overview of maintenance in autonomous production control. system performance is improved by automatically evaluating the state of production systems and dynamically configuring processing paths for intelligent products and parts. while the implementation as decentralized production control is proposed, the work deals in detail with a three-machine problem and neglects the dependencies on higher-level production planning [35]. in summary, it can be concluded that none of the identified approaches includes a systematic integration of different maintenance strategies into autonomous production control. however, if this aspect is not taken into account, these approaches remain largely unsuitable for industrial application, as no valid decisions can be made in the case of unplanned outages or planned maintenance, and thus ultimately the acceptance of such approaches by operational planning staff is not given. against this background, a comprehensive methodology for integrating maintenance strategies in autonomous production control has been presented by glawar et al. (2019) [20] and glawar et al. (2020) [36]. the core of this conceptual model, which is presented in detail in section 5, is a cost-based model for integrated planning. this model is laid out in section 4, based on the relevant aspects for the integration of maintenance in apc introduced in section 3. 3.
relevant aspects for the integration of maintenance in apc for the integration of maintenance into apc, an important step is to clarify which maintenance aspects are relevant for the integration decision. for this purpose, an expert survey has been conducted, including professionals from industrial sectors, namely semiconductor production, the metal processing industry, condition monitoring and the automotive industry, as well as national and international academic experts. the aim of this survey was to discuss the following question with the experts: "how do you evaluate the individual aspects of maintenance with regard to their relevance for integration into production planning and control (ppc)?" the first step was to discuss which aspects of plant maintenance (aka industrial maintenance) are generally important for ppc and for which area of ppc a specific aspect is relevant. using the pair-wise comparison method, it was finally determined how relevant the individual aspects are for integration into the ppc. the results of this expert survey are presented in table 2. the essential aspects of maintenance are listed in the first column and evaluated with regard to their relevance for decision-making. in the second column, the relevance for integration into the different dimensions of ppc is shown. a significant finding is that the relevance of considering the individual aspects for the ppc strongly depends on the general operational conditions, especially the degree of automation, the production type and the flexibility in case of a plant failure. a closer look at the results shows that some aspects are particularly relevant for integration into apc, while other aspects may have a positive influence on the quality of decisions but are not absolutely necessary for integration purposes. in addition, there are other aspects of plant maintenance which are particularly important for integration into medium- and long-term production planning as well as production controlling.
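the pair-wise comparison used in the survey can be sketched as a simple scoring scheme: an aspect earns one point for every duel in which the experts judge it more relevant. the aspect names and votes below are purely illustrative and are not the survey data from table 2.

```python
from itertools import combinations

# hypothetical outcome of each pairwise duel between maintenance aspects
aspects = ["downtime costs", "spare parts availability", "plant condition"]
votes = {  # votes[(a, b)] = aspect judged more relevant for ppc integration
    ("downtime costs", "spare parts availability"): "downtime costs",
    ("downtime costs", "plant condition"): "downtime costs",
    ("spare parts availability", "plant condition"): "plant condition",
}

# one point per duel won; the totals induce the relevance ranking
score = {a: 0 for a in aspects}
for pair in combinations(aspects, 2):
    score[votes[pair]] += 1

ranking = sorted(aspects, key=lambda a: -score[a])
print(ranking)  # "downtime costs" ranks first with two wins
```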
these decision factors have not been further addressed in the course of the present work. 3.1. downtime & costs the time for a shutdown, in case of an occurring failure of the machine, can be estimated either on the basis of empirical knowledge or calculated on the basis of historical shutdowns. table 2. aspects and their importance for apc integration into the ppc system, evaluated by various domain experts [36]. it is important to note that the downtime that occurs usually differs significantly depending on whether it is a planned or an unplanned shutdown. since downtime costs correlate with the order situation, the lost contribution margin in the event of a downtime is usually used to calculate the downtime costs. penalties for delayed order completion are also taken into account, if applicable. together with the probability of failure, the downtime costs represent an important basis for decision-making in production control. 3.2. maintenance time & costs the time for a repair can either be estimated on the basis of manufacturer information and empirical knowledge or calculated on the basis of historical data (e.g., mean time to repair, mttr). usually, when calculating repair costs, a distinction is made between internal and external repair costs, the latter usually caused by external services. the underlying share of external services largely determines the repair time and costs. in the case of internal repair costs, the repair time is usually taken into account together with the hourly rates of the personnel required for the repair, depending on their qualifications, and supplemented by the material costs for the necessary spare parts. this applies analogously to the occurring maintenance times & costs.
the repair and maintenance costs calculated in this way are important factors in production control for deciding whether maintenance should be carried out or even brought forward, or whether the risk of a breakdown with subsequent repair should be taken. 3.3. spare parts availability the information on whether the spare parts required for repair and maintenance are basically available is decisive for the decision within the framework of production control as to whether maintenance is triggered or whether an order is produced on a system with a certain risk of failure. depending on the type and complexity of a machine as well as the organizational form of maintenance, spare parts availability represents a more or less important decision aspect. in the case that mechanical spare parts can be produced independently with relatively little effort or are outsourced to a service provider via a service contract, it may not be necessary to integrate this decision aspect into production control. 3.4. availability of maintenance capacity the information as to whether maintenance capacity is available for repair or maintenance is essential in the context of production control in order to make the decision as to whether this should be triggered. capacities can represent both internal personnel resources and external third-party services, which are usually not available in unlimited quantities. depending on the type of maintenance organization, this decision aspect is also more or less important. while a capacity check can be very important in a decentralized maintenance organization that has to manage with a narrowly limited capacity, it is less important in an organization that provides sufficient resources centrally. 3.5. availability of qualifications the qualifications required to perform a particular repair or maintenance task can also play a relevant role in the decision within the framework of production control. 
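the decision described in section 3.2 (carry out or bring forward maintenance versus risking a breakdown with subsequent repair) can be sketched as an expected-cost comparison. the linear cost model and all figures below are illustrative assumptions, not values from the paper.

```python
def expected_breakdown_cost(p_fail, downtime_h, margin_per_h, penalty, repair_cost):
    """expected cost of running the order and risking an unplanned failure:
    lost contribution margin during downtime, plus penalty and repair,
    weighted by the probability of failure."""
    return p_fail * (downtime_h * margin_per_h + penalty + repair_cost)

def planned_maintenance_cost(downtime_h, margin_per_h, maintenance_cost):
    """cost of maintaining first: planned (shorter) downtime plus the job itself."""
    return downtime_h * margin_per_h + maintenance_cost

# illustrative figures: unplanned downtime is longer and carries a penalty
risk = expected_breakdown_cost(p_fail=0.25, downtime_h=8, margin_per_h=500,
                               penalty=2000, repair_cost=3000)
plan = planned_maintenance_cost(downtime_h=2, margin_per_h=500,
                                maintenance_cost=1200)
print(risk, plan)  # 2250.0 2200: maintaining first is (narrowly) cheaper here
```

the comparison also shows why planned and unplanned downtimes must be estimated separately, as the text stresses: the shorter planned downtime is what tips the decision.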
however, the significance of this decision aspect depends to a large extent on the complexity of the equipment as well as the available qualifications of the internal personnel resources. while only a small number of qualified personnel resources are generally available for highly complex plant components such as bionic components, the significance decreases for simple mechanical plant components, for which a large proportion of the available personnel resources are qualified. 3.6. planned maintenance orders and planned maintenance interval orders that are scheduled for the execution of maintenance are very relevant for the ppc as they tie up capacities. while internal maintenance tasks only reduce the capacity within a period but allow a certain flexibility with regard to sequence planning, externally performed tasks often represent a hard restriction for production control. the basis for these planned maintenance orders are often the defined intervals for (periodic) preventive maintenance. maintenance interval management thus represents a key success factor for medium-term production planning. it is crucial that this maintenance planning is coordinated with the expected fluctuation in production volumes in order to prevent equipment from being unavailable in a phase of particularly high order levels, while it could be maintained in a phase of low order levels. 3.7. probability of failure the probability of failure significantly determines the risk of a plant or machine failure during production and thus influences production control decisions. depending on the maintenance strategy applied, different approaches exist to calculate the probability of failure. in the simplest case, information from the manufacturer or internal empirical values (e.g. mean time between failure mtbf) are used to calculate the probability of failure. 
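the failure-probability estimates mentioned in section 3.7 can be sketched in a few lines: a constant-failure-rate model driven by mtbf, and a weibull model as an alternative based on historical data. the parameter values are illustrative assumptions.

```python
import math

def p_fail_exponential(t_hours, mtbf_hours):
    """probability of at least one failure within t, constant failure rate 1/mtbf."""
    return 1.0 - math.exp(-t_hours / mtbf_hours)

def p_fail_weibull(t_hours, scale, shape):
    """weibull cdf; shape > 1 models wear-out, shape < 1 early failures."""
    return 1.0 - math.exp(-((t_hours / scale) ** shape))

# e.g. a 100 h production run on a machine with mtbf = 1000 h
print(round(p_fail_exponential(100, 1000), 3))   # 0.095
print(round(p_fail_weibull(100, 1000, 2.0), 3))  # 0.01 (wear-out not yet dominant)
```

either estimate can feed directly into the downtime-cost weighting used for the production-control decision.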
often, historical data based on statistical methods, such as the weibull distribution, can also be used to calculate the probability of failure. ideally, the probability of failure is determined based on the actual condition of the plant and a corresponding forecast for the next failure. 3.8. condition of the plant or machine components if the condition of a plant or machine can be reliably measured, estimated or calculated, it represents an essential decision-making factor for the ppc. in the context of medium-term production planning and spare parts planning, it is possible to react as soon as a component exhibits a critical condition, for example by ensuring that the corresponding spare parts are available or by initiating planned maintenance. in the context of production control, the risk of failure can be taken into account based on the change in the condition of the machine, for example in the context of sequence planning or machine assignment. 3.9. technical plant availability technical plant availability, which describes what proportion of the available operating time a plant is technically available, is a key aspect of medium-term production planning. depending on the availability, the plants are scheduled to a greater or lesser extent. technical plant availability also represents a hard restriction for the maximum possible production quantity. 4. development of a cost function for integrated planning different algorithms can be used to determine the sequence of orders within autonomous production control. many of these algorithms use a cost function to prioritize or determine the production sequence. for example, when applying the market principle for apc, orders are allocated to individual production units based on a cost function.
In this paper, a cost function for autonomous production control is developed using the market principle for autonomous production control as an example:

K_PA = Σ K_S + Σ K_PF + Σ K_PV × X_P + Σ K_TV × X_T . (1)

A possible formulation of the costs of a production order, as shown in (1), is presented by Rötzer and Schwaiger [37]. In this description, the costs of a production order (K_PA) consist of individual location costs (K_S), fixed process costs (K_PF), variable process costs (K_PV) and variable transport costs (K_TV), as well as the production quantity (X_P) and the number of transports (X_T). Transportation costs (K_TV) represent the expenses for the necessary transports (X_T) to move the workload to be produced within the production system. They include, for example, costs for material supply and provision, transports between different workplaces and production facilities, as well as expenses for intermediate, inward and outward storage of the produced worklist. Fixed production costs (K_PF) represent the portion of production costs that is independent of the amount of work in process (X_P); that is, fixed production costs are constant for each production order. This includes, for example, the expenses for setup between production orders, but also administrative costs for order processing. In comparison, variable production costs (K_PV) depend on the amount of work produced. Typical variable production costs are, for example, costs for material, auxiliary and operating supplies, expenses for the actual production depending on the processing time, as well as expenses for the necessary energy input during production. The variable process costs of a production order thus consist of costs due to production backlog, production time, setup times and the inventory necessary for production, as well as maintenance costs [38]. In this context, the availability of the machines and the delay of the order's completion are particularly relevant for the evaluation of a production order [39].
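The cost aggregation in (1) can be sketched as a short function. All names and example values below are illustrative assumptions, not data from the paper:

```python
# Sketch of the production-order cost function (1):
# K_PA = sum(K_S) + sum(K_PF) + sum(K_PV) * X_P + sum(K_TV) * X_T
# The lists hold one entry per contributing production unit.

def production_order_cost(k_s, k_pf, k_pv, k_tv, x_p, x_t):
    """Cost of a production order from location costs, fixed and variable
    process costs and variable transport costs, cf. (1)."""
    return (sum(k_s)               # location costs
            + sum(k_pf)            # fixed process costs
            + sum(k_pv) * x_p      # variable process costs x quantity
            + sum(k_tv) * x_t)     # transport costs x number of transports

# Example: two production units, 100 pieces, 3 transports (illustrative)
cost = production_order_cost(k_s=[10.0, 12.0], k_pf=[50.0, 40.0],
                             k_pv=[0.8, 0.7], k_tv=[5.0, 4.0],
                             x_p=100, x_t=3)
print(cost)  # 22 + 90 + 150 + 27 = 289.0
```

The same additive structure is what the strategy-specific cost functions in Section 4 extend with maintenance-related terms.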
For this reason, it is necessary to explicitly add the maintenance-relevant factors to a cost function describing the costs of a production order. Since different maintenance strategies make different demands on production control, but also provide different information, it is advisable to use different cost functions for the different maintenance strategies. The cost functions are successively designed to build on each other, so that they can be used in a production system that applies different maintenance strategies to its different assets and their components.

4.1. Reactive maintenance strategy
The reactive maintenance strategy is characterized by the fact that the system components are operated until failure; therefore, the failure probability and the associated costs are not relevant for decision-making. This is also reflected in the cost function of a production order under consideration of reactive maintenance, K_RM, cf. (2). The cost function takes into account not only the sum of the fixed production costs K_F, variable production costs K_V and transport costs K_T, as well as the current order load X_P and the number of transports X_T, but also the maintenance cost ratio K_M. The maintenance cost ratio describes the maintenance costs per production quantity produced. The maintenance costs under consideration include the costs for maintenance and repair of the various elements of the production system, the costs for spare parts stocking, and external service costs.

K_RM = Σ K_F + Σ K_V × X_P + Σ K_T × X_T + Σ K_M × X_P (2)

4.2. Periodic preventive maintenance strategy
In periodic preventive maintenance, measures are planned preventively either time-dependently, for example weekly, quarterly or annually, or load-dependently, for example after a certain number of operating hours or switching operations. Hence, the cost function of a production order taking into account preventive maintenance, K_PM, is given in (3).
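The strategy-specific risk terms that extend the reactive cost function (2) can be sketched as follows. R_DTP follows the normal-distribution model of (5) and (6) below, and the RUL prognosis follows the Weibull function in (11); the exact placement of the influence factor w_i is my reading of the garbled original, and all parameter values are illustrative:

```python
import math

def downtime_risk_periodic(tslf, mtbf, sigma):
    """R_DTP for periodic preventive maintenance: integral of the normal
    pdf f_p (mean MTBF, standard deviation sigma) from 0 to TSLF,
    cf. (5) and (6), evaluated via the normal CDF."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((tslf - mtbf) / sigma) - phi(-mtbf / sigma)

def remaining_useful_life(t, char_life, beta, w_i=1.0):
    """RUL prognosis with a Weibull survival function, cf. (11):
    RUL = exp(-((t / char_life) * w_i) ** beta), where w_i rescales the
    load to account for changing operating conditions (assumed reading)."""
    return math.exp(-((t / char_life) * w_i) ** beta)

# The accumulated failure risk reaches ~0.5 once TSLF equals the MTBF;
# a fresh component (t = 0) still has its full useful life.
print(round(downtime_risk_periodic(tslf=500.0, mtbf=500.0, sigma=60.0), 3))  # 0.5
print(remaining_useful_life(t=0.0, char_life=1000.0, beta=1.5))              # 1.0
```

Either risk value can then be multiplied with the downtime and penalty cost terms K_DT and K_P of the respective cost function.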
It also takes into account the risk of an unplanned production downtime, R_DTP, the costs in the event of a downtime event, K_DT, as well as any costs for contractual penalties due to schedule variances, K_P. The costs in case of a downtime event are, for example, the lost contribution margin of the planned worklist in case of an unplanned downtime, as well as costs for repairs. These downtime costs depend on the current order load X_P, as shown in (4). The downtime risk in the case of periodic preventive maintenance, R_DTP, can be calculated based on historical failures. For this purpose, the probability density function f_p(TSLF) is integrated, where TSLF describes the time since the last failure, cf. (5). Normally, a normal distribution is assumed, cf. (6). While the standard deviation of the normal distribution is estimated from historical failures, the expected value is given by the MTBF.

Figure 1. Costs of a production order under consideration of multiple maintenance strategies [36].

K_PM = Σ K_F + Σ K_V × X_P + Σ K_T × X_T + Σ K_M × X_P + Σ K_DT × R_DTP + Σ K_P × R_DTP (3)

K_DT = f(X_P) (4)

R_DTP = ∫₀^TSLF f_p(TSLF) dTSLF (5)

f_p(TSLF) = 1 / (σ √(2π)) × exp(−(1/2) × ((TSLF − MTBF) / σ)²) (6)

4.3. Condition-based maintenance strategy
The cost function of a production order under consideration of condition-based maintenance, K_CM, cf.
(7), corresponds largely to the cost function of preventive maintenance and likewise includes the risk of an unplanned production downtime, R_DTC, when applying condition-based maintenance strategies. In this case, in which a maintenance task is planned depending on the actual condition of a component, R_DTC is calculated by a condition-based function f_c at the respective time of the condition determination, t_c, and the condition C determined at this time, cf. (8). The determination of this function is usually based on empirical studies or on already known equations or manufacturer data. In many cases, especially if a complex empirical determination is not economical, it is sufficient to assign a fixed downtime risk R_DTC to defined conditions C based on empirical knowledge.

K_CM = Σ K_F + Σ K_V × X_P + Σ K_T × X_T + Σ K_M × X_P + Σ K_DT × R_DTC + Σ K_P × R_DTC (7)

R_DTC = f_c(t_c; C) (8)

4.4. Predictive maintenance strategy
In predictive maintenance (PdM), maintenance tasks are planned depending on a prognosis of the remaining useful life (RUL). The cost function of a production order, K_PdM, cf. (9), therefore takes into account the risk of an unplanned production downtime, R_DTPdM. The failure risk is calculated analogously to condition-based maintenance by a function f_p that is determined by the RUL, i.e. the remaining degree of wear of the machine component, cf. (10). This function must also be known or empirically determined. In (11), a determination of the RUL using a Weibull function is shown. Here, T represents the characteristic life, β the shape parameter and w_i an influence factor to account for changing operating conditions. In summary, Figure 1 shows the composition of the developed cost function depending on the applied maintenance strategy and visualizes the relationship between the individual cost factors.

K_PdM = Σ K_F + Σ K_V × X_P + Σ K_T × X_T + Σ K_M × X_P + Σ K_DT × R_DTPdM + Σ K_P × R_DTPdM (9)

R_DTPdM = f_p(RUL) (10)

RUL = exp(−((t / T) × w_i)^β) (11)

5.
Conceptual model for integrating maintenance strategies in APC
The model for the integration of different maintenance strategies in APC is designed using three subsystems: i) a maintenance system, ii) a system for autonomous production control and iii) a system for production planning. In Figure 2, these subsystems and their interrelations are shown in detail. The system for autonomous production control maps the level of machine-to-machine (M2M) communication of the APC model. It regulates the real-time communication of the different elements of a production system with the aim of autonomously determining a production sequence based on the requirements of the production control system (the production orders) and the current framework conditions of the production system. To achieve this goal, real-time communication between different machine agents (MA), workpiece agents (WPA) and resource agents (RA) is necessary (information flow A). An MA represents the different machines and plants of a production system. WPAs represent the open worklist within a production system. Depending on the production environment, an open worklist can be a concrete workpiece, a production lot or any clearly identifiable portion of the production quantity. An RA represents further elements of a production system that are of interest for the task of production control. Depending on the production environment, these can be, for example, tools, workstations, measuring equipment, transport equipment and all other resources that have a significant influence on the determination of the production sequence. The M2M communication between MA, WPA and RA takes place via a message transport system (MTS), which communicates between the elements of the production system and an order agent (OA) via an agent management system (AMS) and a directory facilitator (DF). The AMS manages the specific addresses of the individual agents (information flow B).
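The AMS/DF bookkeeping described above can be sketched minimally as follows; the class, the field names and the addresses are illustrative assumptions, not part of the model:

```python
# Minimal sketch of the agent registry: the AMS maps agent names to
# addresses, while the DF maps agent names to attributes such as failure
# probability or downtime costs. All names and values are illustrative.

class AgentDirectory:
    def __init__(self):
        self.ams = {}   # agent name -> address (Agent Management System)
        self.df = {}    # agent name -> attributes (Directory Facilitator)

    def register(self, name, address, **attributes):
        """Register an agent with its address and its DF attributes."""
        self.ams[name] = address
        self.df[name] = dict(attributes)

    def route(self, name):
        """What an MTS needs to deliver a message: address plus attributes."""
        return self.ams[name], self.df[name]

directory = AgentDirectory()
directory.register("MA-01", "tcp://10.0.0.5:5555",
                   failure_probability=0.02, downtime_costs=1200.0)
addr, attrs = directory.route("MA-01")
print(addr, attrs["failure_probability"])
```

In the model, the maintenance system would write attributes such as the failure probability directly into the DF (information flow D), so that the OA can price them into the cost function.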
In comparison, the DF manages the specific attributes and properties of each individual agent (information flow C). Examples of these attributes are the probability of failure, downtime costs, and repair and maintenance costs, which are communicated directly from the maintenance system to the DF (information flow D). Further attributes describe, for example, which possible production steps can be carried out at the respective MA or which processing times result from this. The MTS distributes messages between the different agents and between agents and the OA (information flow E). The MTS transports information about the attributes and properties of the respective agents and production orders, which it receives from the DF and the OA. The MTS transports this information from a specific address that it receives from the AMS to another specific address that is also provided by the AMS. The OA also receives information about spare parts availability, maintenance capacity availability and available qualifications from the maintenance system (information flow F). With this information, taking into account the current production sequence, planned maintenance orders can be defined and confirmed to the maintenance system (information flow G). The operational control of these maintenance orders, as well as the control of production orders, takes place via communication between the various agents and the DF and AMS using the MTS. The central task of the system for APC is to determine the production sequence based on real-time M2M communication. Different scheduling models can be used to fulfil this task and to determine a sequential order for each of the different production orders provided by the OA. The production orders to be scheduled are typically created and managed by an enterprise resource planning (ERP) or manufacturing execution system (MES).
In the present work, a "marketplace-based" model is used to illustrate the integration of the system for APC. In this case, the OA receives a demand in the form of a production order from an ERP or MES system. This demand is matched by a supply of capacities of the MAs and RAs representing the production capacities of the production system, such as machine resources, work centre resources, tool resources or transport resources. The information necessary to describe the supply is provided to the OA by means of the MTS via the attributes of the production resources relevant for the production order in question, which are managed in the DF.

Figure 2. Process model for the integrated planning of maintenance and APC [36].

P = (t_i − t_e) / (t_c − t_i) × K_min (12)

Using the information on supply and demand, the OA is able to determine the priority of each production order, cf. (12). The priority P is calculated taking into account the desired completion date t_c, the possible order start time t_i, the order receipt time t_e, and a priority factor K_min. The OA can calculate the priority factor necessary to determine the priority using the information received from the MTS based on the information managed in the DF. To determine the priority factor, the cost function presented in this paper is used.

P_min = min[P(x)] ; 0 < x < n_MA (13)

Due to the structure of the underlying cost function, the priority of a manufacturing order increases with the costs of the manufacturing order and therefore with the priority factor. Similarly, the greater the difference between the current time and the order receipt, the higher the priority. The longer the desired production duration of the worklist to be produced, the lower the calculated priority of the underlying manufacturing order.
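The prioritisation and assignment steps described in (12) and (13), and the completion-time estimate of (15) below, can be sketched as follows. The fraction structure of (12) is reconstructed from the surrounding text (the extracted formula is garbled), and all numeric values are illustrative:

```python
def priority(t_c, t_i, t_e, k_min):
    """Order priority P, cf. (12): grows with the waiting time (t_i - t_e)
    and the cost-based priority factor, shrinks with the remaining
    production window (t_c - t_i). Fraction structure is an assumption."""
    return (t_i - t_e) / (t_c - t_i) * k_min

def assign_machine(priority_factors):
    """Choose the MA with the minimal priority factor, cf. (13)."""
    return min(priority_factors, key=priority_factors.get)

def estimated_completion(t_act, lead_times, n):
    """Estimated completion time t_N, cf. (15): current time plus the
    lead times of all orders up to and including sequence rank N."""
    return t_act + sum(lead_times[:n + 1])

# Illustrative priority factors per candidate machine agent
machine = assign_machine({"MA-01": 310.0, "MA-02": 289.0, "MA-03": 402.0})
print(machine)                                           # MA-02: cheapest offer
print(estimated_completion(100.0, [4.0, 3.5, 6.0], 2))   # 100 + 13.5 = 113.5
```

The minimum over the machine-specific priority factors reflects the marketplace idea: the order is awarded to the production unit that can execute it at the lowest cost.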
Since it is usually assumed that both the variable production costs and the risk of an unplanned downtime differ between the different machines and plants of a production system, it is necessary to calculate the priority factor for each of the possible MAs (n_MA) and then determine the minimum of the possible priority factors (P_min), cf. (13). Based on this minimum, the final step is to determine the sequence rank N of the production order to be produced on the assigned MA, cf. (14). For this purpose, the priority rank P(i) of the individual available production orders PA_n is determined in order of the minimum priority:

N = rank(P(i)), i ∈ {PA_n} . (14)

Based on the sequence rank N and the lead times t_PT, which the OA can determine using the information it receives from the DF via the MTS, and the current time t_act, the OA can determine the estimated time of completion t_N, cf. (15), and communicate this together with the defined production sequence to the production planning system (information flow J). For example, the OA provides this information to a MES via the MTS or an alternative interface.

t_N = t_act + Σ_0^N t_PT (15)

6. Outlook and further research
6.1. Evaluation of economic plausibility
In further research by Glawar et al. (2021), the presented model has been implemented and evaluated [40]. Since an implementation in a real production environment is still hard to realize, an implementation using an agent-based simulation approach based on a real industrial use case in the automotive industry has been carried out. On this basis, the conditions for a successful and cost-effective implementation of the model in industry are derived. In this use case, the benefits of integrating maintenance strategies in APC can be described as follows [40]:
a. An increase in on-time delivery of more than 9 % by reducing schedule deviations due to backlogs of production jobs.
Since the condition of a machine is already taken into consideration during production control, even a simultaneous failure of several machines has little impact on the adherence to delivery dates.
b. A reduction in the cost of manual rescheduling of approximately € 29,500 per year, since in the event of an unplanned machine failure the sequence and machine assignment can be adjusted autonomously.
c. An increase of the uptime by approx. 4 % to over 96 % by using the potential of modern maintenance strategies, and thus a reduction of maintenance costs of approx. € 52,000 per year.
d. An increase in productivity, defined as parts produced per hour, of over 5.6 %.

6.2. Integration into maintenance cost controlling
The cost function developed in this paper aims at integrating the relevant aspects of maintenance into autonomous production control. In a further step, this cost consideration can also be used as a basis for integration into maintenance cost controlling. Such an analysis enables the formalization of the relationship between key figures, such as the proportion of external costs or the maintenance ratio, and the operational logistical targets, such as lead time and adherence to delivery dates, as well as the productivity of the production system. The maintenance ratio describes the maintenance costs incurred in relation to a period under consideration. It is therefore an essential component of production costs and can already be used in rough-cut planning and in sequencing. This makes it possible to consider the resulting effects at the tactical level and to derive measures for achieving an overall optimum, independent of a fixed defined maintenance budget.
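The maintenance ratio and its use as a surcharge in rough-cut planning can be sketched as follows; taking the period's production costs as the reference base and applying the ratio linearly are my own illustrative assumptions, not definitions from the paper:

```python
def maintenance_ratio(maintenance_costs, production_costs):
    """Maintenance costs of a period relative to the production costs of
    the same period (the reference base is an assumption; the text only
    defines the ratio 'in relation to a period under consideration')."""
    return maintenance_costs / production_costs

def rough_cut_order_costs(base_costs, ratio):
    """Apply the ratio as a surcharge on an order's base production costs,
    so that maintenance is already visible in rough-cut planning and
    sequencing (illustrative linear model)."""
    return base_costs * (1.0 + ratio)

# Illustrative period figures
r = maintenance_ratio(maintenance_costs=60_000.0, production_costs=1_200_000.0)
print(r)                                   # 0.05
print(rough_cut_order_costs(2_000.0, r))   # 2100.0
```

Such a figure is one candidate for the key figures that a maintenance cost controlling system could feed back into the tactical planning level.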
Existing models for maintenance cost controlling, such as the Cost Prove model [41], model planned and unplanned maintenance costs and attempt to derive optimization measures based on any deviation from a defined budget, in order to achieve the ideal operating point between planned and unplanned maintenance measures. In comparison, by taking into account the developed cost function depending on the current and expected future production program, as well as the current risk of failure of the equipment required for the execution of this production program, the maintenance costs are dynamically adjusted. This creates transparency regarding the performance of maintenance by quantifying, in terms of costs, the benefit of concrete measures on the operating result, and thus justifying, for example, the exceeding of a target budget while ensuring adherence to schedules and productivity. An example of integration into maintenance cost controlling is shown in Figure 3. Maintenance cost controlling supplies relevant cost variables to production control, which pursues the goal of minimizing the costs of a production order while taking maintenance into account. If a deviation from the original maintenance budget occurs, the effect on productivity and on-time delivery is used in a mathematical reference model to optimize maintenance cost controlling. This results in new target costs for maintenance, which influence the initiation of planned measures depending on the risk of failure and the respective machine condition. In order to create such a mathematical reference model, it is expedient to differentiate between planned and unplanned cost factors, as explained by Ansari [38], and to model these in order to achieve an overall optimum.

7. Conclusions
In the present paper, a novel model for integrating maintenance strategies in autonomous production control has been presented.
Relevant decision aspects have been discussed, and a cost function for integrated planning using a market-based approach has been laid out. It is based on the key elements of a CPPS and their relations in order to establish a complete and efficiently integrated component in PPC. Essential findings are the identified and evaluated aspects of maintenance that are decision-relevant for the integration into production control. The most relevant aspects are taken into account in the developed cost function for integrated planning, thus providing a robust basis for the implementation of APC in industrial practice. Only through clear guidelines on how autonomous control behaves in the event of a failure, and on how the case of an increased risk of failure is taken into account, can acceptance for the implementation of APC be achieved. Against this background, the developed cost-based model contributes to bringing approaches to APC a further step towards implementation maturity, and thus provides an innovative approach to communication between production and maintenance planning, both from a practical and a scientific point of view. However, further research questions remain open:
1) The implementation of the present process model in a real production environment and a corresponding evaluation of the benefits in industrial practice represent the logical next step. This requires a well-thought-out roadmap, since such an implementation deeply affects different areas of a production system. In addition, there is the challenge of preparing personnel for the new way of working with autonomous production control and providing appropriate qualification measures in good time.
2) The present model is limited to the mapping of a manufacturing area and neglects at this point the dependencies with respect to the higher-level planning of upstream or downstream areas of the production system.
In particular, the challenges of integrating autonomous and human agents have to be addressed.
3) Similarly, the impact of short-term production control at the operational level on the tactical and strategic levels, such as production controlling, poses exciting challenges for further research activities. In particular, integration into maintenance cost controlling, as outlined schematically in Section 6.2, can offer a significant contribution to quantifying the contribution of maintenance to the achievement of operational targets.
4) Approaches that make use of artificial intelligence methods represent interesting alternatives to the relatively simple market-based model used in the present model. In this context, reinforcement learning approaches in particular represent an alternative which, in the view of the authors, should be explored in the future.
5) In order to be able to apply this approach easily and quickly to further use cases in the future, research should be conducted in the direction of automated parameter optimization, for example by means of simulation studies.

Acknowledgement
This work has been supported by the European Commission through the H2020 project EPIC (grant no. 739592).

Figure 3. Integration into maintenance cost controlling.

References
[1] T. Bauernhansl, R. Miehe, Industrielle Produktion – Historie, Treiber und Ausblick. Fabrikbetriebslehre 1, Springer Vieweg, Berlin, Heidelberg, 2020, pp. 1-33.
[2] D. Spath, E. Westkämper, H.-J. Bullinger, H.-J. Warnecke, Neue Entwicklungen in der Unternehmensorganisation. Springer Vieweg, 2017.
[3] E. Rauch, P. Dallasega, D. T. Matt, Complexity reduction in engineer-to-order industry through real-time capable production planning and control. Production Engineering, 12(3-4), 2018, pp. 341-352. doi: 10.1007/s11740-018-0809-0
[4] S. Luke, C. Cioffi-Revilla, L. Panait, K. M. Sullivan, G. C.
Balan, MASON: a multiagent simulation environment, Simulation, 81(7), 2005, pp. 517-527. doi: 10.1177/0037549705058073
[5] V. Gallina, L. Lingitz, M. Karner, A new perspective of the cyber-physical production planning system, 16th IMEKO TC10 Conference, Berlin, Germany, 3-4 September 2019. Online [accessed 8 September 2021] https://www.imeko.org/publications/tc10-2019/imeko-tc10-2019-008.pdf
[6] A. Kinz, R. Bernerstaetter, H. Biedermann, Lean smart maintenance – efficient and effective asset management for smart factories. MOTSP 2016, Porec, Istria, Croatia, 1-3 June 2016, 8 pp.
[7] O. Schmiedbauer, H. T. Maier, H. Biedermann, Evolution of a lean smart maintenance maturity model towards the new age of Industry 4.0, 2020. doi: 10.15488/9649
[8] F. Pauker, T. Frühwirth, B. Kittl, W. Kastner, A systematic approach to OPC UA information model design. Procedia CIRP, 57, 2016, pp. 321-326. doi: 10.1016/j.procir.2016.11.056
[9] M. Ulrich, D. Bachlechner, Wirtschaftliche Bewertung von KI in der Praxis, HMD Praxis der Wirtschaftsinformatik, 57(1), 2020, pp. 46-59 [in German]. doi: 10.1365/s40702-019-00576-9
[10] F. Ansari, R. Glawar, T. Nemeth, PriMa: a prescriptive maintenance model for cyber-physical production systems, International Journal of Computer Integrated Manufacturing, 32(4-5), 2019, pp. 482-503. doi: 10.1080/0951192x.2019.1571236
[11] M. Henke, T. Heller, Smart Maintenance – der Weg vom Status quo zur Zielvision. acatech Studie, 2019, 68 pp. [in German].
[12] E. Uhlmann, E. Hohwieler, M. Kraft, Selbstorganisierende Produktion, Agenten intelligenter Objekte koordinieren und steuern den Produktionsablauf. Fraunhofer IPK Berlin, GITO Verlag, Berlin, 2013, pp. 57-61 [in German].
[13] L. Monostori, B. Kádár, T. Bauernhansl, S. Kondoh, S. Kumara, G. Reinhart, O. Sauer, G. Schuh, W. Sihn, K. Ueda, Cyber-physical systems in manufacturing, CIRP Annals, 65(2), 2016, pp. 621-641. doi: 10.1016/j.cirp.2016.06.005
[14] B. Vogel-Heuser, D. Schütz, T. Schöler, S. Pröll, S.
Jeschke, D. Ewert, O. Niggemann, S. Windmann, U. Berger, C. Lehmann, Agentenbasierte cyber-physische Produktionssysteme. Anwendungen für die Industrie 4.0, atp magazin, 57(09), 2016, pp. 36-45 [in German].
[15] F. Förster, A. Schier, M. Henke, M. ten Hompel, Dynamische Risikoorientierung durch Predictive Analytics am Beispiel der Instandhaltungsplanung. Logistics Journal: Proceedings, 2019(12), pp. 1-9 [in German]. doi: 10.2195/lj_proc_foerster_de_201912_01
[16] N. O. Fernandes, T. Martins, S. Carmo-Silva, Improving materials flow through autonomous production control. Journal of Industrial and Production Engineering, 35(5), 2018, pp. 319-327. doi: 10.1080/21681015.2018.1479895
[17] J. Zhang, Multi-Agent-Based Production Planning and Control, John Wiley & Sons, 2017, ISBN 9781118890080 (PDF).
[18] G. Kasakow, N. Menck, J. C. Aurich, Event-driven production planning and control based on individual customer orders, Procedia CIRP, 57, 2016, pp. 434-438. doi: 10.1016/j.procir.2016.11.075
[19] R. Cupek, A. Ziebinski, L. Huczala, H. Erdogan, Agent-based manufacturing execution systems for short-series production scheduling. Computers in Industry, 82, 2016, pp. 245-258. doi: 10.1016/j.compind.2016.07.009
[20] R. Glawar, F. Ansari, C. Kardos, K. Matyas, W. Sihn, Conceptual design of an integrated autonomous production control model in association with a prescriptive maintenance model (PriMa). Procedia CIRP, 80, 2019, pp. 482-487. doi: 10.1016/j.procir.2019.01.047
[21] H. Meissner, R. Ilsen, J. C. Aurich, Analysis of control architectures in the context of Industry 4.0. Procedia CIRP, 62, 2017, pp. 165-169. doi: 10.1016/j.procir.2016.06.113
[22] S. Grundstein, S. Schukraft, M. Görges, B. Scholz-Reiter, An approach for applying autonomous production control methods with central production planning. Int. J. Syst. Appl. Eng. Dev., 7(4), 2013, pp. 167-174. Online [accessed 8 September 2021] https://www.naun.org/main/upress/saed/d012014-130.pdf
[23] L. Martins, N. O. Fernandes, M. L. R.
Varela, Autonomous production control: a literature review. In: International Conference on Innovation, Engineering and Entrepreneurship, 2018, pp. 425-431. doi: 10.1007/978-3-319-91334-6_58
[24] S. Mantravadi, C. Li, C. Møller, Multi-agent manufacturing execution system (MES): concept, architecture & ML algorithm for a smart factory case, Proceedings of the 21st International Conference on Enterprise Information Systems, SciTePress Science and Technology Publications, 2019, pp. 477-482. doi: 10.5220/0007768904770482
[25] D. Pantförder, F. Mayer, C. Diedrich, P. Göhner, M. Weyrich, B. Vogel-Heuser, Agentenbasierte dynamische Rekonfiguration von vernetzten intelligenten Produktionsanlagen. In: Handbuch Industrie 4.0 Bd. 2, 2017, pp. 31-44 [in German]. doi: 10.1007/978-3-658-04682-8_7
[26] M. Hoffmann, J. Aro, C. Büscher, T. Meisen, Intelligente Produktionssteuerung und Automatisierung, Productivity, GITO Verlag, Berlin, 2016, pp. 17-20 [in German].
[27] I. Graessler, A. Poehler, Integration of a digital twin as human representation in a scheduling procedure of a cyber-physical production system, IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 10-13 December 2017, pp. 289-293. doi: 10.1109/ieem.2017.8289898
[28] S. Mayer, C. Endisch, Adaptive production control in a modular assembly system based on partial look-ahead scheduling, IEEE International Conference on Mechatronics (ICM), Ilmenau, Germany, 18-20 March 2019, vol. 1, pp. 293-300. doi: 10.1109/icmech.2019.8722904
[29] J. Zou, Q. Chang, X. Ou, J. Arinez, G. Xiao, Resilient adaptive control based on renewal particle swarm optimization to improve production system energy efficiency. Journal of Manufacturing Systems, 50, 2019, pp. 135-145. doi: 10.1016/j.jmsy.2018.12.007
[30] T. Jamrus, C.-F. Chien, M. Gen, K.
Sethanan, Hybrid particle swarm optimization combined with genetic operators for flexible job-shop scheduling under uncertain processing time for semiconductor manufacturing, IEEE Transactions on Semiconductor Manufacturing, 31(1), 2018, pp. 32-41. doi: 10.1109/tsm.2017.2758380
[31] D. Gyulai, A. Pfeiffer, B. Kádár, L. Monostori, Simulation-based production planning and execution control for reconfigurable assembly cells. Procedia CIRP, 57, 2016, pp. 445-450. doi: 10.1016/j.procir.2016.11.077
[32] A. Kuhnle, N. Röhrig, G. Lanza, Autonomous order dispatching in the semiconductor industry using reinforcement learning. Procedia CIRP, 79, 2019, pp. 391-396. doi: 10.1016/j.procir.2019.02.101
[33] S. Erol, W. Sihn, Intelligent production planning and control in the cloud – towards a scalable software architecture. Procedia CIRP, 62, 2017, pp. 571-576. doi: 10.1016/j.procir.2017.01.003
[34] J. Vallhagen, T. Almgren, K. Thörnblad, Advanced use of data as an enabler for adaptive production control using mathematical optimization – an application of Industry 4.0 principles.
Procedia Manufacturing, 11, 2017, pp. 663-670. doi: 10.1016/j.promfg.2017.07.165
[35] F. Wang, Y. Lu, F. Ju, Condition-based real-time production control for smart manufacturing systems, 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20-24 August 2018, pp. 1052-1057. doi: 10.1109/coase.2018.8560389
[36] R. Glawar, F. Ansari, Z. J. Viharos, K. Matyas, W. Sihn, A cost-based model for integrating maintenance strategies in autonomous production control. 17th IMEKO TC10 Virtual Conference, 20-22 October 2020, pp. 258-264. Online [accessed 8 September 2021] https://www.imeko.org/publications/tc10-2020/imeko-tc10-2020-037.pdf
[37] S. Rötzer, W.
Schwaiger, Forschungsbericht zum Projekt „Kosten und CO2-Emissionen im Produktionsnetzwerk von Magna Europe", in: H. Biedermann, Industrial Engineering und Management. Technoökonomische Forschung und Praxis, Springer Gabler, Wiesbaden, 2016, pp. 237-246 [in German].
[38] M. Haoues, M. Dahane, K. N. Mouss, N. Rezg, Production planning in integrated maintenance context for multi-period multi-product failure-prone single-machine, IEEE 18th Conference on Emerging Technologies & Factory Automation (ETFA), Cagliari, Italy, 10-13 September 2013, pp. 1-8. doi: 10.1109/etfa.2013.6647980
[39] A. Berrichi, F. Yalaoui, Bi-objective artificial immune algorithms to the joint production scheduling and maintenance planning, IEEE International Conference on Control, Decision and Information Technologies (CoDIT), Hammamet, Tunisia, 6-8 May 2013, pp. 810-814. doi: 10.1109/codit.2013.6689647
[40] R. Glawar, F. Ansari, K. Matyas, Evaluation of economic plausibility of integrating maintenance strategies in autonomous production control: a case study, automotive industry, 2021, 7th IFAC Symposium on Information Control Problems in Manufacturing (in print).
[41] F. Ansari, Meta-analysis of knowledge assets for continuous improvement of maintenance cost controlling. Faculty of Science and Technology, thesis, University of Siegen, 2014, 169 pp.

Bias-induced impedance effect of the current-carrying conductors

ACTA IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, pp. 88-97

Sioma Baltianski 1
1 Wolfson Dept.
of Chemical Engineering, Technion – Israel Institute of Technology, Haifa, Israel

Section: Research Paper
Keywords: bias-induced impedance; impedance spectroscopy; ZBI-effect
Citation: Sioma Baltianski, Bias-induced impedance effect of the current-carrying conductors, Acta IMEKO, vol. 10, no. 2, article 13, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-13
Section Editor: Giuseppe Caravello, Università degli Studi di Palermo, Italy
Received January 18, 2021; in final form April 15, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Sioma Baltianski, e-mail: cesema@technion.ac.il

1. Introduction

The study of various physical objects using impedance spectroscopy under applied DC bias is widespread. These objects may be of different physical nature: semiconductor structures [1], electroceramic structures [2], [3], electrochemical objects [4], etc. In all cases, the external offset sets the operating point in the vicinity of which the impedance measurements are made. The set offset (bias) makes it possible to tie the parameters obtained from impedance measurements to the physical state of the object under study.

The idea to investigate current-carrying conductors under the influence of bias arose from the initially puzzling behaviour of a fairly simple object used as a load while testing different impedance meters. The first research results were published in [5]. Possible measurement errors were checked in various ways, and no literature sources describing the detected effects were found.

This work relates to impedance spectroscopy for several reasons.
On the one hand, based on the described phenomena, sensitive elements can be created that require impedance spectroscopy as the method for extracting informative parameters. On the other hand, it is a challenge to build more sensitive impedance meters (with an appropriate offset function) that allow obtaining reliable data under relatively difficult measurement conditions: low frequency and a high tangent of the loss angle.

The goal of this work is to reveal the discovered properties, which are important in the context of impedance research. Low-frequency impedance spectroscopy was used as the method. The complex conductivity function and its determination by approximating experimental data utilising different models serve as the theoretical basis [6]-[8].

The impedance of current-carrying conductors is well known and described in the literature [9]. As an example, we give the behaviour of the impedance of a silver conductor 0.5 m long and 0.25 mm in diameter. The initial experiment in Figure 1 without bias (at Vdc = 0 V) represents a typical behaviour of the real and imaginary parts of the impedance in the frequency range 1 MHz … 100 mHz. First, we use the full frequency range to verify the processes. Later, we will be interested in events only in the low-frequency part of the spectrum.

Abstract — The paper presents the previously unstudied properties of current-carrying conductors utilising impedance spectroscopy. The purpose of the article is to present discovered properties that form a significant context of impedance research. The methodology is based on the superposition of test signals and bias affecting the objects under study. These are the main results obtained in this work: the studied objects have an additional low-frequency impedance during the passage of an electric current; the bias-induced impedance effect (ZBI-effect) is noticeably manifested in the range of 0.01 Hz … 100 Hz, and it has either a capacitive or an inductive nature, or both, depending on the bias level (current density) and material type. The experiments in this work were done using open and covered wires made of pure metals, alloys, and non-metal conductors such as graphite rods. These objects showed the ZBI-effect, which distinguishes them from other objects, such as standard resistors of the same rating, in which this phenomenon does not occur. The ZBI-effect was modelled by equivalent circuits. Particular attention is paid to assessing the consistency of the experimental data. Understanding the nature of this effect can give impetus to the development of a new type of instrument in various fields.

The initial and all subsequent potentiostatic experiments were carried out at a test-signal amplitude Vac = 10 mV. This signal corresponds to a small-signal approach. The graph is quite trivial at zero bias. Three approximate frequency domains can be distinguished: the high-frequency (HF) region of the spectrum, f = 1 MHz … 100 kHz; the mid-frequency (MF) region, f = 100 kHz … 100 Hz; and the low-frequency (LF) region, f = 100 Hz … 0.1 Hz. In the HF region there is an increase of the real part Re(Z) and the imaginary part Im(Z) of the impedance with increasing frequency. This part is well described by a parallel connection of resistance Rp and inductance Lp (Figure 1). In the MF region, a constant value of Re(Z) is observed; the series resistance Rs must be added to the model. A linear decrease in the imaginary component occurs with decreasing frequency (log-log scale).
The relative noise level increases at the same time. This noise is natural and associated with the capabilities of the measuring system. The LF part of the spectrum at Vdc = 0 V demonstrates a constant Re(Z) and strong noise in Im(Z). This area is not informative for interpretation using the imaginary part of the impedance. Measurements were made inside a Faraday cage to improve the signal-to-noise ratio, mainly for the imaginary impedance component.

In this simple model, the specific resistance of the conductor determines the series resistance Rs. The inductance Lp is determined mainly by the length of the conductor. The parallel resistance Rp connected to the inductance characterises the active loss in the conductor due to the skin effect at high frequencies. The experimental values at zero bias correspond to the expected values and are quite common.

The situation changes significantly when measurements are carried out under bias. The experimental characteristics are shown in the same Figure 1 at biases Vdc = 0.09 V … 0.9 V. The increment of bias was 0.09 V, and the measuring test signal was the same, namely Vac = 10 mV. The HF and MF imaginary parts of the impedance do not change with bias. However, the LF part changes considerably. This response can be reflected by including an additional non-linear impedance Zbi (Figure 1). A corresponding increase of the real part of the impedance occurs in the considered region, which meets the Kramers–Kronig relations [6]. Besides, we observe a monotonic change in the real component of the impedance over the entire frequency range (the model element Rs). This change is caused by a shift in the temperature of the conductor due to the bias. The model indicated in Figure 1 is intuitive, but it describes and fits the object under study well in the specified frequency range.
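The zero-bias model just described (series Rs with a parallel Rp–Lp branch) can be sketched numerically. The component values below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def wire_model(f, Rs=0.20, Lp=0.6e-6, Rp=1.0):
    """Impedance of the Figure-1 high/mid-frequency model:
    series resistance Rs plus a parallel Rp-Lp branch.
    Component values are illustrative assumptions only."""
    w = 2 * np.pi * f
    z_branch = (1j * w * Lp * Rp) / (Rp + 1j * w * Lp)  # Rp parallel to Lp
    return Rs + z_branch

f = np.logspace(-1, 6, 8)   # 0.1 Hz ... 1 MHz
z = wire_model(f)
# At low frequency the inductor shorts out the Rp branch, so Z -> Rs (real);
# toward 1 MHz both Re(Z) and Im(Z) grow, as described in the text.
```

At low frequency the model reduces to the plain series resistance, which is why the LF imaginary part carries no information at zero bias.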
The bias phenomenon is sometimes difficult to detect, because the experimental data are often at the limit of the sensitivity of the measuring instruments, namely, the limitations on phase measurement at a high value of the loss tangent. A potentiostat/galvanostat Biologic SP-240 (Biologic Science Instruments) was used as the measuring instrument. In doubtful cases, the data were verified using a potentiostat/galvanostat Gamry Reference 3000 (Gamry Instruments). Biologic's contour plot defines an error of not more than 0.3 % and 0.3° in the desired measuring range [10]. Thus, the experimental data in Figure 1 are reliable. Moreover, in this case we are interested in relative changes in the impedance components.

A homemade four-wire sample holder was used to connect the samples under test (SUT), as shown in Figure 2. The electrode designation is taken from the manual [10]. This sample holder gives accurate measurement of low-resistance objects as well as negligible influence of contact phenomena. In addition to the devices used in this work (Biologic and Gamry), which are based on a frequency response analyser, devices of the lock-in amplifier type can also be used (see for example [11]).

We used standard non-wire-wound resistors as references to verify the measuring results and estimate artifacts. This phenomenon was not observed when dealing with standard resistors of the same rating as the SUT. The impedance change in the LF region can be caused not only by bias using direct current but also by alternating current – by a large-amplitude test signal at zero bias. It should be emphasised that in this work we use a small-signal approach, in which a change in the LF impedance is not observed at zero bias. Thus, the occurrence of the additional impedance in the LF region is determined solely by the level of bias. This physical phenomenon is named here a bias-induced impedance (ZBI-effect). In Section 2 we systematise the experimental results.
Section 3 is devoted to the interpretation of the experimental results using electrical models. Section 4 discusses significant differences in the impedance behaviour of open and covered objects. Special attention is paid to checking the consistency of the experimental data: this is outlined in Section 5. Section 6 discusses and proves the main hypotheses that explain the revealed effect. Finally, the main results are presented in the concluding section.

Figure 1. Re(Z) and Im(Z) of the silver wire at Vdc = 0 V … 0.9 V with bias step 0.09 V; length 500 mm and diameter 0.25 mm; frequency range 1 MHz … 0.1 Hz.

Figure 2. Four-wire sample holder. 1 – high force-and-sense Kelvin clip; 2 – sample under test (wire); 3 – working electrode; 4 – counter electrode; 5 – low force-and-sense Kelvin clip; 6 – reference electrode; 7 – working sense electrode.

2. Systematisation of the experimental results

We studied pure metals: nickel, copper, silver, tungsten, platinum, gold; alloys: constantan, nichrome, manganin; and non-metals: graphite rods. Although the frequency scan started from 1 MHz toward low frequencies, the analysis of the results was carried out only for the LF part of the spectrum, where the ZBI-effect manifested itself. According to the type of ZBI-effect, all studied materials were grouped into three categories: (i) the ZBI-effect has a capacitive nature, (ii) an inductive nature, and (iii) mixed, when both types of reactance occur. Table 1 summarises the properties of the investigated materials. Below are the experimental characteristics of one representative of each group.

2.1.
Pure metals

We find significant changes in the behaviour of the imaginary part of the impedance below a critical frequency of about 30 Hz (resonance point) when bias is applied (Figure 1). From this point toward LF, the inductive nature of the reactance changes sharply to a capacitive nature. We observe a monotonic change in the imaginary and real components of the impedance depending on the applied bias. Changes in the real part of the impedance in the mid-frequency region also occur; however, this is due to a change in the temperature of the conductor upon bias. For example, with a maximum bias of 0.9 V for this experiment and a conductor resistance of about 0.23 Ω, the current flowing through the conductor will be approximately 3.9 A. The power dissipation will be approximately 3.5 W, which will lead to a certain heating of the conductor and a consequent increase in its resistance.

Figure 3 shows a Nyquist plot of the LF part of the same experiment shown in Figure 1. At mid and high frequencies, there is no change in the behaviour of the imaginary component of the impedance under the influence of bias. Henceforward, we will limit the visualisation to the LF part of the experimental data in the form of Nyquist plots. Characteristics similar in appearance, but differing numerically, were obtained in studies of the other pure metals: nickel, copper, tungsten, platinum, and gold.

2.2. Alloys

The impedance characteristics of alloys as a function of bias differ from those of pure metals. Manganin demonstrated an inductive nature of the reactance at moderate bias. In our nichrome and constantan samples, the ZBI-effect had both capacitive and inductive reactance; the nature of the reactance depends on the level of bias. As an example, studies of a nichrome sample with a diameter of 0.1 mm and a length of 57 mm are presented in Figure 4. The experiment was carried out using a bias in the range of 2.8 V … 8.5 V in increments of 0.1 V.
The data were taken over the frequency spectrum 1 MHz … 0.1 Hz, but only the range of interest is presented here: 100 Hz … 0.1 Hz. Three bias regions were identified. At biases of 2.8 V … 4.6 V, a capacitive nature of the reactance was observed. In the range of 4.6 V … 6.7 V, an increasing portion of inductive reactance added to a decreasing portion of capacitive reactance. With a subsequent increase in bias, the reverse process occurs: in the bias range of 6.7 V … 8.5 V, the capacitive nature of the reactance was again observed, the same as at small bias. Figure 4 shows a transient state where both types of reactance are present.

Figure 3. Nyquist plot of silver wire at Vdc = 0 V … 0.9 V with bias step 0.09 V; length 500 mm and diameter 0.25 mm.

Figure 4. Nyquist plot of nichrome wire at Vdc = 4.6 V … 6.7 V with bias step 0.1 V; length 57 mm and diameter 0.1 mm.

Table 1. Systematisation of the investigated materials by the nature of the ZBI-effect.

Type of conductor | Nature of ZBI-effect
Pure metals | Capacitive
Alloys | Mixed: capacitive and inductive
Non-metals (graphite) | Inductive

2.3. Non-metals

Measurements were carried out on graphite rods. Samples of various diameters were investigated. Figure 5 shows Nyquist plots of the impedance of a graphite rod 0.5 mm in diameter and 57 mm in length. An inductive nature of the reactance was demonstrated over the entire range of biases.

3.
Interpretation using electrical models

First, we consider the simple case of interpreting the experimental data related to pure metals, in which the ZBI-effect of capacitive nature is manifested. As an example, Figure 6 presents the fitting result of the LF part of one of the experiments shown in Figure 3, specifically at bias Vdc = 0.72 V. The fitting was carried out using an impedance model consisting of a series resistor Rs connected to a parallel C1 and R1. The resistor Rs reflects the specific resistance of the sample under test and its geometry. This resistance varies with the applied bias, which affects the temperature of the sample (see the right shift of the characteristics in Figure 3 with increasing bias). The parallel circuit C1-R1 describes exactly the ZBI-effect. Figure 6 shows a good fitting quality. A similar approach can be used for fitting materials in which the ZBI-effect is purely inductive (Figure 5) by using an LR circuit.

The situation becomes more complicated in the case of a complex ZBI-effect (Figure 4). One of the possible electrical models that satisfactorily approximates the experimental data is embedded in Figure 7. A system function in the form of a rational fraction [12] that corresponds to this model has the following form:

Z(s) = (A0 + A1·s + A2·s²) / (1 + B1·s + B2·s²) ,     (1)

where s = j·2π·f and Ai, Bi are unknown coefficients. Although the system function uniquely approximates the experimental data, its coefficients are difficult to fill with physical meaning. It is easier to do this using circuit functions, which reflect the topology of the corresponding equivalent circuits [12]. The circuit function corresponding to the model in Figure 7 is described by the following equation:

Z(s) = Rs + R1 / (1 + τC·s) + L1·s / (1 + τL·s) ,     (2)

where τC = R1·C1, τL = L1/R2, and Rs, R1, R2, C1, L1 are the parameters requested from fitting. The system function (1) covers several equivalent circuits; Figure 7 represents one of the possible implementations.
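Using the fitted values reported for Figure 6 (Rs = 0.214 Ω, R1 = 0.07 Ω, C1 = 21.34 F), the capacitive ZBI model can be evaluated directly; a minimal sketch:

```python
import numpy as np

# Fitted values reported for Figure 6 (silver wire at Vdc = 0.72 V)
RS, R1, C1 = 0.214, 0.07, 21.34   # ohm, ohm, farad

def z_model(f):
    """Series Rs plus a parallel R1-C1 branch: the capacitive ZBI model."""
    w = 2 * np.pi * f
    return RS + R1 / (1 + 1j * w * R1 * C1)

f = np.logspace(1, -1, 201)        # 10 Hz down to 0.1 Hz, as in Figure 6
z = z_model(f)
# The locus runs from ~Rs at 10 Hz toward Rs + R1 at DC, tracing a
# capacitive semicircle (-Im(Z) > 0) of diameter R1.
f_apex = f[np.argmax(-z.imag)]     # apex at 1/(2*pi*R1*C1), about 0.107 Hz
```

The apex frequency 1/(2π·R1·C1) ≈ 0.107 Hz falls inside the fitted window, which is why the semicircle is fully resolved in Figure 6.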
The results of fitting utilising this circuit for one of the characteristics represented in Figure 4, specifically at bias Vdc = 5.3 V, are given in Figure 7. The selection of a suitable electrical model can be made empirically by iterating through the available set of models determined by the system function (1). To implement this process, the experimental data can be approximated using an acceptable set of equivalent circuits and available fitting programs, such as LEVM [6], or the method described in [13]. Figure 8 represents the dependencies of the model parameters versus bias for the silver wire, corresponding to the data shown in Figure 3.

Figure 5. Nyquist plot of graphite rod at Vdc = 0 V … 1 V with bias step 0.1 V; length 57 mm and diameter 0.5 mm.

Figure 6. Fitting result of the LF part of the data (f = 10 Hz … 0.1 Hz). Silver wire at Vdc = 0.72 V: Rs = 0.214 Ω; R1 = 0.07 Ω; C1 = 21.34 F.

Figure 7. Fitting result of the LF part of the data (f = 10 Hz … 0.1 Hz). Nichrome wire at Vdc = 5.3 V: Rs = 8.261 Ω; R1 = 0.622 Ω; C1 = 0.373 F; R2 = 1.018 Ω; L1 = 1.091 H.

The capacitance C1 increases exponentially with decreasing bias. This leads to a decrease in the contribution of the reactive component to the ZBI-effect. At the same time, a monotonic decrease in resistance R1 is observed. As a result, at zero bias the ZBI-effect has a vanishingly small magnitude. The resistance Rs reflects the change in resistivity as a function of temperature, which in turn depends on the flowing bias current. The temperature value (obtained via the resistivity) and the power dissipation for this experiment are shown in Figure 9. Similar results, differing in values, were obtained for the other pure metals.
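Plugging the Figure-7 parameters into circuit function (2) shows the mixed character of the nichrome response directly: the reactance is inductive at the lowest frequencies and capacitive above a crossover. A quick numerical check of where Im(Z) changes sign:

```python
import numpy as np

# Figure 7 fit (nichrome wire at Vdc = 5.3 V)
RS, R1, C1, R2, L1 = 8.261, 0.622, 0.373, 1.018, 1.091

def z_eq2(f):
    """Circuit function (2): Z = Rs + R1/(1 + tauC*s) + L1*s/(1 + tauL*s)."""
    s = 1j * 2 * np.pi * f
    tau_c = R1 * C1
    tau_l = L1 / R2
    return RS + R1 / (1 + tau_c * s) + L1 * s / (1 + tau_l * s)

f = np.logspace(-1, 2, 2000)          # 0.1 Hz ... 100 Hz
im = z_eq2(f).imag
# Im(Z) > 0 (inductive) at the lowest frequencies, < 0 (capacitive) above
# the crossover; the sign change sits near 0.5 Hz.
f_cross = f[np.argmin(np.abs(im))]
```

The computed crossover of about 0.47 Hz matches the transition point around 0.5 Hz that the sweep-rate experiments in Section 5 probe independently.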
The graphite-rod model behaves quite differently, as shown in Figure 10. First, the series resistance Rs decreases with bias due to a negative temperature coefficient (NTC). This distinguishes graphite from metals, which have a positive temperature coefficient of resistance (PTC), see Figure 8. Secondly, the inductance L1, together with the parallel resistance R1, decreases with decreasing bias. This nullifies the ZBI-effect at zero bias. The behaviour of the model parameters for alloys is more complex and is beyond the scope of this article.

4. The difference between open and covered objects

Some of the previously investigated objects were studied using various types of covering. This is necessary to validate the hypotheses put forward to explain the occurrence of the effect, as described in Section 6. The results of the study of current-carrying conductors covered with dielectric materials are quite informative for this purpose. Here, we present the results employing a shell in the form of thin Teflon and ceramic (alumina) tubes fitting quite tightly around the object under study. It is necessary to expand the frequency range toward lower frequencies, down to 1 mHz, to detect the effect of the covering on the impedance results. This requirement significantly increased the time of each experiment.

Figure 11 shows the real and imaginary parts of the impedance without covering. We used the galvanostatic mode of the measurement device (Biologic SP-240) and the same sample holder as previously.

Figure 8. Model parameters of silver wire as a function of bias; refer to the experiment data in Figure 3.

Figure 9. Power dissipation and temperature of silver wire; refer to the experiment data in Figure 3.

Figure 10. Model parameters of graphite rod as a function of bias; refer to the experiment data in Figure 5.

The full range of bias current was Idc =
0 A … 4 A with steps of 0.4 A, and the test signal was Iac = 100 mA (it satisfies the small-signal approach). Figure 12 and Figure 13 show the results for the silver wire inside alumina and Teflon tubes, respectively. A comparison between these experiments in the form of a Nyquist plot at one of the biases (Idc = 2 A) is shown in Figure 14. It can be seen with the naked eye that the open wire has one time constant while the covered conductors have two time constants. The Teflon covering shows more overlapping and more distributed impedance spectra. The fitting of the low-frequency part of the data (100 Hz … 1 mHz) using equivalent RC circuits is also presented in Figure 14. The fitting results are summarised in Table 2. The indexes in the equivalent circuits in the table have the following meaning: p – parallel connection and s – serial connection.

Evaluating the fitting results, we can say that these objects are quite satisfactorily approximated by models utilising lumped elements. A better result could be obtained for the conductor surrounded by Teflon using a Gaussian distribution function convolved into the impedance [14]; but for demonstrating the ZBI-effect under covering this is not essential. A simple calculation of the ratios of the time constants (τ = RC) taken from Table 2 gives the following values: τ2/τ1 = 247 in the case of the alumina covering and τ2/τ1 = 40 in the case of the Teflon covering.

Figure 11. Re(Z) and Im(Z) of the open silver wire; galvanostatic mode at amplitude Iac = 100 mA and bias Idc = 0 A … 4 A with step 0.4 A; length 500 mm and diameter 0.25 mm; frequency range 1 MHz … 1 mHz.
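The quoted ratios can be re-derived from the Table 2 entries, taking τ = RC for each parallel branch. Recomputing from the rounded published parameters gives approximately 244 and 42 rather than exactly 247 and 40; the small gap is consistent with rounding of the tabulated values:

```python
# Time-constant ratios tau2/tau1 = (R2p*C2p)/(R1p*C1p) from Table 2
# (values as published; results differ slightly from the quoted 247 and 40
# because the table entries are rounded)
alumina = (0.025 * 2.48e3) / (6.0e-3 * 42.3)   # ~244
teflon  = (0.013 * 1.86e3) / (6.2e-3 * 91.8)   # ~42
```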
Let us cite as an example the behaviour of an object exhibiting an inductive nature of the ZBI-effect, also in free and covered states. Figure 15 shows the resulting Nyquist graphs of a graphite rod at bias Idc = 0.9 A and test signal Iac = 100 mA in the open state and covered with alumina and Teflon tubes. It is important to emphasise that the imaginary part has the opposite sign compared with the previous graphs of the silver wire. The fitting results are summarised in Table 3. The ratio of the two time constants (τ = L/R) for the graphite rod covered by alumina is τ2/τ1 = 284. In the case of the Teflon covering, the ratio is τ2/τ1 = 22. The difference in the ratios of the time constants is similar to that found earlier in the experiments with the silver wire covered with the same materials.

5. Check of the data consistency

Current-voltage characteristics were acquired on the same samples to check the data set for internal consistency. A sweep rate of 1 mV/s, which is reasonable for our low frequency of 0.1 Hz (in the first studies), was selected. This speed allows obtaining quasistatic characteristics. Static and differential parameters, namely resistances, were calculated and compared with the parameters obtained from impedance measurements.

Figure 12. Re(Z) and Im(Z) of the silver wire inside the alumina tube. The same experimental conditions as in Figure 11.

Figure 13. Re(Z) and Im(Z) of the silver wire inside the Teflon tube. The same experimental conditions as in Figure 11.

Figure 14.
Nyquist plot of experimental and fitted data of the non-covered silver wire, the wire inside alumina, and the wire inside Teflon. Low-frequency part of the data: 100 Hz … 1 mHz at bias Idc = 2 A.

Table 2. Fitting results of the non-covered silver wire, the wire inside alumina, and the wire inside Teflon (according to Figure 14).

SUT | Equivalent circuit | Fit parameters: R/Ω; C/F
Open wire | (R1p + C1p)s + Rs | R1p = 0.032; C1p = 64.27; Rs = 0.185
Wire inside alumina | (R1p + C1p)s + (R2p + C2p)s + Rs | R1p = 6.0e-3; C1p = 42.3; R2p = 0.025; C2p = 2.48e3; Rs = 0.214
Wire inside Teflon | (R1p + C1p)s + (R2p + C2p)s + Rs | R1p = 6.2e-3; C1p = 91.8; R2p = 0.013; C2p = 1.86e3; Rs = 0.179

The I-V characteristics of a silver sample are shown in Figure 16. The parameters of the sample under test correspond to the parameters indicated in Figure 3. The setup for the I-V measurements was identical to the setup for the impedance measurements. Figure 16 represents the I-V curve, the static resistance Rstat = Vdc/Idc, and the differential resistance Rdiff = d(Vdc)/d(Idc). Parabolic spline interpolation was used for analytical differentiation. A fairly good accordance was obtained between the model parameters extracted from the impedance measurements and the parameters calculated from the current-voltage characteristics. The parameter Rs extracted from the impedance Zbi and the Rstat extracted from the I-V fit each other well, with an error of not more than 0.3 %. The total resistance Rsum = Rs + R1 found from the impedance measurements corresponds to the resistance Rdiff calculated from the I-V (Figure 16). As an example, the bias point Vdc = 0.72 V is marked in Figure 16 to indicate these correlations; it corresponds to the same bias point in Figure 6, Figure 8 and Figure 9.
The dependence of the power dissipation Pi + ΔP on the bias and the test signal at an operating point i has the form:

Pi + ΔP = (Vi + ΔV) · (Ii + ΔI) ,     (3)

therefore, the change in power due to the test signal alone is determined as

ΔP = Vi·ΔI + Ii·ΔV + ΔV·ΔI ,     (4)

where Vi, Ii are the voltage and current at the working point and ΔV, ΔI are the amplitudes of the voltage and current of the test signal. From (4) it can be noticed that with increasing bias the dissipated power caused by the test signal will increase. Hence, the temperature variation will increase, which will lead to a larger change of the resistivity under the influence of the test signal. This explains the magnification of the ZBI-effect with increasing bias in all the experiments.

The most mysterious case is the presence of both types of reactivity in the experiments with alloys (Figure 4 and Figure 7). It required an additional consistency check by an independent method. For these purposes, current-voltage characteristics with different sweep rates were used. The sweep rates were chosen to match the transition point around 0.5 Hz (Figure 7). The corresponding graphs of the I-V characteristics are shown in Figure 17. The setup was the same as for the impedance measurements. Figure 18 shows the graphs of the calculated values of the static resistance Rstat obtained from the I-V characteristics at different sweep rates (Figure 17). We can now see that, in the vicinity of the 5.3 V bias voltage, the static resistance changes its trend: from decreasing with increasing bias at the 0.1 mV/s sweep to increasing with increasing bias at the 100 mV/s sweep. The sign of the differential resistance, which is the essence of impedance measurements, changes accordingly. Thus, there is consistency between the measurements in the frequency domain (Figure 7) and the time domain (Figure 18).

Several temperature studies were carried out to check the data consistency. The platinum conductor was heated by a separate heating element at various biases. Figure 19 shows the results for the LF part of the data, 100 Hz … 0.1 Hz, at different settings of the temperature controller: at room temperature, at 165 °C, and at 265 °C. The actual temperature of the wire can be calculated from the resistivity of the wire – via the real part of the impedance at 100 Hz or using a DC measurement. For the qualitative analysis, knowing the actual temperature of the wire is not essential in our case. The main results of this experiment are as follows. The ZBI-effect does not occur in the absence of bias at any temperature. At the same time, this effect takes place in the presence of a bias at any temperature.

Figure 15. Nyquist plot of experimental and fitted data of the non-covered graphite rod, the rod inside alumina, and the rod inside Teflon. Low-frequency part of the data: 100 Hz … 1 mHz at bias Idc = 0.9 A.

Table 3. Fitting results of the non-covered graphite rod, the rod inside alumina, and the rod inside Teflon (according to Figure 15).

SUT | Equivalent circuit | Fit parameters: R/Ω; L/H
Open rod | (R1p + L1p)s + Rs | R1p = 0.160; L1p = 0.343; Rs = 1.09
Rod inside alumina | (R1p + L1p)s + (R2p + L2p)s + Rs | R1p = 0.01; L1p = 1.33e-3; R2p = 0.121; L2p = 4.572; Rs = 1.128
Rod inside Teflon | (R1p + L1p)s + (R2p + L2p)s + Rs | R1p = 0.041; L1p = 0.027; R2p = 0.095; L2p = 1.38; Rs = 1.041

Figure 16. I-V curve; static and differential resistance curves of silver wire with the same dimensions as in Figure 3.

Figure 17. I-V curves with voltage sweeps: 0.1, 1, 10 and 100 mV/s. Nichrome wire, the same dimensions as in Figure 4.
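The same consistency check applied to Table 3, with τ = L/R for each parallel branch, reproduces the quoted ratios exactly:

```python
# Time-constant ratios tau2/tau1 = (L2p/R2p)/(L1p/R1p) from Table 3
alumina = (4.572 / 0.121) / (1.33e-3 / 0.01)   # ~284
teflon  = (1.38 / 0.095) / (0.027 / 0.041)     # ~22
```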
It is also noticeable that the external temperature additively shifts the values of the real impedance component, almost without affecting the imaginary one. A good illustration in this graph is the case in which, at room temperature and bias Idc = 2 A, the actual conductor temperature is practically identical to the case in which the temperature controller reads 265 °C with no bias applied to the conductor (Idc = 0 A).

6. Discussions

The impedance of quite ordinary current conductors under bias in the low-frequency region demonstrates the remarkable properties named here the ZBI-effect. The term "current-carrying conductor" refers to extended conductors designed to carry electric current. The research outlined in this article is also relevant to other types of objects: conductive or semiconductive materials of various compositions and shapes. An example relates to experiments carried out with thermistors (they are outside the scope of this paper).

The ZBI-effect is counter-intuitive. It was natural to assume that measuring the impedance of an object at the lowest frequencies approaches the measurement result at direct current. This is what happens in the absence of bias. Yet, if a bias is applied to the object, this intuition fails. It looks paradoxical, but the infra-low frequency is not an asymptotic approximation to DC when measuring the impedance of a conductor under bias.

Now, the question is how to explain the occurrence of such significant reactive elements in the impedance models of the studied objects. In particular, the capacitance of pure metals reaches the order of farads (Figure 8); the inductance of graphite rods reaches hundreds of millihenry (Figure 10). In reality, of course, such reactances do not exist in the studied objects. This phenomenon may be called a "phantom" reactance. The effect can be explained by considering two necessary properties of the studied objects.
the first is nonlinearity and the second is inertia. the nonlinearity of current conductors is of the second, indirect kind. this property distinguishes them from objects with nonlinearity of the first, direct kind, such as p-n junctions or schottky diodes. nonlinearity of the first kind reveals itself directly, without any delay. nonlinearity of the second kind manifests itself through the dependence of resistivity on temperature – a property of the studied material – and has a significant delay.

the bias sets a specific operating point, and the test signal acts in the vicinity of this point. no matter how small the test signal is, it changes the temperature of the investigated object around the operating point with a certain delay. consequently, the resistance of the material under investigation changes cyclically with the test signal; in effect, the resistance is modulated by the test signal. the difference between the phase of this modulation and the phase of the acting test signal determines the occurrence of the phantom reactance. if the investigated object has a ptc property, a capacitive reactance arises; this behaviour is typical of pure metals or ptc thermistors. if the object under study has an ntc property, an inductive reactance appears; this is specific, for example, to graphite and ntc thermistors.

in terms of electrical measurements, impedance is properly defined only for systems satisfying stationarity [6]. in our case we have a dynamic structure, with one qualification: the system changes cyclically and synchronously with the test signal, and the amplitude and phase response depends on the frequency of the test signal. a purely active resistance, which changes synchronously with the test signal but with

figure 18.
static resistances calculated from the i-v curves represented in figure 17.

figure 19. nyquist plot of platinum wire at different temperatures and biases; length 83 mm and diameter 0.2 mm; frequency range 100 hz … 0.1 hz.

a different phase relative to the test signal, generates a reaction that looks like a complex resistance. as a result, a complex value will be estimated during the measurements as the impedance of the studied object.

successive experiments revealed a significant feature: the time constant following from the zbi-effect's model (τ = rc for ptc objects and τ = l / r for ntc objects) depends only weakly on the applied bias. it is reasonable to assume that these time constants are related to the time constant of the heat exchange between the object under study and the environment (air in our initial study case). studies using various coverings of current-carrying conductors support this hypothesis. there are two time constants (figure 12 and figure 13): the first is apparently determined by the thermal properties of the covering and the thermal interaction between the conductor and the covering, while the second is determined by the thermal interaction between the covering and the environment. an accurate description of these thermal processes requires special knowledge and is beyond the scope of this article. however, the discovered effect could make it possible to develop specialised sensors for assessing the thermal conductivity of various materials. this approach may be an alternative to the methods described in [15], [16].

the experimental results obtained on the nichrome alloy motivate additional ideas. in particular, both capacitive and inductive reactive components are observed at a bias vdc = 5.3 v (figure 7) and in the areas close to it (figure 4), depending on the frequency of the test signal.
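the mechanism described in this discussion – a resistance modulated by the test signal through thermal inertia – can be reproduced with a toy simulation. the model and every parameter value below are illustrative assumptions, not the authors' setup: a resistor r = r0·(1 + α·θ) is driven by a biased sinusoidal current, the temperature rise θ follows the dissipated power with a first-order lag, and the fundamental-frequency ratio v/i is taken as the measured impedance:

```python
import numpy as np

def phantom_impedance(alpha, f=1.0, idc=1.0, ia=0.01,
                      r0=1.0, c_th=1.0, g_th=1.0):
    """impedance seen by a small test current ia at frequency f riding on a
    dc bias idc, for r = r0*(1 + alpha*theta) with the first-order thermal
    lag c_th * dtheta/dt = i^2 * r - g_th * theta (all values illustrative)."""
    w = 2.0 * np.pi * f
    steps_per_period = 2000
    dt = 1.0 / (f * steps_per_period)
    n_settle = 10 * steps_per_period      # let the thermal transient die out
    n_meas = steps_per_period             # then project one full period
    theta = idc ** 2 * r0 / g_th          # start near the bias operating point
    acc_v = acc_i = 0j
    t = 0.0
    for k in range(n_settle + n_meas):
        i = idc + ia * np.sin(w * t)
        r = r0 * (1.0 + alpha * theta)
        v = i * r
        if k >= n_settle:                 # fourier projection at the test frequency
            ph = np.exp(-1j * w * t)
            acc_v += v * ph
            acc_i += i * ph
        theta += dt * (i * i * r - g_th * theta) / c_th
        t += dt
    return acc_v / acc_i

# a ptc resistor (alpha > 0) yields a capacitive (negative) imaginary part,
# an ntc resistor (alpha < 0) an inductive (positive) one, and with alpha = 0
# (no temperature dependence) the apparent reactance vanishes
```

the toy model contains no capacitor or inductor at all; the phase lag between the power modulation and the temperature response alone produces the "phantom" reactance, with the sign set by the sign of the temperature coefficient, in line with the qualitative explanation above.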
specifically, in figure 7 the capacitive behaviour occurs in the frequency range of 10 hz to 0.5 hz and the inductive behaviour in the range of 0.5 hz to 0.1 hz. such effects are possible if we assume that the temperature coefficient of resistance (tcr) has dynamic properties; in other words, the tcr changes its character depending on the rate of temperature change. in turn, the rate of temperature variation at the selected operating point is related to the frequency of the test signal. therefore, a ptc feature is observed in the higher frequency range and an ntc feature in the lower frequency range (figure 7). this assumption was confirmed by i-v experiments with different sweep rates (figure 17 and figure 18).

impedance spectroscopy may thus provide a more sensitive tool for assessing the dynamic properties of the tcr. since the dynamic properties of the tcr depend on the composition of the objects under study, there is a potential for indirect composition estimation by locating the tcr sign change with impedance spectroscopy.

the revealed new possibilities (i.e., evaluating thermal conductivity and estimating the composition of a material via the dynamic tcr, both arising from the discovered zbi-effect) may represent a significant contribution to the scientific and technical community, in particular to the theory and practice of impedance spectroscopy of objects that change their parameters cyclically and synchronously with the test signal. understanding the nature of this effect can foster the development of new types of instruments in various fields and scientific institutions.

7. conclusions

in this work, the phenomenon of bias-induced impedance was described. the effect is most evident in the low-frequency spectra of the reactive part of the impedance. its different manifestations were shown experimentally.
the zbi-effect may be capacitive, inductive, or complex, including both types of reactance. the nature of the reactance depends on the type of test material: pure metals showed capacitive reactance, graphite rods showed inductive reactance, and the alloys showed reactance of both types depending on the level of bias. the investigated objects can be classed as inertial nonlinear resistances. the zbi-effect is caused by the thermal interaction between the conductor and the environment under the superposition of the bias and a test signal. relatively simple equivalent circuits were found to describe the experimental data.

additional studies should be undertaken to better understand the behaviour of alloys and other composites under bias, especially the unexpected dynamic tcr properties. new possibilities arise for assessing the thermal conductivity of various materials; this requires the synthesis of knowledge in the fields of electrical and thermal measurements and the construction of specialised sensor devices.

references

[1] e. h. nicollian, j. r. brews, mos physics and technology, wiley, new york, 1982. isbn-13: 978-0471430797
[2] s. taibl, g. fafilek, j. fleig, impedance spectra of fe-doped srtio3 thin films upon bias voltage: inductive loops as a trace of ion motion, nanoscale 8 (2016), pp. 13954–13966. doi: 10.1039/c6nr00814c
[3] n. kumar, e. a. patterson, t. frömling, d. p. cann, dc-bias dependent impedance spectroscopy of batio3–bi(zn1/2ti1/2)o3 ceramics, j. mater. chem. c 4 (2016), pp. 1782–1786. doi: 10.1039/c5tc04247j
[4] z. b. stoynov, b. m. grafov, b. s. savova-stoynov, v. v. elkin, electrochemical impedance, nauka, moscow, 1991. isbn: 5-02001945-3
[5] s. baltianski, low frequency bias-induced impedance, 24th imeko tc4 int. symp. / 22nd int. workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020, pp. 423-428. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-79.pdf
[6] e. barsoukov, j. r.
macdonald (eds.), impedance spectroscopy: theory, experiment, and applications, 2nd ed., john wiley & sons, new jersey, 2005. isbn: 978-0-471-64749-2
[7] m. e. orazem, b. tribollet, electrochemical impedance spectroscopy, 2nd ed., john wiley & sons, new jersey, 2008. isbn: 9780470041406
[8] a. lasia, electrochemical impedance spectroscopy and its applications, springer, new york, 2014, pp. 1–367. doi: 10.1007/978-1-4614-8933-7
[9] f. w. grover, inductance calculations: working formulas and tables, dover phoenix editions, 2004. isbn-10: 0486495779
[10] n. murer, installation and configuration manual for vmp-300-based instruments and boosters. online [accessed 05 june 2021] https://www.biologic.net/documents/vmp300-based-manuals/
[11] p. baranov, v. borikov, v. ivanova, b. b. duc, s. uchaikin, c. y. liu, lock-in amplifier with a high common-mode rejection ratio in the range of 0.02 to 100 khz, acta imeko 8(1) (2019), pp. 103–110. doi: 10.21014/acta_imeko.v8i1.672
[12] s. s. baltyanskii, measuring the parameters of physical objects by identifying electrical models, meas. tech. 43 (2000), pp. 763–769. doi: 10.1023/a:1026645722396
[13] f. m. janeiro, p. m. ramos, gene expression programming and genetic algorithms in impedance circuit identification, acta imeko 1(1) (2012), pp. 19-25. doi: 10.21014/acta_imeko.v1i1.16
[14] s. baltianski, impedance spectroscopy: separation and asymptotic model interpretation, xxi imeko world congr. measurement res.
ind., prague, czech republic, 30 august – 4 september 2015, pp. 492-497. online [accessed 05 june 2021] https://www.imeko.org/publications/wc-2015/imeko-wc-2015-tc4-101.pdf
[15] e. barsoukov, j. h. jang, h. lee, thermal impedance spectroscopy for li-ion batteries using heat-pulse response analysis, j. power sources 109 (2002), pp. 313–320. doi: 10.1016/s0378-7753(02)00080-0
[16] m. swierczynski, d. i. stroe, t. stanciu, s. k. kær, electrothermal impedance spectroscopy as a cost efficient method for determining thermal parameters of lithium ion batteries: prospects, measurement methods and the state of knowledge, j. clean. prod. 155 (2017), pp. 63–71. doi: 10.1016/j.jclepro.2016.09.109

using coverage path planning methods for car park exploration

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, pp. 15-27

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 15

anna barbara ádám1, lászló kocsány1, emese gincsainé szádeczky-kardoss1
1 department of control engineering and information technology, budapest university of technology and economics, budapest, hungary

section: research paper

keywords: car park exploration; coverage path planning; parking assistant system

citation: anna barbara ádám, lászló kocsány, emese gincsainé szádeczky-kardoss, using coverage path planning methods for car park exploration, acta imeko, vol. 10, no.
3, article 5, september 2021, identifier: imeko-acta-10 (2021)-03-05

section editor: bálint kiss, budapest university of technology and economics, hungary

received january 13, 2021; in final form september 17, 2021; published september 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: anna barbara ádám, e-mail: annadam97@gmail.com

1. introduction

with an increasing number of vehicles on the roads, it is becoming difficult to find a free parking space. several sensor-based parking assistant systems have been developed in the past decade to make it easier to find a free parking space in busy areas, such as city centres and shopping malls. these systems mainly rely on sensors installed in each parking space that can detect the presence of a car: pressure sensors measure its weight, while magnetic, infrared or ultrasonic sensors sense the car body or determine whether something is in the examined area. the main problem with these parking systems is the need for extra infrastructure and sensors. as most car parks are not equipped with sensors indicating the occupancy of the parking spaces, a vehicle must drive past a parking space to be able to detect whether it is free [1]. there are also internet-of-things (iot) systems, which involve not only the signals of the sensors but also mobile applications [2]. these systems can navigate the driver to the free parking spaces in the shortest possible time.

the main purpose of this paper is to present a car park exploration method, free of installed sensors, that navigates the vehicle to all the possible free parking spaces. autonomous vehicles are able to detect the free parking spaces with sensors installed in the vehicle (e.g. lidar [3]).
as the coverage path planning (cpp) problem is similar to the car park exploration problem, the core concepts of cpp algorithms can be used.

this paper is organised as follows. section 2 presents the most commonly used parking systems and provides examples, while the formulation of the car park exploration problem can be found in section 3. section 4 presents some cell decomposition and grid-based cpp methods and explains how they are used for car park exploration; an improved version of trapezoidal cell decomposition can also be found in this section. different traversal methods are presented in section 5, while section 6 introduces a cost function to grade the free parking spaces. the presented exploration methods are compared in section 7, and the method for using them in multi-storey car parks is presented in section 8. section 9 derives the conclusions from the different traversals and presents possible avenues for future work.

abstract: with the increasing number of vehicles on the roads, finding a free parking space has become a time-consuming problem. traditional car parks are not equipped with occupancy sensors, so planning a systematic traversal of a car park can ease and shorten the search. since car park exploration is similar to coverage path planning (cpp) problems, the core concepts of cpp algorithms can be used. this paper presents a method that divides maps into smaller cells using trapezoidal cell decomposition and then plans the traversal using wavefront algorithm core concepts. this method can be used for multi-storey car parks by planning the traversal of each floor separately and then the path from one floor to the next. several alternative exploration paths can be generated by taking different personal preferences into account, such as the length of the driven route and the proximity to preferred locations. the planned traversals are compared by step number, the cell visitedness ratio, the number of visits to each cell and the cost function.
the comparison of the methods is based on simulation results.

2. parking assistant systems

the literature proposes various solutions for autonomous parking with sensors installed in car parks. these systems require a centralised parking system in which a server stores the occupancy of the parking spaces based on the sensor signals. when a vehicle requests a parking space, the server reserves one for the vehicle.

cctv systems are widely available in car parks; consequently, image processing algorithms are able to detect the occupancy of a parking space based on camera signals. for example, athira et al. [4] present an optical character recognition system that detects occupied parking spaces based on cctv signals. another solution is the availability of iot-enabled cities, often referred to as smart cities, which can provide information about the availability of parking spaces. if a centralised parking system server is available, the vehicle is able to request information about the occupancy of parking spaces from the server. al-turjman et al. [4] present a survey of iot-enabled cities, including use-case practices such as the smart payment system, the parking reservation system and the e-parking system.

there are several commercially available solutions for smart parking. in hungary, two such solutions can be found in budapest. the first is parkl [5], a smartphone application providing information about the location of car parks and making cashless payment possible. parkl does not provide the exact location of the parking spaces but the location of parking zones in the city where possible parking spaces can be found. in comparison, parker [6] (developed by smart lynx ltd.) provides information about the exact occupancy of about 1,500 parking spaces in the city centre.
this solution uses preinstalled sensors to indicate the occupancy of a given parking space to the driver. parker also provides a cashless payment service for its users.

the algorithm presented in this paper provides a car park exploration method using cpp. car park exploration is required because no signals from sensors preinstalled in the car park are used. a complete system is therefore required to perform the car park search, from the creation of the exploration path to detecting appropriate parking spaces and performing the parking manoeuvre. this system is called the autonomous parking system and consists of four subsystems, as the parking task can be broken down into four subtasks.

the first task of the autonomous parking system is to provide an exploration path for the vehicle; the focus of this paper is to propose a solution for this subsystem by applying cpp algorithms. this subsystem provides multiple goal configurations for the vehicle. these goal configurations are required because, without preinstalled sensors, no single goal configuration is known in advance. the second subsystem is a detector subsystem; its main task is to scan the environment and detect parking spaces while the vehicle is driving along the exploration path [3]. in order to detect parking spaces, a sensor must be mounted on the vehicle, and the first subsystem should consider the sensor parameters during the planning of the exploration path. when a parking space is found, the third and fourth subsystems plan and execute the parking manoeuvre [7], [8].

3. car park exploration

the goal of car park exploration is to plan a path leading to all the possible free parking spaces. the binary map (figure 2) of the car park (figure 1) is known, and the parking spaces are treated as obstacles. while following the planned path, a sensor (e.g. lidar) searches for a suitable free parking space.
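the requirement that the planned path bring every parking space within sensor range can be expressed as a simple geometric check. a minimal sketch, assuming the points to be seen and the path are given as sampled (x, y) coordinates; the names are illustrative:

```python
import numpy as np

def covered(c_vis, path, delta):
    """true if every point that must be seen (c_vis) lies within sensor
    range delta of at least one sample of the path."""
    c_vis = np.asarray(c_vis, dtype=float)
    path = np.asarray(path, dtype=float)
    # pairwise euclidean distances between every target point and path sample
    d = np.linalg.norm(c_vis[:, None, :] - path[None, :, :], axis=2)
    return bool((d.min(axis=1) <= delta).all())

# a straight sweep along a lane covers the spaces within delta of the lane
lane = [(x, 0.0) for x in range(11)]
print(covered([(5.0, 0.5)], lane, delta=1.0))   # True
print(covered([(5.0, 3.0)], lane, delta=1.0))   # False
```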
in this formulation, 𝐶 ⊂ ℝ² defines the workspace of car park exploration and 𝒜 denotes the vehicle. the state of the vehicle is 𝑞 = [𝑥 𝑦]ᵀ, where 𝒜(𝑞) ⊂ 𝐶 and [𝑥 𝑦]ᵀ denotes the position of the vehicle in a fixed frame (the orientation of the vehicle is not taken into account). the workspace consists of obstacles (𝐶obs ⊂ 𝐶) and free space (𝐶free = 𝐶 ∖ 𝐶obs), some of which needs to be visited (𝐶vis ⊆ 𝐶free). the vehicle can move only in free space (∀𝑞 ∈ 𝐶free).

the vehicle moves on a collision-free path τ, where 𝑠 ∈ ℝ is a scalar path parameter (𝑠 ∈ [0, 𝑇], with 𝑇 the length of the whole path):

τ: 𝑠 ↦ 𝑞, ∀𝑠 ∈ [0, 𝑇]: τ(𝑠) ∈ 𝐶free . (1)

𝐿(𝑞) ⊂ 𝐶 denotes the points that are inside the range δ of the sensor, which detects the free parking spaces:

𝐿(𝑞) = {𝑧 ∈ 𝐶free | ‖𝑞 − 𝑧‖ ≤ δ} , (2)

where ‖𝑞 − 𝑧‖ is the euclidean distance between points 𝑞 and 𝑧. the points seen while traversing the path are

𝐿(τ(𝑡)) = ⋃_{𝑠∈[0,𝑡]} 𝐿(τ(𝑠)) . (3)

the aim is to reach every position that should be visited during the exploration:

𝐶vis ⊆ 𝐿(τ(𝑇)) . (4)

other constraints that should be considered are as follows:
• the start position is τ(0) = 𝑞init.
• a cost 𝑤1(τ) can be assigned to the path (e.g. the length of the path).
• preferred positions are provided for and considered first; a cost 𝑤2(𝑞) can be assigned to a position 𝑞 ∈ 𝐶free (e.g. the distance from the target position).
• the traversal can be interrupted when a condition is met (a parking space is detected with the lidar).
• when stopping while driving (at some point with path parameter 𝑠1), the cost of the traversal is the personal-preference-weighted (α) sum of 𝑤1 and 𝑤2: 𝑤1(τ) + α 𝑤2(τ(𝑠1)) (the cost of the path up to that point plus the cost of the position).
• there might be constraints on the order of the configurations in the path τ (e.g. one-way streets).

figure 1. map of a car park.

figure 2. binary map of a car park.

4.
using coverage path planning methods

the purpose of cpp methods is to plan an obstacle-free path that reaches every free point of the given configuration. this section presents the computational problems related to car park traversal, explains the idea behind using cpp algorithms for the traversal, provides an overview of the most commonly used cell decomposition and grid-based methods, and finally presents the methods chosen for implementation.

4.1. related computational problems

cpp is similar to the travelling salesman problem [10], in which the salesman has to visit all the cities via the shortest route possible and return to the beginning. the cities are represented as nodes on a graph and the routes between them are the edges. the travelling salesman problem is np-hard, which means it is at least as hard as any np problem, so no polynomial-time algorithm is known for it.

during path planning, an important factor is the exploration of the environment. two computational geometric problems are related to this: the museum problem [11] asks for the fewest guards required to observe the whole museum, while in the watchman problem [12], only the map of the region is given, and the task is to plan the shortest possible path between the obstacles so that the watchman can guard the entire area.

4.2. cell decomposition-based path planning

there are several cell decomposition-based path planning methods that divide a map into cells. exact cellular decomposition methods [12] divide the free space into various-sized non-overlapping cells; the union of the cells covers the whole free configuration space. trapezoidal cell decomposition [14] can only be used in polygonal environments, as it extends rays from the vertices of the obstacles. these rays and the edges of the obstacles form the cell borders, dividing the map into trapezoidal or triangular cells. an example of this method can be seen in figure 3.
boustrophedon (greek: 'ox turning') decomposition [15] extends rays only from the entry and exit points of the obstacles, so fewer rays are extended than in trapezoidal cell decomposition and, consequently, the cells have a larger area. figure 4 shows an example of this method. when applying morse decomposition [12], the critical points of a smooth function, called the morse function, indicate the boundaries of the cells (see figure 5). greedy convex polygon decomposition [18] can be used when the obstacles are polygonal. this method consists of two types of cuts:
• a single cut: a cut from a nonconvex vertex to an existing edge or another cut,
• a matching cut: a cut connecting two nonconvex vertices.

first, all the matching cuts are made on the nonconvex vertices, then the single cuts are made for the unmatched vertices. in the example in figure 6, the matching cuts are green and the single cuts are red. the cell boundaries are the set of cuts and the edges of the obstacles.

after dividing the map, the adjacency matrix of the cells can be created. two cells are adjacent if they have a common boundary, and the boundary has a given number of common points. the purpose of cell decomposition-based methods is to visit every cell exactly once, although this is not always possible. if the adjacency matrix of the cells is known, a traversal can be planned, as it is known which cell can be visited from the current one. after creating the traversal, a path can be planned that leads through the cells in the given order, reaching every free point of the configuration.

figure 3. example of trapezoidal cell decomposition [14].

figure 4. example of boustrophedon decomposition [16].

figure 5. example of morse decomposition [17].

4.3. grid-based path planning

in contrast to the cell decomposition-based methods, grid-based methods divide the map into same-sized cells, called a grid.
the cells are classified into two groups: those that contain obstacle points and those that do not. the wavefront algorithm [9] assigns distance values to every cell in the map: starting from the initial cell, which has a distance value of 0, each neighbouring cell is assigned a distance value one larger. the traversal of the cells is based on the distance values: the neighbouring unvisited cell with the largest distance value is the following cell. if a number of unvisited neighbouring cells have the same distance value, the following cell is selected randomly. an example of this method can be seen in figure 7.

the spiral spanning-tree coverage method [21] builds up a graph whose nodes represent the centres of the obstacle-free cells and whose edges represent the lines between the neighbouring cell centres. the cells are divided into four subcells, which are classified into two groups: those with four unvisited subcells (called new cells) and those with at least one visited subcell (called old cells). initially, every cell is unvisited; the algorithm starts from the initial cell and marks it as an old cell. the initial cell is the root of the spanning tree. in the subsequent steps, every cell neighbouring the current cell is tested to see whether it is a new cell. if the current cell has a neighbouring new cell, a spanning-tree edge is added from the current cell to the neighbouring cell, the algorithm moves to a subcell of the neighbouring cell and the centre of the neighbouring cell is added to the tree as a node. if there is a backwards move from the current cell to a subcell of the parent cell along the right-hand edge of the spanning tree, the algorithm terminates. figure 8 provides an example of this method.

4.4. using coverage path planning methods for car park exploration

in order to plan the traversal of a car park, the map of the car park should be divided into smaller regions.
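the distance-value assignment of the wavefront algorithm described in section 4.3 amounts to a breadth-first expansion from the initial cell. a minimal sketch on a binary occupancy grid (the layout, with 1 marking an obstacle cell, is an assumption):

```python
from collections import deque

def wavefront(grid, start):
    """assign each free cell its distance value: the start cell gets 0 and
    every free neighbour one more than the cell it was reached from."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

the traversal then repeatedly steps to the unvisited neighbour with the largest distance value, breaking ties at random; this is the rule the paper adapts by working with preference values instead of raw distance values.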
as most car parks consist of polygonal obstacles, trapezoidal cell decomposition can be used to divide the map. the cells can be treated as a grid, so wavefront algorithm core concepts can be used to plan the traversal of the cells. the main advantage of the wavefront algorithm is that it can take personal preferences into account by modifying the distance values of the cells.

4.5. rectangular cell decomposition

as rectangular decomposition is based on trapezoidal cell decomposition, it can only be used in polygonal environments. if the map contains nonpolygonal obstacles, the bounding boxes of the obstacles should be taken into account when decomposing the map. trapezoidal cell decomposition decomposes the map along only one axis (𝑥 or 𝑦), so the decomposed map may contain cells covering large areas. the main disadvantage of this is that the traversal of such a cell is not unequivocal, so a path should be planned inside the cell from which all of the free parking spaces can be seen.

rectangular cell decomposition decomposes the map along both the 𝑥 and 𝑦 axes; the final cells are the intersections of the cells decomposed by these axes. the final cells are smaller and rectangular, which is the main advantage of this method: every cell has only one neighbouring cell in each direction. an example of a decomposed map can be seen in figure 9, where the coloured rectangles indicate the cells.

the adjacency matrix of the cells can be created after dividing the map. this matrix is an 𝑛 × 4 matrix, where 𝑛 is the number of cells, and its rows store the indices of the adjacent cells of every cell in each direction. if two cells have a common boundary and the vehicle can pass from one cell to the other, these cells are adjacent.
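the 𝑛 × 4 adjacency matrix can be sketched as follows, with −1 as the dummy index for map edges, obstacles and forbidden directions; the direction order (north, east, south, west) and the helper names are assumptions, not taken from the paper:

```python
DUMMY = -1  # stands in for map edges, obstacles and forbidden directions

def apply_one_way(neighbours, forbidden):
    """neighbours is the n x 4 adjacency matrix: row i lists the indices of
    cell i's neighbours in the order north, east, south, west, with DUMMY
    where no passable neighbour exists. forbidden is a set of (i, j) pairs:
    the transition i -> j is removed while j -> i stays allowed."""
    adj = [list(row) for row in neighbours]
    for i, row in enumerate(adj):
        for d, j in enumerate(row):
            if j != DUMMY and (i, j) in forbidden:
                row[d] = DUMMY
    return adj

# 2 x 2 map, cells numbered 0 1 / 2 3; a one-way section allows only 1 -> 0
base = [[DUMMY, 1, 2, DUMMY],
        [DUMMY, DUMMY, 3, 0],
        [0, 3, DUMMY, DUMMY],
        [1, DUMMY, DUMMY, 2]]
adj = apply_one_way(base, {(0, 1)})
```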
cells neighbouring obstacles and cells on the edges of the map do not have four neighbouring cells, so the neighbouring obstacles and edges are considered as their neighbours (an implementational solution might be to store these false neighbours as dummy indices, e.g. −1). if there are one-way road sections, moving from one cell to the other is permitted but moving in the opposite direction is prohibited, so the 𝑖th cell is adjacent to the 𝑗th cell but not vice versa. this means that the 𝑖th row of the adjacency matrix contains the index 𝑗 in the appropriate column, but the 𝑗th row contains a dummy index instead of the index 𝑖. more details of this algorithm can be found in [20].

figure 6. example of greedy convex decomposition [18].

figure 7. example of a wavefront algorithm [19].

figure 8. example of spiral spanning-tree coverage [22].

5. traversal of the cells

several different traversal methods are presented in this section. the map presented in section 5.1 is used as an illustration.

5.1. map of the car park

the car park consists of only one floor, with the black areas representing the obstacles (the possible free parking spaces are also considered as obstacles) and the red x representing the preferred location. the map of this car park can be seen in figure 10, the decomposed map (using rectangular decomposition) in figure 9 and the distance values assigned by the wavefront algorithm in figure 11. all the algorithms stop when the step number exceeds a given limit, which depends on the number of cells; the maximal step number in the following simulations is 57.

5.2. visitedness- and preference-based traversal

this method first visits the unvisited neighbouring cells (see figure 12). if a number of neighbouring cells are unvisited, the cell with the highest preference value is the following cell (see figure 13).
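this selection rule (together with the dynamic-preference variant of section 5.3) can be sketched as follows; the data layout matches the 𝑛 × 4 adjacency matrix with dummy index −1, and everything else (names, tie handling) is an illustrative assumption:

```python
def choose_next(cur, adj, preference, visits, dynamic_step=0):
    """pick the next cell from cur: unvisited neighbours first, and among
    the candidates the one with the highest preference value. if every
    neighbour has already been visited, fall back to the highest-preference
    neighbour. with dynamic_step > 0 the chosen cell's preference value is
    decreased on each visit (section 5.3 uses a decrement of 2)."""
    candidates = [j for j in adj[cur] if j != -1]
    unvisited = [j for j in candidates if visits[j] == 0]
    pool = unvisited or candidates
    nxt = max(pool, key=lambda j: preference[j])
    visits[nxt] += 1
    preference[nxt] -= dynamic_step
    return nxt
```

repeating this step from the initial cell until the step limit is reached reproduces the greedy behaviour described in this section: high-preference areas are explored first, and without the dynamic decrement low-preference cells may never be reached.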
in the map in section 5.1, one cell remains unvisited. this results from the low preference values in that area: the algorithm visits the neighbouring cells of the unvisited cell only once, and the other cells have higher preference values, so they are chosen as the next cells of the traversal. a traversal designed by this method can be seen in figure 14.

figure 9. the decomposed map of the car park; the initial cell is cell number 10.

figure 10. map of the car park; the red x represents the preferred location.

figure 11. the distance values of the cells.

figure 12. the visitedness of the cells determines the following cell, which is in red.

figure 13. the preference value determines the following cell.

figure 14. the visitedness- and preference-based traversal of the car park.

5.3. visitedness- and preference-based traversal with dynamic preference

this method differs from the previous one by applying dynamic preference, meaning that the preference value of the cell visited last is decreased during the traversal. in this case, the preference values of the cells were decreased by 2 every time they were visited. as a result, every cell could be visited: the preference values of the cells visited first were decreased enough for the algorithm to select the cells with lower preference values as the next cells. figure 15 provides an example of this method.

5.4. visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells

a map may contain cells that are not adjacent to parking spaces, as they do not contain any points of 𝐶vis, so it is not necessary to visit them. in this case, the preference values of these cells can be changed to 0, and they can be marked as visited at the beginning. the algorithm will then visit these cells only in order to reach other cells. in figure 16, the white cells represent cells that do not need to be visited.

5.5.
preferenceand visitedness-based traversal this method selects the next cell based on its preference value (see figure 17). if a number of neighbouring cells have the same preference value, the algorithm selects the following one based on the visitedness of the cell (see figure 18). as can be seen in figure 19, the cells with the highest preference values are visited multiple times, while cells with lower preference values may remain unvisited. 5.6. preferenceand visitedness-based traversal with additional preference values as preferenceand visitedness-based traversal mainly selects the next cell based on its preference value, this method should be used in cases where the driver wants to park at a given location. additional preference values can also be applied to attract the traversal to the desired location. figure 20 provides an example of the applied additional preference values. the highest additional preference value (5) was added to the cell that contains the preferred location (see figure 10), so each neighbouring cell gets one smaller additional preference value until the fourth neighbouring cell (the range of the preference is 5). it can be seen in figure 21 that the algorithm navigates to the highly preferred area in the shortest possible path, and subsequently, the traversal only moves around one obstacle by traversing the cells with the highest preference values. figure 15. the visitednessand preference-based traversal of the car park with dynamic preference. figure 16. the visitednessand preference-based traversal of the car park with dynamic preference while avoiding unnecessary cells. figure 17. the preference value determines the following cell. figure 18. the visitedness of the cells determines the following cell. figure 19. preferenceand visitedness-based traversal. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 21 5.7. 
preferenceand visitedness-based traversal with additional preference values and dynamic preference the previous algorithm can also be applied when using dynamic preference values. in this case, the preference values of the visited cells are decreased by 2 every time they are visited. it can be seen in figure 22 that the algorithm visits the cells with high preference values, but then also visits the neighbouring cells. in this case, every cell is visited. this method might be the closest to human thinking. first, it searches for free parking spaces in the highly preferred area, then it goes a bit further and returns to the preferred location. finally, it also visits the cells that are furthest from the preferred cells. 5.8. making the traversal repeatable the presented traversals do not lead back to the initial position. it is possible that no free parking spaces were found during the first traversal of the car park, so the traversal should be repeated in order to find a suitable free parking space. there are two possible solutions to this problem: regenerate the traversal from the current cell as the initial cell or plan a path back to the initial cell from the current cell so that the original traversal can be repeated. 6. cost of the traversal the quality of the detected free parking space can be measured with a cost function. there are two main factors that should be considered: the time spent searching for a free parking space (which is proportional to the driven-route length 𝑤1 in section 3) and the distance from the preferred location (𝑤2 in section 3). there is a trade-off between these two factors, but the quality of a parking space also depends on personal preferences: the importance of the proximity to a preferred location or a short driven-route length. 
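The dynamic-preference mechanism of Sections 5.3, 5.4 and 5.7 is a small modification of the traversal loop. A minimal sketch with hypothetical names: the next-cell rule is passed in as a callback and, for brevity, the toy rule below picks the highest-preference neighbour only, so the decrement of 2 is what eventually pulls the traversal to the low-preference cell:

```python
def traverse(start, neighbours, pref, next_cell, max_steps, decrement=2):
    """Run a traversal with dynamic preference: before each move the
    current cell's preference value is decreased by `decrement`, so
    frequently visited cells gradually lose their attraction."""
    visited = {start}
    route = [start]
    current = start
    for _ in range(max_steps):
        pref[current] = max(0, pref[current] - decrement)  # dynamic preference
        current = next_cell(current, neighbours, visited, pref)
        visited.add(current)
        route.append(current)
    return route

# Toy chain of three cells; the rule ignores visitedness on purpose,
# so only the decrement makes cell 2 reachable.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
pref = {0: 9, 1: 5, 2: 4}
rule = lambda cur, nb, vis, p: max(nb[cur], key=lambda n: p[n])
route = traverse(0, neighbours, pref, rule, max_steps=6)
print(route)  # [0, 1, 0, 1, 0, 1, 2]
```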
The RouteLength (w1 in Section 3) in (5) is the sum of the distances between the cell centres during the traversal; the Nearness (w2 in Section 3) to the preferred location in (6) is the preference-weighted (prefVal) sum of the Euclidean distances measured from the preferred locations; and the cost function (cost), calculated for each cell, is a personal-preference (α) weighted sum of the RouteLength and the Nearness (7). The smaller the α value, the more a closer parking space is favoured.

RouteLength = Σ_{i=1}^{N} length(Route_i) ,   (5)

Nearness = Σ_{i=1}^{P} ( prefVal_i / Σ_{j=1}^{P} prefVal_j ) · dist(position, prefLoc_i) ,   (6)

cost = α · RouteLength + (1 − α) · Nearness ,   (7)

where N is the number of route sections, P is the number of preferred locations and α is the weighting factor. The quality of a free parking space can be determined by the value of the cost function: the driver should establish a threshold value, and free parking spaces with lower costs are adequate. The better a parking space is, the lower the value of the cost function.

7. Comparison of the traversals

In this section, the previously presented methods are compared based on the step number (Section 7.1), the cell visitedness ratio (Section 7.2), the number of visits to each cell (Section 7.3) and the cost function (Section 7.4).

7.1. Step number

The step number gives the number of steps needed to visit every cell at least once; traversing from one cell to another is considered one step. As the algorithm stops when a given number of steps is exceeded, the maximum number of steps in this map is 57. Table 1 shows how many steps are needed when the different methods are applied. It can be seen that when the visitedness- and preference-based methods (Sections 5.2–5.4) were applied, fewer unvisited cells remained. When there were cells marked as unnecessary to visit (Section 5.4), only 30 steps were sufficient to visit all the other cells, and in the end, two out of three marked cells remained unvisited. When dynamic preference (Sections 5.3, 5.4 and 5.7) was applied, fewer steps were needed and fewer cells remained unvisited.

Figure 20. The additional preference values.
Figure 21. Preference- and visitedness-based traversal with additional preference values.
Figure 22. Preference- and visitedness-based traversal with additional preference values and dynamic preference.

7.2. Cell visitedness ratio

The cell visitedness ratio represents the number of visited cells relative to the total number of cells at each step. Figure 23 shows the visitedness ratio for the different methods using different colours; the cells marked as unnecessary in Section 5.4 are also included in the number of cells. The legend gives the section number in which each traversal method is presented. At first, every cell is being visited for the first time, so the ratio grows at each step. When applying the preference- and visitedness-based method without dynamic preference (Sections 5.5 and 5.6), the final value of the ratio is lower than in the other cases: these methods visit only the cells with high preference values (60 %–75 % of the cells). The other methods visit nearly all the cells (more than 80 % of the cells). Applying dynamic preference (Section 5.7) makes visiting the remaining unvisited cells more probable. When visitedness- and preference-based traversal (Sections 5.2–5.4) was used, the algorithm visited the unvisited neighbouring cells first, so fewer cells remained unvisited.

7.3. The number of visits to each cell

The following diagrams show the number of visits to each cell. The numbers on the horizontal axis represent the indices of the cells shown in Figure 9.

Visitedness- and preference-based traversal
This method (Section 5.2) visits the unvisited cells first. Cells with high preference values are visited more frequently (5–6 times), but there is only one cell (cell 14) that is unvisited.
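The two statistics of Sections 7.2 and 7.3 are straightforward to compute from a recorded traversal. A sketch with a made-up four-cell toy traversal (names are illustrative, not from the paper):

```python
from collections import Counter

def visitedness_ratio(traversal, n_cells):
    """Ratio of distinct visited cells to all cells after each step."""
    seen, ratios = set(), []
    for cell in traversal:
        seen.add(cell)
        ratios.append(len(seen) / n_cells)
    return ratios

# Toy traversal over a 4-cell map; Counter gives the visits per cell.
traversal = [0, 1, 2, 1, 0]
print(visitedness_ratio(traversal, 4))  # [0.25, 0.5, 0.75, 0.75, 0.75]
print(Counter(traversal))               # cells 0 and 1 twice, cell 2 once
```

Cell 3 is never visited in the toy traversal, so the ratio saturates at 0.75, just as the curves in Figure 23 flatten for methods that leave cells unvisited.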
For example, the preference value of cell 13 is 9 (see the orange bar in Figure 24), and this cell was visited 6 times (see the blue bar in Figure 24). The diagram showing the number of visits to each cell can be seen in Figure 24.

Visitedness- and preference-based traversal with dynamic preference
By applying dynamic preference (Section 5.3), all the cells become visited, and the maximum number of visits to a cell is four. Most of the cells are visited once or twice, so the application of dynamic preference decreases the number of visits (Figure 25).

Visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells
There can be cells in a map that do not need to be visited because there are no parking spaces around them. In this case, they are only visited if the traversal goes through them in order to reach another cell (Section 5.4). In Figure 26, the only unvisited cells are unnecessary ones (unnecessary cell indices: 4, 5, 19). Due to dynamic preference, the other cells are visited only once or twice.

Preference- and visitedness-based traversal
This method (Section 5.5) visits the cells with a high preference value first and more frequently, because the algorithm selects the next cell based on the preference values of the neighbouring cells. In Figure 27, it can be seen that most of the visited cells are visited 5–6 times, but a large number of cells remain unvisited.

Figure 23. Cell visitedness ratio of the presented methods.
Figure 24. Number of visits to each cell (method presented in Section 5.2).
Figure 25. Number of visits to each cell (method presented in Section 5.3); the preference values shown in this figure are the original values, not the decreased values.

Table 1. The number of steps needed when applying the different methods.

Method (section)            5.2   5.3   5.4   5.5   5.6   5.7
Number of steps              57    56    30    57    57    42
Number of unvisited cells     1     0     2     5     8     0

Preference- and visitedness-based traversal with additional preference values
In Figure 28, it can be seen that, due to the application of additional preference values, fewer cells are visited, but those cells are visited 6 times. This algorithm (Section 5.6) navigates to the most preferred location (an additional preference with a range of 5 and a value of 5 was applied in the cell with index 12) along the shortest possible route (these cells are visited only once), and then it only moves around the preferred area.

Preference- and visitedness-based traversal with additional preference values and dynamic preference
When applying dynamic preference (Section 5.7), all the cells are visited, and each cell is visited a maximum of 3 times instead of 5–6 times (Figure 29).

7.4. Cost function

The algorithm can decide whether a free parking space is suitable based on a cost function. The defined cost function is based on two factors: the driven-route length and the distance from the preferred location. The estimated route length of a method depends on the step number; the higher the step number, the longer the route length (see Table 2). The route length is measured in pixels (the real distance depends on the graphic scale of the map). The other aspect of the cost function is the distance from the preferred location. The distance is calculated at every step, and it depends strongly on the traversal method. The minimum distance from the preferred location is 50 pixels in this example.

Visitedness- and preference-based traversal
This method (Section 5.2) visits the unvisited cells first, reaching the possible minimum distance only 3 times during the traversal. It can be seen in Figure 30 that the distance is between 180 and 330 pixels most of the time.
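Equations (5)–(7) of Section 6 translate directly into code. In the sketch below the cell centres and the preferred location are made up, with coordinates in pixels as in the paper's maps; the names mirror the paper's symbols:

```python
import math

def dist(a, b):
    """Euclidean distance between two points given in pixels."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(centres):
    """Eq. (5): sum of the distances between consecutive cell centres."""
    return sum(dist(a, b) for a, b in zip(centres, centres[1:]))

def nearness(position, pref_locs, pref_vals):
    """Eq. (6): preference-weighted sum of the Euclidean distances
    from the preferred locations."""
    total = sum(pref_vals)
    return sum(v / total * dist(position, loc)
               for v, loc in zip(pref_vals, pref_locs))

def cost(alpha, rl, near):
    """Eq. (7): weighted trade-off between route length and nearness."""
    return alpha * rl + (1 - alpha) * near

centres = [(0, 0), (3, 4), (3, 8)]      # hypothetical traversal, in pixels
rl = route_length(centres)              # 5 + 4 = 9
near = nearness((3, 8), [(0, 8)], [5])  # single preferred location
print(cost(0.5, rl, near))              # 0.5 * 9 + 0.5 * 3 = 6.0
```

A smaller alpha weights the Nearness term more heavily, matching the remark in Section 6 that a small α favours a closer parking space.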
Visitedness- and preference-based traversal with dynamic preference
Figure 31 shows that the shape of the distance function is similar to the function in Figure 30. This function also has three minimum points; due to dynamic preference (Section 5.3), only the visiting order of the cells is different, and the unvisited cells are also visited.

Figure 26. Number of visits to each cell (method presented in Section 5.4); the preference values shown in this figure are the original values, not the decreased values.
Figure 27. Number of visits to each cell (method presented in Section 5.5).
Figure 28. Number of visits to each cell (method presented in Section 5.6).
Figure 29. Number of visits to each cell (method presented in Section 5.7); the preference values shown in this figure are the original values, not the decreased values.

Table 2. The route length of each method.

Method (section)    5.2    5.3    5.4    5.5    5.6    5.7
Number of steps      57     56     30     57     57     42
Route length       6752   6608   3184   6555   6671   4379

Visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells
This method (Section 5.4) avoids the cells that have no parking spaces next to them, so the step number is lower than in the previous methods. It can be seen in Figure 32 that the traversal visits the cell nearest to the preferred location only once, but the length of this traversal is much shorter due to the smaller number of steps.

Preference- and visitedness-based traversal
This method (Section 5.5) visits cells with a high preference value more often. The preferred location is far from the initial cell, so it has a high distance value. Even without an additional preference value, the algorithm visits the preferred location 5 times, as can be seen in Figure 33.
Preference- and visitedness-based traversal with additional preference values
By applying additional preference values (Section 5.6), the traversal reaches the minimal distance from the preferred location 6 times (Figure 34). It can also be seen that the traversal repeats the same path, as one part of the function is repeated 5 times.

Preference- and visitedness-based traversal with additional preference values and dynamic preference
As a result of dynamic preference (Section 5.7), the cells that are further from the preferred location are also visited. The traversal has only three minimum points (Figure 35), because when a cell becomes visited, its preference value is decreased.

Figure 30. Distance from the preferred locations (method presented in Section 5.2).
Figure 31. Distance from the preferred locations (method presented in Section 5.3).
Figure 32. Distance from the preferred locations (method presented in Section 5.4).
Figure 33. Distance from the preferred locations (method presented in Section 5.5).
Figure 34. Distance from the preferred locations (method presented in Section 5.6).
Figure 35. Distance from the preferred locations (method presented in Section 5.7).

8. Handling multi-storey car parks

As there is a considerable lack of free parking spaces in city centres, an increasing number of multi-storey car parks have been constructed to ensure there are enough free parking spaces to meet demand. The traversal of these car parks is similar to that in the presented methods. The storeys of a multi-storey car park can be handled individually: the traversal of each floor is planned independently from the other floors, with the only extra requirement being to minimise the transitions between levels. To plan the traversal of a floor, the map of the floor must be known. Some floors are preferred over others, so these floors should be traversed first.
If there is no free parking space on the preferred floor, the driver must either go around again or move to another floor. The traversal to another floor can be forced by setting the preference values of all the cells to zero except for the cell representing the ramp to the other floor, which receives a high preference value with a large range. Figure 36 shows the map of the first floor of a car park, and Figure 37 shows its second floor. The car park entrance and the ramp to the next floor are also indicated on the maps. The maps of the floors are the same; only the entrance and exit locations differ. The planned traversals for the whole car park can be seen in Figure 38 to Figure 40. No additional preference values are used during the traversal, and the traversal is planned based on visitedness and preference, with dynamic preference values. The traversal of the first floor (Figure 38) is much longer than the traversal of the second floor (Figure 40): the first floor is traversed in 56 steps, but the second floor needs only 33 steps. The traversal method of the floors is the same; only the entrance locations are different. This example shows how much the traversal depends on the location of the entrance point.

9. Conclusion

As searching for a free parking space is a time-consuming task, the aim of this paper was to design different car park exploration strategies. The implemented algorithms use the core concepts of CPP algorithms, which is possible because the car park exploration problem is similar to CPP problems. CPP algorithms are used to plan the paths of vacuum-cleaner robots, lawnmower robots and robots for other purposes, which are designed to reach every free point of a configuration in the shortest possible time while avoiding obstacles. During car park exploration, the vehicle does not have to reach every free point of the map; it only has to drive by all the possible free parking spaces.
The car park map is decomposed using trapezoidal cell decomposition. This method leads to cells with large areas, and the planned traversal contains reversals. If the map is decomposed along both the x and y axes, the created cells are smaller, and every cell has only one neighbouring cell in each direction. In this case, the traversal can take personal preferences into account by using the wavefront algorithm. The distance values can be modified in order to attract the traversal to a given location (e.g. entrances, lifts, etc.). The traversal can then be planned by taking the preference values and the visitedness of the cells into account. The first method presented only takes the preference values into account, so the traversal does not visit every cell on the map, only the ones with high preference values. Another method chooses the next cell based on the preference value but, in the case of equal preference values, selects an unvisited cell as the next one. The third, wavefront-algorithm-based traversal is based on visitedness and preference, and it visits the unvisited neighbouring cells first; this method is more likely to visit all the cells on the map. In order to compare the presented methods, quality characteristics were defined: the step number, the cell visitedness ratio at each step, the number of visits to each cell and a cost function. The step number shows how many steps are needed in order to visit every cell; the algorithm stops if the step number exceeds a given limit. The cell visitedness ratio shows the ratio of the visited cells at each step. The cost function is based on two factors: the driven-route length and the weighted sum of the distances from the preferred locations.

Figure 36. Map of the first floor.
Figure 37. Map of the second floor.
Figure 38. Traversal of the first floor.
Figure 39. Traversal of the first floor to the exit ramp.
Figure 40. Traversal of the second floor.
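The wavefront distance assignment recalled here (see Figure 11) is essentially a breadth-first pass from the goal cell. A minimal sketch on a toy occupancy grid, with assumed names; modifying the resulting values to attract the traversal towards entrances or lifts, as described above, would be a post-processing step:

```python
from collections import deque

def wavefront(grid, goal):
    """Assign each free cell its step distance from the goal cell.
    Cells marked 1 are obstacles and keep the value None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
print(wavefront(grid, (0, 0)))
# [[0, 1, 2], [1, None, 3], [2, 3, 4]]
```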
If a free parking space is found, the decision as to whether it is suitable is based on the cost function. The implemented algorithms were tested in simulations, the results of which are detailed in Section 5. The simulation results demonstrate that the different methods can be used in different situations. The visitedness- and preference-based traversals visit nearly every cell on the map; if dynamic preference is applied, there is a higher chance that every cell becomes visited. It is also possible that there are cells that do not need to be visited. In this case, their preference values can be set to 0, and they are marked as visited at the beginning; these cells only become visited when the traversal passes through them to reach other cells. Visitedness- and preference-based methods should be used when it is important to find a free parking space as soon as possible. Preference- and visitedness-based methods visit the cells with high preference values more frequently: if additional preference values are applied, the traversal moves around the preferred area, and if dynamic preference is applied, the traversal visits the preferred cells first and then moves further away from the preferred area. Preference- and visitedness-based methods should therefore be used if it is important to park near the preferred location. Future work will include testing the implemented methods in a real environment and handling situations in which multiple vehicles search for free parking spaces at the same time.

Acknowledgement

The research reported in this paper and carried out at the Budapest University of Technology and Economics has been supported by the National Research, Development and Innovation Fund (TKP2020 Institution Excellence Subprogram, grant no. BME-IE-MIFM) based on the charter issued by the National Research, Development and Innovation Office under the auspices of the Ministry for Innovation and Technology.

References

[1] F. Zafari, S. A. Mahmud, G. M. Khan, M. Rahman, H. Zafar, A survey of intelligent car parking system, J. of Applied Research and Technology 11 (2013), pp. 714-726. DOI: 10.1016/s1665-6423(13)71580-3
[2] F. Al-Turjman, A. Malekloo, Smart parking in IoT-enabled cities: a survey, Sustainable Cities and Society 49 (2019). DOI: 10.1016/j.scs.2019.101608
[3] A. B. Ádám, L. Kocsány, E. G. Szádeczky-Kardoss, V. Tihanyi, Parking lot exploration strategy, Proc. of the 19th IEEE Int. Symp. on Computational Intelligence and Informatics and the 7th IEEE Int. Conf. on Recent Achievements in Mechatronics, Automation, Computer Sciences and Robotics (CINTI-MACRo), Szeged, Hungary, 14-16 November 2019, pp. 000169-000174. DOI: 10.1109/cinti-macro49179.2019.9105160
[4] A. Athira, S. Lekshmi, P. Vijayan, B. Kurian, Smart parking system based on optical character recognition, Proc. of the 3rd Int. Conf. on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23-25 April 2019, pp. 1184-1188. DOI: 10.1109/icoei.2019.8862517
[5] Parkl Digital Technologies Kft, Parkl innovative parking, 2020. Online [accessed 15 September 2021] https://parkl.net/hu/
[6] Smart Lynx Kft, Parker, 2020. Online [accessed 15 September 2021] https://smartlynx.hu/
[7] E. Szádeczky-Kardoss, B. Kiss, Designing a tracking controller for passenger cars with steering input, Period. Polytech. Elec. Eng. 52 (2008), pp. 137-144. DOI: 10.3311/pp.ee.2008-3-4.02
[8] E. Szádeczky-Kardoss, B. Kiss, Continuous-curvature paths for mobile robots, Period. Polytech. Elec. Eng. 53 (2009), pp. 63-72. DOI: 10.3311/pp.ee.2009-1-2.08
[9] E. Galceran, M. Carreras, A survey on coverage path planning for robotics, Robotics and Autonomous Systems 61 (2013), pp. 1258-1276. DOI: 10.1016/j.robot.2013.09.004
[10] R. Bellman, Dynamic programming treatment of the travelling salesman problem, J. ACM 9 (1962), pp. 61-63. DOI: 10.1145/321105.321111
[11] S. Kumar Ghosh, Approximation algorithms for art gallery problems in polygons and terrains, in: WALCOM: Algorithms and Computation, M. S. Rahman, S. Fujita (editors), Springer, Berlin, Heidelberg, 2010, ISSN 0302-9743, pp. 21-34.
[12] W. P. Chin, S. Ntafos, Optimum watchman routes, Information Processing Letters 28 (1988), pp. 39-44. DOI: 10.1016/0020-0190(88)90141-x
[13] H. Choset, E. Acar, A. A. Rizzi, J. Luntz, Exact cellular decompositions in terms of critical points of Morse functions, Proc. of the IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, USA, 24-28 April 2000, pp. 2270-2277. DOI: 10.1109/robot.2000.846365
[14] M. A. Akkus, Trapezoidal cell decomposition and coverage, Middle East Technical University, Department of Computer Engineering. Online [accessed 14 September 2021] https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng786/hw3.html
[15] H. Choset, Coverage of known spaces: the boustrophedon cellular decomposition, Auton. Robots 9 (2000), pp. 247-253. DOI: 10.1023/a:1008958800904
[16] S. Raghavan, Distributed algorithms for hierarchical area coverage using teams of homogeneous robots, Master's thesis, Indian Institute of Technology Madras, November 2020.
[17] J. Park, Cell decomposition, course: Introduction to Autonomous Mobile Robotics, Intelligent Systems and Robotics Lab, Division of Electronic Engineering, Chonbuk National University. Online [accessed 14 September 2021] https://cupdf.com/document/cell-decomposition-course-introduction-to-autonomous-mobile-robotics-prof.html
[18] A. Das, M. Diu, N. Mathew, C. Scharfenberger, J. Servos, A. Wong, J. Zelek, D. Clausi, S. Waslander, Mapping, planning, and sample detection strategies for autonomous exploration, J. of Field Robotics 31 (2014), pp. 75-106. DOI: 10.1002/rob.21490
[19] M. McNally, Walking the grid, Robotics 52 (2006), pp. 151-155. Online [accessed 20 September 2021] https://dl.acm.org/doi/pdf/10.5555/1151869.1151889
[20] A. B. Ádám, L. Kocsány, E. G. Szádeczky-Kardoss, Cell decomposition based parking lot exploration, Proc. of the Workshop on the Advances of Information Technology, Budapest, Hungary, 30 January 2020, pp. 5-12.
[21] Y. Gabriely, E. Rimon, Spiral-STC: an on-line coverage algorithm of grid environments by a mobile robot, Proc. of the IEEE Int. Conf. on Robotics and Automation, Washington, DC, USA, 11-15 May 2002, pp. 954-960. DOI: 10.1109/robot.2002.1013479
[22] Y. Gabriely, E. Rimon, Competitive on-line coverage of grid environments by a mobile robot, Computational Geometry 24 (2003), pp. 197-224. DOI: 10.1016/s0925-7721(02)00110-4

Evaluation of the electricity savings resulting from a control system for artificial lights based on the available daylight

Acta IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4
Acta IMEKO | www.imeko.org | December 2022 | Volume 11 | Number 4

Francesco Nicoletti1, Vittorio Ferraro2, Dimitrios Kaliakatsos1, Mario A. Cucumo1, Antonino Rollo1, Natale Arcuri1
1 Department of Mechanical, Energy and Management Engineering (DIMEG), University of Calabria, Via P. Bucci, 87036 Rende (CS), Italy
2 Department of Computer, Modelling, Electronics and System Engineering (DIMES), University of Calabria, Via P. Bucci, 87036 Rende (CS), Italy

Section: Research paper
Keywords: daylight; artificial lights; energy savings
Citation: Francesco Nicoletti, Vittorio Ferraro, Dimitrios Kaliakatsos, Mario A. Cucumo, Antonino Rollo, Natale Arcuri, Evaluation of the electricity savings resulting from a control system for artificial lights based on the available daylight, Acta IMEKO, vol. 11, no. 4, article 12, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-12
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received July 11, 2022; in final form October 13, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by Regione Calabria (PAC Calabria 2014-2020, Asse Prioritario 12, Azione B) 10.5.12) for F. Nicoletti's contribution.
Corresponding author: Francesco Nicoletti, e-mail: francesco.nicoletti@unical.it

1. Introduction

Buildings are the place where people spend most of their time. A large amount of energy is used inside buildings to meet human needs and to provide thermal-hygrometric and visual comfort. In the coming years, it will be very important to adopt solutions that reduce consumption. The most widespread strategy for reducing consumption consists of improving the efficiency of the building envelope (reducing thermal losses through transmission by opaque surfaces [1]-[5]) and the efficiency of plant components [6]. It is especially important to also utilize solar gains in a smart way, because they bring thermal and visual benefits. The solar source, in fact, has a considerable influence on the behavior of the building. Scientific research in this regard is dedicated to three main objectives: 1) production of electrical and thermal power through solar panels [7], [8]; 2) control of solar inputs to reduce the winter and summer heat loads; 3) reaching adequate daylight within the rooms. In addition to user awareness to reduce waste, intelligent solutions to better exploit solar radiation are divided into passive and active systems. Passive systems do not require a control system; examples include bioclimatic solar greenhouses [9], green roofs [10], [11] and Trombe walls [12]. Active systems, on the other hand, use sensors and actuators, which often communicate via IoT technology to make automatic adjustments [13], [14]. The visual comfort of the occupants, which is often neglected, is also a primary objective. This research analyses a dynamic system for controlling artificial lighting that avoids unnecessary switching on when the natural illuminance is adequate. Daylight plays a key role in visual comfort inside buildings. Estimating daylight inside a room is not a simple problem, and over the years different methodologies have been developed to consider the different variables involved.
Abstract
Natural lighting in building environments is an important aspect of the occupants' mental and physical health. Furthermore, the proper exploitation of this resource can bring energy benefits related to the reduced use of artificial lighting. This work provides some estimates of the energy that can be saved by using a lighting system that recognises the indoor illuminance. In particular, it is able to manage the switching on of the lights according to the daylight detected in the room. The savings from this solution depend on the size and orientation of the window. The analysis is conducted on an office by means of simulations using the INLUX-DBR code. The location has an influence on the luminance characteristics of the sky; the analysis is conducted with reference to one city in the south and one in the north of Italy (Cosenza and Milan). The energy saving is almost independent of latitude and is therefore representative of the Italian territory. It is highly variable according to exposure, being the highest for southern exposure (97 % with a window size equal to 36 % of the floor area) and between 26 % and 48 % (as a function of window size) for northern exposure.

Numerous computational codes are available to estimate the illuminance on the working plane [15]-[17]. These codes, however, are difficult to adapt to a real case, since they use the sky luminance models present within their libraries. In particular, the models are predominantly related to the CIE clear-sky model [18] or standard overcast-sky distributions; the use of locally measured or estimated luminance is not allowed. In this regard, the authors developed the INLUX-DBR calculation code [19], [20]. The luminous contributions that the code considers are: light carried by direct rays, light scattered from the sky and light reflected from the outside ground.
it allows a realistic estimation of the illuminance at various points in the room. the code is very flexible, since it allows the use of measured sky luminance distributions as well as classical model calculations. the resolution method used is the "radiosity model", by which the luminance distributions on the walls are evaluated with an implicit method. the code was validated against an experimental case conducted in japan, which allows case studies to be extended for parametric analyses. in the present study, we investigate the possibility of replacing the artificial lighting system with one that adapts its output to reach the setpoint illuminance, taking into account the daylight contribution already present. this is a control strategy often applied in smart buildings. this work provides numerical data to quantitatively assess the energy savings obtainable from this investment, which depend on a number of factors: in particular, the geometry of the room and the size and orientation of the window. the objective of the work is to provide useful indications for taking advantage of daylight and saving the electricity used by artificial lights. in addition, this solution provides visual comfort and makes the environment healthier for the occupants. two italian locations are simulated in the work, one in southern italy and one in northern italy: cosenza and milan.

1.1. literature review

controlling artificial light according to the natural illuminance is useful to provide an environment with constant illuminance over time, reducing unnecessary waste. the daylight recorded in rooms depends greatly on the geometry of the rooms and on the ratio of window to wall areas. in addition, the illuminance depends on the position of the sun. in the past, many authors have made important contributions on this topic. the most important parameter used in building design is the daylight factor (df), defined by trotter [21]. its estimation was improved by dresler [22], who formulated empirical relationships.
hopkinson et al. [23] introduced the need to consider external obstructions and ground reflection. later, tregenza [24] defined an analytical method called split-flux, which is based on overcast conditions. numerous models have been formulated to estimate the df; these use scaled models, empirical formulas and corresponding diagrams [25], [26]. the df, however, allows valid considerations only under overcast conditions, because on clear days the indoor illuminance is highly dependent on the position of the sun in the sky; a df-based analysis would wrongly imply that the indoor illuminance distribution does not depend on the orientation of the window. it would be appropriate to define indices that evaluate illuminance on an annual scale, but this greatly increases the complexity of the problem. for our purpose, therefore, the classical df-based methods are not useful. nabil et al. [27] showed that the alternation of clear, intermediate and cloudy days makes indoor lighting highly dependent on window orientation. the models mainly used to simulate the distribution of light inside a room are divided into "radiosity models" and "ray tracing models"; the two approaches provide equivalent results, both being based on valid assumptions. the code used in the present work belongs to the first category. the method consists of solving, at each instant of time, a system of equations obtained from the balance on each surface of the room. each surface is treated as a body that reflects incident light isotropically, defined by its reflectivity and by its view factors toward all the other surfaces. in ray tracing models, the path of the solar ray is followed through its various reflections on the walls of the room; the illuminance is calculated by setting a maximum number of reflections, and no system of equations is solved. daylighting studies are very diversified: the problems are always significantly different depending on the building conformation and scale. alhagla et al.
[28] found that the benefits of daylight depend on location. in particular, in hot locations the exploitation of sunlight leads to an increase in the cooling demand in the summer period, while in locations with colder climates the use of daylight can favour the input of solar radiation and thus reduce the winter heat demand. athienitis et al. [29] analysed control systems for artificial lights associated with daylight to manage electricity use and reduce energy consumption. the authors were also able to achieve excellent results in terms of visual well-being. the benefits achieved on electricity consumption are remarkable, although the authors point out a risk of overheating during summer periods. krarti et al. [30] studied different geometric room configurations in order to analyse the impact of daylight on energy consumption. the analysis was conducted with different window sizes and different types of glass for four locations in america. they also observed that the geographical location has a low impact on daylight. hee et al. [31] presented a review of scientific papers in which the impact of windows on room illuminance and the resulting energy savings are assessed. they also listed the different optimization techniques used to choose the glazing, and their advice is to perform a careful economic evaluation when choosing the glass and defining the size of the opening in a room. the largest number of scientific studies concerns the assessment of the solar radiation introduced through windows together with daylight. these effects are certainly important because they have a considerable influence on the energy balance of a building. the present work, however, has a different objective: the solar contributions are fixed by the geometry of the building and by the ratio of windows to opaque walls, and this paper, unlike the others, does not aim to evaluate the best architectural design for the building.
instead, the aim is to evaluate, for a given building (thus for given daylight and solar gains), whether the artificial lighting system can be made more efficient, by making it intelligent, so as to exploit daylight. for example, when the room is long and narrow, the illuminance established in the areas away from the window is not sufficient; such rooms are generally not suitable for exploiting daylight properly. moon et al. [32] performed an analysis on the control of the lighting system using photosensors in the room, carrying out simulations with the lightscape software. the use of software to simulate daylight is interesting as it allows a comparison between different locations with reference to the same geometry. it is therefore necessary to understand whether the presence of an artificial light control system can lead to savings, and to try to quantify them [33]-[35]. li et al. [36] carried out experimental studies on daylight and energy consumption for atrium corridors. doulos et al. [37] showed that the power factor of led lamps worsens when they are dimmed, and this could make the use of fluorescent lamps more advantageous than leds under certain conditions. bellia et al. [38] observed experimentally that there are no significant differences between using a proportional dimmable system and an on/off system; in particular, for southern exposures the difference is very low, so the installation of a dimmable system is not recommended in these cases due to its higher cost compared to an on/off system. the number of variables on which lighting problems depend is high, and it is not always clear how much benefit the presence of a dimmable system can bring. the present work aims to fill this gap and to provide useful information for locations in the latitude range of italy.

2. methodology

2.1.
the inlux-dbr code and its experimental validation

the lighting analysis was carried out using the inlux-dbr calculation code. this code evaluates the luminance distribution within an environment. in particular, it divides the opaque structures and the transparent structures into "n" and "m" surface elements respectively. on each of these, the illuminance value depends on:
a. a component of direct solar radiation which, passing through the transparent surfaces, hits the surface under examination;
b. a component of diffuse solar radiation which, passing through the transparent surfaces, hits the surface under examination;
c. a component of solar radiation reflected from the external environment which, passing through the glazed surfaces, hits the surface under examination;
d. a component due to the infinite internal reflections from the other surfaces that make up the environment in question.
these components influence both the illuminance of the opaque surfaces and that of the glazed surfaces. in the present analysis, the glazed surfaces are assumed to be made of ordinary clear glass, so the direction of the solar radiation incident on the surface remains unchanged. since the effects of the radiation reflected from the ground outside the environment must be considered, the ground is divided into "p" surface elements. taking into account the various incident radiative components, the illuminance $E_i$ of the i-th surface is determined by means of eq. (1):

$$E_i \, \Delta A_i = \sum_{w=1}^{m} \tau(\alpha,\varphi)\, L_p(\alpha,\varphi)\, \Delta A_w\, \pi\, F_{w-i} + k\, \tau(\alpha_s,\varphi_s)\, \Delta A_i\, E_{vs} \cos\vartheta_{si} + \sum_{g=1}^{p} \tau(\alpha_g,\varphi_g)\, L_g\, \Delta A_g\, \pi\, F_{g-i} + \sum_{j=1}^{n+m} \Delta A_j\, \rho_j\, E_j\, F_{j-i} \quad (1)$$

on the left-hand side, the illuminance of the i-th element ($E_i$) multiplied by its surface ($\Delta A_i$) appears; on the right-hand side appear all the radiative components that influence the value of the illuminance on the surface under examination.
in particular, $\tau(\alpha, \varphi)$ is the transmissivity of the glazed surface; $L_p(\alpha, \varphi)$ is the luminance of the celestial vault, which varies with the point of the vault considered; $\alpha$ and $\varphi$ are the angular height and the azimuth of the considered point of the celestial vault; $\Delta A_w$ is the area of the glazed element under examination and $F_{w-i}$ the view factor between the glazed element "w" and the i-th element; $k$ is a coefficient equal to one if the element in question is directly hit by solar radiation and equal to zero otherwise; $\tau(\alpha_s, \varphi_s)$ is the transmissivity of the glazed surface to the direct solar illuminance $E_{vs}$, with $\alpha_s$ and $\varphi_s$ the height and azimuth of the direct solar radiation and $\vartheta_{si}$ the angle between the direct solar radiation and the normal to the surface of the i-th element; $\tau(\alpha_g, \varphi_g)$ is the transmissivity of the glazed surface to the radiation reflected from the ground, $L_g$ the luminance of the ground, $\Delta A_g$ the area of the g-th ground element and $F_{g-i}$ the view factor between the g-th element and the i-th element; finally, $\rho_j$ is the reflectivity of the internal elements of the environment, $\Delta A_j$ the area of the j-th internal element that reflects part of its incident illuminance $E_j$ onto the i-th element, and $F_{j-i}$ the view factor between the j-th element and the i-th element. this balance equation is written for each i-th opaque element inside the environment. figure 1 shows a graphic representation of the various radiative components exchanged by the different surfaces and considered in the inlux-dbr code.
considering the external ground as an isotropic surface and assuming that its reflectivity $\rho_g$ does not differ between direct and diffuse radiation, the luminance of the ground $L_g$ can be calculated by means of eq. (2):

$$L_g = \frac{\rho_g \left(E_{vs} \sin\alpha_s + E_0\right)}{\pi} \quad (2)$$

where $E_0$ is the diffuse illuminance on the horizontal plane.

figure 1. representation of the various radiative components exchanged by the different surfaces inside the room.

in the case in question, the illuminance on the glazed surface depends solely on the radiative components reflected by the surfaces inside the environment, since by hypothesis there is a single glazed surface in the room. consequently, eq. (1) can be simplified, and the illuminance on a glazed element inside the room can be determined by means of eq. (3):

$$E_w \, \Delta A_w = \sum_{i=1}^{n} \Delta A_i\, \rho_i\, E_i\, F_{i-w} \quad (3)$$

where $E_w$ is the internal illuminance of the w-th element of the glazed surface under examination, and $F_{i-w}$ is the view factor between the i-th element and the w-th element of the glazed surface. finally, a system of $(n+m)$ equations in $(n+m)$ unknowns is obtained, which is solved iteratively by means of a finite-difference methodology starting from a uniform initial solution. the calculation code has been implemented in the matlab environment; further details are provided in [19]. one of the advantages of this computational code is that it can implement different methodologies to model the behaviour of the celestial vault. for example, it is possible to use: 1) the cie clear and overcast sky distributions [39]; 2) perez's clear-sky model [40]; 3) igawa's model [41]; 4) the model of kittler and darula [18], [42]; 5) the tregenza model [43]; 6) experimental models created ad hoc for the location in question.
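the iterative solution of the $(n+m)$ balance equations can be illustrated with a short sketch. the following python fragment is a minimal illustration, not the actual matlab implementation: it solves a fixed-point system of the form E = E_direct + M·E, where M collects the reflection terms $\rho_j F_{j-i} \Delta A_j / \Delta A_i$ of eqs. (1) and (3), starting from a uniform initial solution as described above. the three-surface reflectivities, view factors and direct terms are hypothetical toy values.

```python
import numpy as np

def solve_illuminance(E_direct, M, tol=1e-9, max_iter=1000):
    """fixed-point iteration E = E_direct + M @ E, starting from a
    uniform initial solution, as in the inlux-dbr approach."""
    E = np.full_like(E_direct, E_direct.mean())
    for _ in range(max_iter):
        E_new = E_direct + M @ E
        if np.max(np.abs(E_new - E)) < tol:
            return E_new
        E = E_new
    return E

# toy 3-surface enclosure with equal areas (hypothetical values)
rho = np.array([0.5, 0.5, 0.2])            # reflectivities
F = np.array([[0.0, 0.5, 0.5],             # F[j, i]: view factor from j to i
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
# reflection matrix: M[i, j] = rho_j * F[j, i] (areas cancel for equal areas)
M = rho[None, :] * F.T
# direct + sky + ground contributions already divided by the element areas
E_direct = np.array([1000.0, 0.0, 200.0])
E = solve_illuminance(E_direct, M)
```

since the reflectivities are below one, the iteration is a contraction and converges to the same solution as a direct solve of the linear system (I − M) E = E_direct.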
the developed code was validated by means of data measured at an experimental site in osaka, considering clear, overcast and intermediate days. the validation was carried out using two error indices evaluated between the measured and calculated illuminance at various points in the environment: the normalized mean bias error (nmbe) and the normalized root mean square error (nrmse). in particular, the nmbe is calculated by means of the following equation:

$$NMBE = \frac{1}{N} \sum_{i=1}^{N} \frac{V_{calc,i} - V_{meas,i}}{V_{meas,i}} \quad (4)$$

while the nrmse is calculated by means of the following relation:

$$NRMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\frac{V_{calc,i} - V_{meas,i}}{V_{meas,i}}\right)^2} \quad (5)$$

the values of these error indices are summarized in figure 2. a good correspondence between the measured and calculated values is observed; naturally, the error increases as the solar radiation entering the environment increases.

2.2. calculation methodology of the electrical energy saving

the analysis was conducted with reference to two different locations: cosenza (lat. 39° 18' n) and milan (lat. 45° 28' n). the distribution of the point illuminance and the average illuminance in the environment were determined using the inlux-dbr code. the code requires as input the monthly-average hourly direct and diffuse solar radiation and the illuminance on the horizontal plane. these parameters were obtained through the "european database of daylight and solar radiation" provided by satel-light [44]. the luminance of the sky was determined with the perez model, chosen on the basis of an accuracy analysis carried out in a previous study [20] which showed the quality of the model. the perez model [40] is an all-sky calculation methodology based on the clearness index.
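as a check of eqs. (4) and (5), the two indices can be computed directly; the sketch below uses made-up sample values, not data from the osaka campaign, purely to show the definitions in code form:

```python
import math

def nmbe(calc, meas):
    """normalized mean bias error, eq. (4): average relative deviation."""
    return sum((c - m) / m for c, m in zip(calc, meas)) / len(meas)

def nrmse(calc, meas):
    """normalized root mean square error, eq. (5)."""
    return math.sqrt(sum(((c - m) / m) ** 2 for c, m in zip(calc, meas)) / len(meas))

# hypothetical calculated vs. measured illuminance values (lux)
calc = [520.0, 480.0, 610.0]
meas = [500.0, 500.0, 600.0]
bias, spread = nmbe(calc, meas), nrmse(calc, meas)
```

note that the nmbe can be small even when individual errors are large, since positive and negative deviations cancel, which is why the nrmse is evaluated alongside it.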
the latter is determined by means of the following relation:

$$\varepsilon = \frac{\dfrac{G_{d0} + G_{b0}/\sin\alpha_s}{G_{d0}} + 5.535 \cdot 10^{-6}\, z_s^3}{1 + 5.535 \cdot 10^{-6}\, z_s^3} \quad (6)$$

where $G_{d0}$ is the diffuse irradiance on the horizontal plane, $G_{b0}$ the direct irradiance on the horizontal plane and $z_s$ the sun zenith angle. this index varies between $\varepsilon = 1$ (completely overcast sky) and $\varepsilon = 6.2$ (completely clear sky). as a function of the clearness index, eight sky conditions are identified, each with its own parameters and formulas for calculating the luminance of the sky; this quantity varies point by point in the celestial vault and appears in eq. (1). in particular, the luminance of the sky can be calculated by means of the following relationship:

$$L_P = L_Z \cdot l_r \quad (7)$$

where $l_r$ is the relative luminance and $L_Z$ the zenith luminance. the relative luminance can be determined by means of the following relationship:

$$l_r = \frac{\Phi(\alpha) \cdot f(\zeta)}{\Phi(\pi/2) \cdot f(\pi/2 - \alpha_s)} \quad (8)$$

where $\alpha$ is the angular height of the considered point of the sky and $\zeta$ is the angular distance between the considered point of the sky and the position of the sun. the functions $\Phi(x)$ and $f(x)$ can be determined by means of the following equations:

$$\Phi(x) = 1 + a \cdot e^{b/\sin x} \quad (9)$$

$$f(x) = 1 + c \cdot e^{d x} + e \cdot \cos^2 x \quad (10)$$

in eqs. (9) and (10) the parameters $a$, $b$, $c$, $d$ and $e$ are functions of the solar height $\alpha_s$, the sky brightness $\Delta$ and the clearness index $\varepsilon$. more information on the calculation of these parameters can be found in ref. [25].

figure 2. analysis of the error indices during the experimental campaign carried out at the experimental site located in osaka.

the zenith luminance, which appears in eq. (7), can be determined by means of the following correlation developed by perez [45] as the sky condition varies:

$$L_Z = G_{d0} \left[ a_i + c_i \sin\alpha_s + c_i' \, e^{-3(\pi/2 - \alpha_s)} + d_i \right] \quad (11)$$

where the constants $a_i$, $c_i$, $c_i'$ and $d_i$ are functions of the clearness index $\varepsilon$.

2.3.
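the perez relations above can be condensed into a pair of functions. the sketch below is illustrative only: eq. (6) is implemented as reconstructed here (with the direct horizontal irradiance converted to its normal component via sin α_s), and the coefficients a-e passed to the relative-luminance function are arbitrary placeholders, not the tabulated perez values for the eight sky conditions.

```python
import math

def clearness_index(G_d0, G_b0, alpha_s, z_s_deg):
    """sky clearness epsilon, eq. (6). G_d0, G_b0: diffuse and direct
    horizontal irradiance (W/m2); alpha_s: solar height (rad);
    z_s_deg: sun zenith angle in degrees."""
    kappa = 5.535e-6 * z_s_deg ** 3
    return ((G_d0 + G_b0 / math.sin(alpha_s)) / G_d0 + kappa) / (1 + kappa)

def relative_luminance(alpha, zeta, alpha_s, a, b, c, d, e):
    """relative luminance l_r of a sky point, eqs. (8)-(10).
    alpha: angular height of the sky point; zeta: angular distance
    from the sun; alpha_s: solar height (all in radians)."""
    Phi = lambda x: 1.0 + a * math.exp(b / math.sin(x))
    f = lambda x: 1.0 + c * math.exp(d * x) + e * math.cos(x) ** 2
    return (Phi(alpha) * f(zeta)) / (Phi(math.pi / 2) * f(math.pi / 2 - alpha_s))

# an overcast sky (no direct component) gives epsilon = 1 by construction
eps = clearness_index(G_d0=100.0, G_b0=0.0, alpha_s=math.radians(30), z_s_deg=60.0)
```

by construction, eq. (8) normalizes the luminance so that $l_r = 1$ at the zenith, where $\alpha = \pi/2$ and $\zeta = \pi/2 - \alpha_s$.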
case study

the analysis is conducted with reference to a case study. the room for which the smart lighting system is simulated measures 6 × 6 × 3 m³ and has a double-glazed window with transmissivity equal to 0.75. the size of the window varies between 5.04 m² and 12.96 m². the minimum size is fixed on the basis of the italian legislation [46], which imposes a minimum glazed area of 1/8 (12.5 %) of the total floor area of a room. the reflectivities of the room's opaque vertical and horizontal structures are summarized in table 1. by means of the inlux-dbr calculation code, an analysis was carried out over a whole year in order to evaluate the distribution of natural lighting inside the room. in particular, the natural illuminance was evaluated at the 12 points shown in figure 3. at each of these points a fluorescent lamp with a power of 36 w and a luminous flux of 3200 lumen is assumed to be placed. the adopted lamp control logic activates a row of lamps only if the natural illuminance at two points out of three of the same row is less than 500 lux. as the orientation and size of the window vary, the distribution and intensity of the natural lighting vary; consequently, the number of switch-ons of the lighting system and the electrical consumption vary as well. the final goal is precisely to determine the energy savings that can be obtained in each configuration.

3. analysis of results

figure 4 shows the distribution of natural lighting at the 12 selected points for the city of cosenza at 12:30 pm on january 15th. the analysis is carried out with reference to a window surface of 5.04 m² as the exposure varies.
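the on/off control logic described above can be sketched as follows. this is a minimal illustration: the 4 × 3 grid of illuminance values is invented for the example, with one 36 w lamp per measurement point as in the paper.

```python
def rows_to_switch_on(illuminance, setpoint=500.0):
    """a row of lamps is switched on only if the natural illuminance at
    two points out of three of the same row is below the setpoint."""
    return [sum(e < setpoint for e in row) >= 2 for row in illuminance]

def lighting_power(illuminance, lamp_power_w=36.0, lamps_per_row=3):
    """instantaneous electrical power absorbed by the active rows."""
    return sum(rows_to_switch_on(illuminance)) * lamps_per_row * lamp_power_w

# hypothetical snapshot: 4 rows x 3 points, row 1 closest to the window
grid = [[900.0, 750.0, 820.0],   # all points above the setpoint: lamps off
        [480.0, 510.0, 460.0],   # two of three below 500 lux: lamps on
        [300.0, 310.0, 290.0],
        [150.0, 160.0, 140.0]]
active = rows_to_switch_on(grid)
```

integrating this instantaneous power over the occupied hours of the year yields the monthly and annual consumption figures discussed below.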
with the window exposed to the north, the illuminance exceeds 500 lux only in the row closest to the window; with the exposure to the east, the natural illuminance is higher than 500 lux in the first row and at two points out of three of the second row; with the southern exposure, the natural illuminance exceeds 500 lux at all points; finally, with the western exposure, the natural illuminance is higher than 500 lux only in the two rows farthest from the window. the same analysis was conducted at 12:30 pm on july 15th; the results are reported in figure 5.

table 1. characteristics of the walls of the reference room.

surface       dimensions     area (m²)   ρ
floor         6 × 6 m²       36          0.2
ceiling       6 × 6 m²       36          0.7
rear wall     6 × 3 m²       18          0.5
left wall     6 × 3 m²       18          0.5
right wall    6 × 3 m²       18          0.5
front wall    6 × 3 m²       12.96       0.5
ref. window   4.2 × 1.2 m²   5.04        0.15
work plane    6 × 6 m²       36          –

figure 3. position of the reference points at which the analysis of the natural illuminance was conducted.

figure 4. natural illuminance at 12 points of the work plane. january 15th, 12:30 pm, cosenza.

figure 5. natural illuminance at 12 points of the work plane. july 15th, 12:30 pm, cosenza.

the results are similar, but the values obtained for natural lighting are sharply lower in july. this is due to the higher solar trajectory in the sky in the summer months: the solar height varies from $\alpha_s = 29.37°$ on january 15th at 12:30 pm to $\alpha_s = 70.77°$ on july 15th at 12:30 pm. this higher solar trajectory causes less direct solar radiation to hit the surfaces inside the room in july: the average solar exposure goes from 200 klux·hour/day in january to 119 klux·hour/day in july. furthermore, the lower illuminance in july is due to the lower brightness of the celestial vault caused by the greater distance of the sun.
the distribution of the luminance of the sky varies considerably between january and july. this can be observed in figure 6, which shows how the luminance of the sky varies with the angular height $\alpha$ and the azimuth $\varphi$ of the considered sky point at noon on january 15th (a) and at noon on july 15th (b) in the city of cosenza. for all points of the celestial vault, the luminance of the sky is higher in january than in july, for the reasons discussed above. for the case in question, considering a 5.04 m² south-facing window, the opaque elements arranged centrally in the room exchange radiation with the points of the celestial vault comprised between an angular height of $\alpha = 2°$ and an angular height of $\alpha = 23°$. the difference in sky luminance can also be evaluated with reference to the distributions obtained by varying the azimuth for $\alpha = 5°$ and $\alpha = 25°$, with reference to noon on january 15th and noon on july 15th. these distributions are represented in figure 7; once again, the luminance of the sky in january is much greater than that in july under the same conditions. figure 8 shows how the average illuminance in the room varies with the orientation of the window during the hours of january 15th and july 15th. figures 9 and 10 show, respectively, how the illuminance at point 7 and point 8 of the room (see figure 3) varies with the orientation of the window on january 15th and july 15th; these are the two central points farthest from the window. with reference to the case with the window exposed to the north, in july the natural illuminance is highest in the first and last hours of the day, due to the longer and lower solar trajectory in the celestial vault.
therefore, in the first and last hours of the day a certain amount of direct solar radiation strikes the internal opaque surfaces, affecting the natural internal lighting. in the central hours of the day, the direct solar radiation incident on the glazed surface is obviously zero, and this limits the natural lighting inside the room. in january, instead, the solar trajectory is shorter but higher in the celestial vault, and this generates the typical bell-shaped trend of the natural illuminance inside the room. with the window exposed to the south, the natural lighting shows trends similar to those described for the north orientation but with much higher intensity, since in this case the sun passes in front of the window along its trajectory.

figure 6. sky luminance distribution (perez) as a function of azimuth and elevation angles in cosenza: a) noon on january 15th; b) noon on july 15th.

figure 7. comparison of sky luminance distributions for noon of january 15th and july 15th, cosenza.

figure 8. mean illuminance on the work plane as a function of time, cosenza.

finally, with reference to the cases with eastern and western exposure, there is a peak of illuminance in the first and last hours of the day, respectively, due to the solar path (sunrise and sunset). the higher intensity of natural lighting in these hours in july is once again due to the longer solar trajectory, which affects the amount of direct solar radiation entering the room through the window. in the central hours, on the other hand, the natural lighting is more intense in january due to the lower solar trajectory.
the electricity consumption for the city of cosenza was also analysed as the area and the orientation of the window change. by hypothesis, the environment is used as an office with a working period of 8 hours, from 8:00 am to 4:00 pm. to evaluate the convenience of each configuration, two energy saving indices were introduced:
1) the percentage of energy savings compared to a case in the total absence of natural light ($R_1$). assuming that in the room there is no contribution from natural light, an annual electricity consumption $C_{max}$ equal to 860 kWh is estimated. the energy saving index $R_1$ can be calculated using the following formula:

$$R_1 = \frac{C_{max} - C}{C_{max}} \quad (12)$$

2) the percentage of energy savings compared to a reference case ($R_2$). the case with a north-facing window of 5.04 m² is taken as reference, and its annual electricity consumption is indicated with $C_{ref}$; this configuration is taken as reference because it has the lowest natural illuminance. the index $R_2$ can therefore be determined by means of the following formula:

$$R_2 = \frac{C_{ref} - C}{C_{ref}} \quad (13)$$

table 2 summarizes the monthly and annual electricity consumption depending on the orientation and area of the window, together with the values of the indices $R_1$ and $R_2$ for the various configurations.

figure 9. local illuminance at point 7 as a function of time, cosenza.

figure 10. local illuminance at point 8 as a function of time, cosenza.

table 2. electrical energy consumption in a room 6 × 6 × 3 m³ of a building located in cosenza.
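eqs. (12) and (13) can be checked numerically against the table values. the sketch below uses $C_{max}$ = 860 kWh and the north-facing 5.04 m² reference consumption $C_{ref}$ = 633 kWh reported for cosenza, applied to the south-facing 5.04 m² case:

```python
def saving_indices(C, C_max=860.0, C_ref=633.0):
    """energy saving indices of eqs. (12) and (13), as fractions.
    C_max: estimated annual consumption without any daylight (kWh);
    C_ref: consumption of the north-facing 5.04 m2 reference case (kWh)."""
    R1 = (C_max - C) / C_max
    R2 = (C_ref - C) / C_ref
    return R1, R2

# south-facing 5.04 m2 window in cosenza: C = 183 kWh/year (table 2)
R1, R2 = saving_indices(183.0)
```

the resulting fractions, about 0.79 and 0.71, are consistent with the ≈78 % and 71 % reported in table 2 for the southern exposure with the reference window size.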
month          j     f     m     a     m     j     j     a     s     o     n     d     year  r1   r2
days           21    20    22    19    22    21    21    22    21    22    21    17    249

window 1.2 × 4.2 m² = 5.04 m² (14 % of floor area) — reference
north (kWh)    54.4  51.8  57.0  45.1  52.3  54.4  54.4  54.4  52.3  57.0  54.4  44.0  633   26   0
east           29.5  30.2  35.6  34.9  40.4  38.5  38.5  40.4  34.0  33.3  29.5  25.7  411   52   35
south          4.5   0.0   0.0   20.5  38.0  43.1  43.1  0.0   0.0   0.0   0.0   0.0   183   78   71
west           29.5  28.1  35.6  32.8  40.4  45.4  43.1  40.4  34.0  33.3  29.5  23.9  416   51   34

window 1.8 × 4.2 m² = 7.56 m² (21 % of floor area)
north (kWh)    45.4  38.9  47.5  41.0  47.5  45.4  47.6  47.5  45.3  38.0  38.5  36.7  520   39   18
east           20.4  21.6  21.4  24.6  35.6  36.3  36.3  30.9  20.4  23.8  18.2  16.5  306   64   52
south          0.0   0.0   0.0   0.0   23.8  29.5  24.9  4.7   0.0   0.0   0.0   0.0   83    90   86
west           20.4  21.6  21.4  26.7  33.3  36.3  34.0  30.8  20.4  23.7  20.4  16.5  306   64   52

window 1.8 × 5.4 m² = 9.72 m² (27 % of floor area)
north (kWh)    40.8  34.5  38.0  32.8  42.7  40.8  40.8  45.1  36.3  38.0  36.3  33.0  459   46   27
east           18.1  15.1  21.4  20.5  28.5  29.5  27.2  23.7  20.4  16.6  15.8  14.6  252   70   60
south          0.0   0.0   0.0   0.0   9.5   24.9  13.6  0.0   0.0   0.0   0.0   0.0   48.1  94   92
west           18.1  15.1  21.3  20.5  28.5  31.7  27.2  23.8  20.4  16.6  18.1  14.6  256   70   59

window 2.4 × 5.4 m² = 12.96 m² (36 % of floor area)
north (kWh)    40.8  34.6  38.0  32.8  38.0  40.8  40.8  38.0  34.0  38.0  36.2  33.0  445   48   30
east           18.1  15.1  19.0  18.5  28.5  27.2  27.2  23.8  20.4  16.6  15.9  14.7  245   71   61
south          0.0   0.0   0.0   0.0   4.7   9.0   6.8   0.0   0.0   0.0   0.0   0.0   20.6  98   97
west           18.1  15.1  19.0  18.5  28.5  27.2  27.2  23.8  20.4  16.6  13.6  14.7  243   72   61

in table 2 it is observed that the index $R_1$ assumes values:
a. between 26 % and 48 % for the northern exposure;
b. between 52 % and 72 % for the eastern and western exposures;
c. between 78 % and 98 % for the southern exposure.
the index $R_2$ assumes values:
a. between 0 % and 30 % for the northern exposure;
b. between 35 % and 61 % for the eastern and western exposures;
c. between 71 % and 97 % for the southern exposure.
the same analysis was repeated with reference to the city of milan in order to evaluate the effects of latitude and climate zone (cosenza belongs to climate zone c and milan to climate zone e) on electricity consumption. the results are summarized in table 3. a comparison of the electricity consumption between milan and cosenza shows that consumption is lower in milan. the satel-light database shows that cosenza has an annual cumulative diffuse illuminance on the horizontal plane about 6.5 % higher than that of milan (37656 klux·hour versus 35345 klux·hour) and an annual cumulative direct illuminance on the vertical plane about 13 % higher (44420 klux·hour versus 39235 klux·hour). however, the sky in milan appears brighter because of the lower solar trajectory; this phenomenon causes greater internal natural lighting and therefore lower electricity consumption. figure 11 a) and b) show the trends of the energy saving indices $R_1$ and $R_2$ respectively, as functions of the window area and orientation and of the location. the analysis shows similar index values for the north, east and west exposures in the two locations, while the values differ considerably for the southern exposure. this highlights that latitude has a significant effect on natural lighting, and consequently on electricity consumption, only when the window faces south; in the other cases this effect is completely negligible. these results are very useful in evaluating the natural illuminance present inside an environment and how much it varies with the surface and orientation of the window. they could also suggest control techniques for dimmable artificial lighting systems.
in this way, the artificial lighting can be adapted to the natural lighting present in the environment.

4. conclusions

table 3. electrical energy consumption (kWh) in a room 6 × 6 × 3 m³ of a building located in milan.

month            j     f     m     a     m     j     j     a     s     o     n     d   year   r1   r2
exposure days   21    20    22    19    22    21    21    22    21    22    21    17    249

reference window 1.2 × 4.2 m² = 5.04 m² (14 % floor area)
north         56.7  51.8  57.0  49.2  52.3  49.9  49.9  54.6  54.4  57.0  54.4  44.0    631   27    -
east          34.0  28.0  33.2  30.8  40.4  38.5  38.5  35.6  34.0  30.8  31.7  23.9    400   53   37
south          0.0   0.0   0.0   8.2  35.6  36.3  36.3  19.0   0.0   0.0   0.0   0.0    135   84   79
west          31.7  28.0  33.2  30.8  40.4  38.5  38.5  35.6  34.0  30.9  29.5  22.0    393   54   38

window 1.8 × 4.2 m² = 7.56 m² (21 % floor area)
north         52.0  38.8  42.7  41.0  47.5  45.3  45.3  47.5  43.1  38.0  52.2  38.5    532   38   16
east          25.0  19.4  21.4  22.6  30.9  34.0  31.7  28.5  20.4  21.4  24.9  14.7    295   66   53
south          0.0   0.0   0.0   0.0   4.7  18.1   9.1   0.0   0.0   0.0   0.0   0.0     32   96   95
west          20.4  19.4  21.4  22.6  30.9  31.7  31.7  28.5  20.4  19.0  20.4  16.5    283   67   55

window 1.8 × 5.4 m² = 9.72 m² (27 % floor area)
north         40.8  34.6  38.0  36.9  38.0  38.5  38.5  40.4  36.3  38.0  38.5  33.0    452   47   28
east          18.1  15.1  19.0  18.5  28.5  29.5  24.9  21.4  18.1  16.6  18.1  12.8    241   72   62
south          0.0   0.0   0.0   0.0   0.0   6.8   0.0   0.0   0.0   0.0   0.0   0.0    6.8   99   99
west          18.1  15.1  16.6  18.5  23.7  27.2  22.3  21.4  18.1  16.6  15.9  12.8    227   74   64

window 2.4 × 5.4 m² = 12.96 m² (36 % floor area)
north         40.8  34.6  38.0  32.8  38.0  38.5  36.3  35.6  36.3  38.0  38.5  33.0    441   49   30
east          18.1  15.1  16.6  18.5  23.8  24.9  22.7  21.4  18.1  16.6  18.1  11.0    225   74   37
south          0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0      0  100  100
west          18.1  15.1  16.6  18.5  23.8  24.9  22.7  21.4  15.9  14.3  15.9  12.8    220   74   38

figure 11. a) saving index r1 as a function of window area and orientation for cosenza and milan; b) saving index r2 as a function of window area and orientation for cosenza and milan.

acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 9

the calculation code for illuminance used in this article is inlux-dbr. this code was experimentally validated through measurements taken in a scaled room of a building located in osaka (japan). estimates of the electricity consumption of artificial lights were performed assuming switching on at critical times. the luminance of the sky is given by the perez model, and the luminance distribution on the interior walls of the room is obtained using the radiosity model. the building in question is an office which has been placed alternatively in two italian locations: cosenza and milan. estimates were made by changing the window size and orientation of the building, analysing the behaviour with the window placed on each of the four orientations coinciding with the cardinal points. in the case of the city of cosenza, the electrical savings obtained varied between 26% and 48% for glazed surfaces exposed to the north, between 52% and 72% to the east and west, and between 78% and 97% to the south. the variations refer to different window sizes ranging from 14% to 36% of the room's floor area. with reference to the city of milan, electricity savings vary between 27% and 49% for a window surface exposed to the north, between 53% and 74% to the east and west, and between 84% and 100% to the south. of course, if the building envelope has a very large window area, using an intelligent management system for artificial light is very convenient. the results show, however, that even for the smallest window size considered (14% of the floor area), electricity savings are considerable: in particular, up to 84% for the city of milan. the results obtained for milan are slightly better than those obtained for the city of cosenza, which is at a lower latitude, although they remain very similar; the results can therefore be considered similar for all areas within this latitude range.
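the energy saving indices r1 and r2 and the validation metrics nmbe and rmsep listed in the nomenclature can be computed as in the sketch below; the paper does not spell out the exact normalisations of nmbe and rmsep, so the common definitions used here are assumptions, and the numbers are illustrative rather than taken from table 3.

```python
import math

def r1(c: float, c_max: float) -> float:
    """percentage saving of consumption c vs. the no-daylight case c_max
    (assumed definition: 100 * (1 - c / c_max))."""
    return 100.0 * (1.0 - c / c_max)

def r2(c: float, c_ref: float) -> float:
    """percentage saving vs. the reference-window case c_ref (assumed definition)."""
    return 100.0 * (1.0 - c / c_ref)

def nmbe(v_calc, v_meas):
    """normalized mean bias error, %, one common definition."""
    n = len(v_meas)
    mean_meas = sum(v_meas) / n
    return 100.0 * sum(c - m for c, m in zip(v_calc, v_meas)) / (n * mean_meas)

def rmsep(v_calc, v_meas):
    """root mean square error percentage, one common definition."""
    n = len(v_meas)
    mean_meas = sum(v_meas) / n
    rmse = math.sqrt(sum((c - m) ** 2 for c, m in zip(v_calc, v_meas)) / n)
    return 100.0 * rmse / mean_meas

# illustrative numbers only
print(r1(300.0, 1000.0))                     # 70.0
print(nmbe([110.0, 90.0], [100.0, 100.0]))   # 0.0
print(rmsep([110.0, 90.0], [100.0, 100.0]))  # 10.0
```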
the numerical results obtained can serve as a reference for making proper use of daylight and reducing the unnecessary use of electricity.

nomenclature

(𝑎, 𝑏, 𝑐, 𝑑, 𝑒); (𝑎i, 𝑏i, 𝑐i′, 𝑑i)  coefficients used in the perez sky model
𝛷, 𝑓  functions used in the perez sky model
𝑘  coefficient used in the illuminance balance
gbo  direct irradiance on the horizontal plane, W/m²
gdo  diffuse irradiance on the horizontal plane, W/m²
lp  luminance of a point of the sky, cd/m²
lg  luminance of a point of the ground, cd/m²
𝑙r  relative luminance
lz  zenith luminance, cd/m²
zs  sun zenith angle, rad
e  illuminance of the surface element, lx
evs  solar direct illuminance, lx
𝐴  area of the surface element, m²
𝑛  number of opaque surface elements inside the room
𝑚  number of glazed surface elements inside the room
𝑝  number of ground surface elements
𝑖  index of opaque surface element
𝑤  index of glazed surface element
g  index of ground surface element
𝐶  yearly energy consumption in the case under examination, Wh/year
𝐶max  yearly energy consumption in the absence of natural illuminance, Wh/year
𝐶ref  yearly energy consumption in the reference case, Wh/year
𝑅1  percentage of energy savings compared to the case in absence of natural illuminance
𝑅2  percentage of energy savings compared to a reference case
nmbe  normalized mean bias error
rmsep  root mean square error percentage
𝑉calc  calculated data
𝑉meas  measured data
n  total number of data

greek symbols
α  elevation angle of a point of the sky, rad
αs  sun elevation angle, rad
αg  elevation angle of a point of the ground, rad
Δ  sky brightness
ε  clearness index
𝜁  angular distance between the sun and a point of the sky, rad
ρ  reflectivity
𝜑  azimuth of a point of the sky, rad
𝜑𝑠  solar azimuth, rad
𝜑𝑔  azimuth of a point of the ground, rad
𝜏  transmissivity
𝜗𝑠  angle between the direction of solar radiation and the normal direction of the surface, rad

acknowledgement

the author f.
nicoletti thanks regione calabria, pac calabria 2014-2020, asse prioritario 12, azione b) 10.5.12, for funding the research.

references

[1] j. svatos, j. holub, t. pospisil, a measurement system for the long-term diagnostics of the thermal and technical properties of wooden houses, acta imeko 9 (2020) 3, pp. 3-10. doi: 10.21014/acta_imeko.v9i3.762
[2] r. bruno, p. bevilacqua, a. rollo, f. barreca, n. arcuri, a novel bio-architectural temporary housing designed for the mediterranean area: theoretical and experimental analysis, energies 15 (2022) 9, 3243. doi: 10.3390/en15093243
[3] r. bruno, p. bevilacqua, v. ferraro, n. arcuri, reflective thermal insulation in non-ventilated air-gaps: experimental and theoretical evaluations on the global heat transfer coefficient, energy build. 236 (2021) 110769. doi: 10.1016/j.enbuild.2021.110769
[4] r. bruno, p. bevilacqua, n. arcuri, assessing cooling energy demands with the en iso 52016-1 quasi-steady approach in the mediterranean area, j. build. eng. 24 (2019) 100740. doi: 10.1016/j.jobe.2019.100740
[5] r. bruno, v. ferraro, p. bevilacqua, n. arcuri, on the assessment of the heat transfer coefficients on building components: a comparison between modeled and experimental data, build. environ. 216 (2022) 108995. doi: 10.1016/j.buildenv.2022.108995
[6] giovanni nicoletti, r. bruno, p. bevilacqua, n. arcuri, gerardo nicoletti, a second law analysis to determine the environmental impact of boilers supplied by different fuels, processes 9 (1), 113. doi: 10.3390/pr9010113
[7] a. manasrah, m. masoud, y. jaradat, p.
bevilacqua, investigation of a real-time dynamic model for a pv cooling system, energies 15 (2022) 5, 1836. doi: 10.3390/en15051836
[8] p. bevilacqua, a. morabito, r. bruno, v. ferraro, n. arcuri, seasonal performances of photovoltaic cooling systems in different weather conditions, j. clean. prod. 272 (2020) 122459. doi: 10.1016/j.jclepro.2020.122459
[9] p. sdringola, s. proietti, u. desideri, g. giombini, thermo-fluid dynamic modeling and simulation of a bioclimatic solar greenhouse with self-cleaning and photovoltaic glasses, energy build. 68 (2014) part a, pp. 183-195. doi: 10.1016/j.enbuild.2013.08.011
[10] p. bevilacqua, the effectiveness of green roofs in reducing building energy consumptions across different climates: a summary of literature results, renew. sustain. energy rev. 151 (2021) 111523. doi: 10.1016/j.rser.2021.111523
[11] p. bevilacqua, r. bruno, n. arcuri, green roofs in a mediterranean climate: energy performances based on in-situ experimental data, renew. energy 152 (2020), pp. 1414-1430. doi: 10.1016/j.renene.2020.01.085
[12] j. szyszka, p. bevilacqua, r. bruno, an innovative trombe wall for winter use: the thermo-diode trombe wall, energies 13 (2020) 9, 2188. doi: 10.3390/en13092188
[13] s. serroni, m. arnesano, l. violini, g. m. revel, an iot measurement solution for continuous indoor environmental quality monitoring for buildings renovation, acta imeko 10 (2020) 4, pp. 230-238. doi: 10.21014/acta_imeko.v10i4.1182
[14] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8 (2019) 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[15] a. ahmad, a. kumar, o. prakash, a. aman, daylight availability assessment and the application of energy simulation software - a literature review, mater. sci. energy technol. 3. doi: 10.1016/j.mset.2020.07.002
[16] e.
guerry, c. d. gǎlǎtanu, l. canale, g. zissis, optimizing the luminous environment using dialux software at "constantin and elena" elderly house - study case, procedia manufacturing. doi: 10.1016/j.promfg.2019.02.241
[17] x. yu, y. su, x. chen, application of relux simulation to investigate energy saving potential from daylighting in a new educational building in uk, energy build. 74. doi: 10.1016/j.enbuild.2014.01.024
[18] spatial distribution of daylight - luminance distributions of various reference skies, cie publication 110, central bureau of the cie, vienna, austria, 1994, color res. appl. 20. doi: 10.1002/col.5080200119
[19] a. de rosa, v. ferraro, n. igawa, d. kaliakatsos, v. marinelli, inlux: a calculation code for daylight illuminance predictions inside buildings and its experimental validation, build. environ. 44. doi: 10.1016/j.buildenv.2008.11.014
[20] v. ferraro, n. igawa, v. marinelli, inlux dbr - a calculation code to calculate indoor natural illuminance inside buildings under various sky conditions, energy. doi: 10.1016/j.energy.2010.05.021
[21] illumination; its distribution and measurement, nature 88 (1911) 2194, pp. 72-73. doi: 10.1038/088072a0
[22] a. dresler, the "reflected component" in daylighting design, transactions of the illuminating engineering society 19 (1954) 2, pp. 50-60. doi: 10.1177/147715355401900203
[23] r. g. hopkinson, j. longmore, p. petherbridge, an empirical formula for the computation of the indirect component of daylight factor, transactions of the illuminating engineering society 19 (1954) 7, pp. 201-219. doi: 10.1177/147715355401900701
[24] p. r. tregenza, modification of the split-flux formulae for mean daylight factor and internal reflected component with large external obstructions, lighting research & technology 21 (1989) 3, pp. 125-128. doi: 10.1177/096032718902100305
[25] m. e. aizlewood, p. j. littlefair, daylight prediction methods: a survey of their use, united kingdom, 1994.
[26] n. ruck, ø. aschehoug, s.
aydinli, j. christoffersen, i. edmonds, r. jakobiak, m. kischkoweit-lopin, m. klinger, e. lee, g. courret, l. michel, j.-l. scartezzini, s. selkowitz, daylight in buildings - a source book on daylighting systems and components.
[27] a. nabil, j. mardaljevic, useful daylight illuminances: a replacement for daylight factors, energy build. 38. doi: 10.1016/j.enbuild.2006.03.013
[28] k. alhagla, a. mansour, r. elbassuoni, optimizing windows for enhancing daylighting performance and energy saving, alexandria engineering journal 58 (2019) 1, pp. 283-290. doi: 10.1016/j.aej.2019.01.004
[29] a. tzempelikos, a. athienitis, the effect of shading design and control on building cooling demand, 1st int. conference on passive and low energy cooling for the built environment, santorini, greece.
[30] m. krarti, p. m. erickson, t. c. hillman, a simplified method to estimate energy savings of artificial lighting use from daylighting, building and environment 40 (6), pp. 747-754. doi: 10.1016/j.buildenv.2004.08.007
[31] w. j. hee, m. a. alghoul, b. bakhtyar, o. elayeb, m. a. shameri, m. s. alrubaih, k. sopian, the role of window glazing on daylighting and energy saving in buildings, renewable and sustainable energy reviews 42 (2015), pp. 323-343. doi: 10.1016/j.rser.2014.09.020
[32] j. w. moon, y. k. baik, s. kim, operation guidelines for daylight dimming control systems in an office with lightshelf configurations, building and environment 180 (2020) 106968. doi: 10.1016/j.buildenv.2020.106968
[33] z. s. zomorodian, m. tahsildoost, assessing the effectiveness of dynamic metrics in predicting daylight availability and visual comfort in classrooms, renewable energy 134 (2019), pp. 669-680. doi: 10.1016/j.renene.2018.11.072
[34] s. m. yacine, z. noureddine, b. e. a. piga, e. morello, d. safa, towards a new model of light quality assessment based on occupant satisfaction and lighting glare indices, energy procedia 122 (2017), pp. 805-810. doi: 10.1016/j.egypro.2017.07.408
[35] j. k.
day, b. futrell, r. cox, s. n. ruiz, blinded by the light: occupant perceptions and visual comfort assessments of three dynamic daylight control systems and shading strategies, building and environment 154 (2019), pp. 107-121. doi: 10.1016/j.buildenv.2019.02.037
[36] d. h. w. li, a. c. k. cheung, s. k. h. chow, e. w. m. lee, study of daylight data and lighting energy savings for atrium corridors with lighting dimming controls, energy and buildings 72 (2014), pp. 457-464. doi: 10.1016/j.enbuild.2013.12.027
[37] l. t. doulos, a. tsangrassoulis, p. a. kontaxis, a. kontadakis, f. v. topalis, harvesting daylight with led or t5 fluorescent lamps? the role of dimming, energy and buildings 140 (2017), pp. 336-347. doi: 10.1016/j.enbuild.2017.02.013
[38] l. bellia, f. fragliasso, automated daylight-linked control systems performance with illuminance sensors for side-lit offices in the
mediterranean area, automation in construction 100 (2019), pp. 145-162. doi: 10.1016/j.autcon.2018.12.027
[39] cie, spatial distribution of daylight - cie standard general sky, cie standard s011/e, vienna, 2003.
[40] r. perez, r. seals, j. michalsky, all-weather model for sky luminance distribution - preliminary configuration and validation, sol. energy 50 (1993) 3, pp. 235-245. doi: 10.1016/0038-092x(93)90017-i
[41] n. igawa, y. koga, t. matsuzawa, h. nakamura, models of sky radiance distribution and sky luminance distribution, sol. energy 77 (2004) 2, pp. 137-157. doi: 10.1016/j.solener.2004.04.016
[42] iso, spatial distribution of daylight - cie standard general sky, iso standard 15469, geneva, 2004.
[43] p. r. tregenza, subdivision of the sky hemisphere for luminance measurements, light. res. technol. 19 (1987) 1. doi: 10.1177/096032718701900103
[44] s@tel-light. online [accessed 29 november 2022] http://satellight.entpe.fr/
[45] r. perez, p. ineichen, r. seals, j. michalsky, r. stewart, modeling daylight availability and irradiance components from direct and global irradiance, sol. energy 44. doi: 10.1016/0038-092x(90)90055-h
[46] italian health ministerial decree, 5 july 1975.
decay of a roman age pine wood studied by micro magnetic resonance imaging, diffusion nuclear magnetic resonance and portable nuclear magnetic resonance

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 - 10
acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1

valeria stagno1,2, silvia capuani2,3
1 earth sciences department, sapienza university of rome, piazzale aldo moro 5, 00185 rome, italy
2 national research council - institute for complex systems (cnr-isc) c/o physics department, sapienza university of rome, piazzale aldo moro 5, 00185 rome, italy
3 centro fermi - museo storico della fisica e centro studi e ricerche enrico fermi, piazza del viminale 1, rome 00184, italy

section: research paper
keywords: archaeological waterlogged wood; micro-mri; diffusion-nmr; portable nmr
citation: valeria stagno, silvia capuani, decay of a roman age pine wood studied by micro magnetic resonance imaging, diffusion nuclear magnetic resonance and portable nuclear magnetic resonance, acta imeko, vol. 11, no. 1, article 12, march 2022, identifier: imeko-acta-11 (2022)-01-12
section editor: fabio santaniello, university of trento, italy
received march 3, 2021; in final form march 15, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: valeria stagno, e-mail: valeria.stagno@uniroma1.it

abstract

wood is a hygroscopic, biodegradable porous material widely used in the past to create artworks. its total preservation over time is quite rare, and one of the best preservation modalities is waterlogging. observing the anatomy of waterlogged archaeological wood can also be complicated because of its bacterial degradation. however, the characterization of wood morphology and conservation state is a fundamental step before starting any restoration intervention, as it allows information about past climatic conditions and human activities to be extracted. in this work, a micro-invasive approach based on the combined use of high-resolution magnetic resonance imaging (mri) and diffusion nuclear magnetic resonance (nmr) was tested on both a modern and an ancient pine wood sample. furthermore, a completely non-invasive analysis was performed using portable nmr. this multi-analytical nmr approach allowed the effect of decay on the wood microstructure to be highlighted through alterations in the pore size, tortuosity and image contrast of the ancient pine compared to the modern one. this work pointed out the different but complementary multi-parametric information that can be obtained using nmr and tested the potential of high-field mri and low-field portable nmr in the detection of wood diagnostic features.

1. introduction

wood is a porous material with complex morphology. in the past, it has been widely used to produce artworks. for this reason, wood is widespread in the cultural heritage world, and its microstructure has always been studied for species identification, for dendrochronological analyses and for extracting important information about ancient human activities [1]. two principal types of wood can be recognized: softwood and hardwood. softwood has a very homogeneous structure mainly composed of tracheids and fibre-tracheids [2]. the resin canals are one of its main characteristics that allow it to be distinguished from hardwood. in addition, its annual rings are usually well separated by the annual ring limit, and they are made of an earlywood area, with pores characterized by larger lumens and thinner walls, and a latewood area with thicker walls and smaller lumens [2]. however, these two areas are not always well differentiated: some softwoods present an abrupt passage from earlywood to latewood, while others a gradual one [2]. among the many anatomical elements described above, the growth ring is surely one of the most important, because its characterization provides the age of the tree and the climatic conditions in which it has grown [3], [4], as well as being crucial for species identification. because of its biodegradability [4], wood is hardly preserved with no microstructural alterations. waterlogged wood is usually well preserved for the microscopic observation of its annual rings, especially when the wooden object was buried under sea sediments [6]. in fact, thanks to the anaerobic conditions, fungal attacks are more or less excluded [6], and if the object was buried under the seabed the marine borer activity is also limited [7]. in this environment, the main degradation can be attributed to erosion bacteria [8]-[10]. conversely, at a macroscopic level the wood rings are not always visible to the naked eye, because of changes in colour, consistency and superficial morphology of the wood structure [11]. as a consequence, microscopic imaging techniques are always required. moreover, the evaluation of the effect of age is also very important, for example to determine the conservation state of an artwork. knowing the state of wooden remains, such as its degree of decay and its pore size and morphology, is useful for planning the restoration and choosing the restoration materials [11], [12].
in the literature, several works [13]-[22] on the use of nuclear magnetic resonance (nmr) on wooden artworks have proven its utility in microstructural characterization and in the evaluation of conservation state. the works cited above employed invasive or destructive nmr techniques, while some others [22]-[25] pointed out the potential of non-invasive portable nmr. among these, our previous work, stagno et al. [23], showed how 1d and 2d low-field nmr experiments can be used as complementary techniques to study water compartmentalization in archaeological waterlogged wood, with the support of optical microscopy and magnetic resonance images. furthermore, we suggested the ability of low-field nmr to detect cell wall decay and paramagnetic impurities. with respect to our previous work [23], in this study we investigated a decayed pine wood of the roman age by using mr images with higher resolution and by characterizing the wood structure also through the pore size distribution extracted from the relaxation measurements. the results obtained from the archaeological sample were compared with those obtained from its modern counterpart. specifically, the aim of this work was to test the potential of a multi-analytical nmr approach to evaluate archaeological wood decay. we compared micro-invasive magnetic resonance imaging (mri) and diffusion-nmr [26]-[29] with non-invasive portable nmr [30], [31]. the potential of mr images as an alternative to conventional techniques for the characterization of the annual rings, as well as of all the diagnostic features of waterlogged wood, was tested. moreover, high-field diffusion and low-field portable nmr were used to highlight the effect of decay on the wood microstructure, such as variations in the pore size of the ancient pine compared to the modern one.

2. background theory

2.1.
diffusion-nmr and pore size

the molecules constituting a fluid whose absolute temperature is greater than zero kelvin are in constant movement because of their kinetic energy [27]. this process is called self-diffusion. when a fluid totally fills a porous medium, the molecules, moving randomly, undergo continuous deviations of their trajectories due to collisions with the pore walls. diffusion-nmr techniques investigate diffusion dynamics by following the fluid molecules in time. the root mean square distance travelled, 𝓁d (or diffusion length), increases with time as long as no boundaries are encountered, according to the einstein relation

𝓁d = √(2 𝑛 𝐷 𝑡) ,

where 𝑛 = 1, 2, 3 is the space dimension and 𝐷 is the bulk diffusion coefficient, which can be measured by a pulsed field gradient sequence [26]. at a fixed diffusion time 𝑡 = ∆ (where ∆ is the delay between the two gradient pulses) and a fixed gradient pulse duration 𝛿 such that 𝛿 ≪ ∆, it is possible to vary the magnetic-field gradient strength 𝑔, so that the nmr-signal amplitude 𝑆(𝑔) is given by [28], [32]:

𝑆(𝑔) = 𝑆(0) exp[−𝛾² 𝑔² 𝛿² 𝐷 (∆ − 𝛿/3)] , (1)

where 𝛾 is the gyromagnetic ratio of protons, 𝑏 = 𝛾² 𝑔² 𝛿² (∆ − 𝛿/3) is the so-called 𝑏-value, 𝐷 is the diffusion coefficient obtained at a specific diffusion time ∆ and 𝑆(0) is the signal at 𝑔 = 0. since longitudinal relaxation with time constant 𝑇1 occurs during ∆ in a pulsed gradient stimulated echo (pgste) experiment, the maximum ∆ accessible is limited by the relation ∆ < 𝑇1 [33]. geometrical restrictions of the medium, such as the pore walls, lead to a 𝐷(∆) that decreases with time. in a heterogeneous porous system such as wood, it is possible to derive useful information about pore size, pore interconnection and membrane permeability [29] by studying the 𝐷(∆) behaviour. in the long-∆ limit (i.e., ∆ ≫ 𝐿²/𝐷0, where 𝐿 is the mean pore size) and for impermeable walls, the water diffusion coefficient varies according to:

𝐷(∆) = 𝐿²/(2 ∆) .
(2)

equation (2) indicates that 𝐿 can be obtained from the slope of 𝐷 vs ∆⁻¹. however, deviations from equation (2) can occur for semi-permeable walls [34], [35]. for materials with interconnected pores, at very long times 𝐷(∆) approaches an asymptotic value 𝐷∞ that is independent of the diffusion time ∆ and directly related to the tortuosity 𝜏 of the porous material:

𝜏 = 𝐷0/𝐷∞ . (3)

tortuosity is an intrinsic property of a porous medium, usually defined as the ratio of the actual flow path length to the straight distance between the ends of the flow path; 𝜏 therefore reflects the connectivity degree of the porous network [36]-[38].

2.2. transversal relaxation and pore size

the transversal or spin-spin relaxation time 𝑇2 is due to the loss of coherence of the spins among themselves. when an ensemble of spins is considered, their magnetic fields interact (spin-spin interaction), slightly modifying their precession rates. these interactions are temporary and random; thus, spin-spin relaxation causes a cumulative loss of phase resulting in transverse magnetization decay [26]. in a porous medium, the 𝑇2 relaxation of the fluid (e.g., water) is given by the sum of different contributions [39]-[43]:

1/𝑇2 = 1/𝑇2b + 1/𝑇2s + 𝐷 (𝛾 𝐺 𝑇e)²/12 , (4)

where 1/𝑇2 is the total spin-spin relaxation rate, 1/𝑇2b is the relaxation rate of bulk water and 1/𝑇2s is the relaxation rate of water on the pore surface. the term 𝐷 (𝛾 𝐺 𝑇e)²/12, named the "diffusion relaxation" term, is related to the dephasing caused by the presence of a magnetic field gradient 𝐺; 𝐷 is the diffusion coefficient of water in the porous system, 𝛾 is the hydrogen gyromagnetic ratio and 𝑇e is the echo time. when 𝑇2 is measured by a carr-purcell-meiboom-gill (cpmg) sequence, the contribution of diffusion relaxation is averaged out by means of an echo train if a very short 𝑇e is selected [44], [45].
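equations (1)-(3) translate directly into a small numerical sketch; the gradient strengths, timings and the synthetic data below are illustrative choices, not the acquisition parameters of this study.

```python
import math

GAMMA_H = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(g: float, delta: float, big_delta: float) -> float:
    """b = gamma^2 g^2 delta^2 (Delta - delta/3), in s/m^2 (equation (1))."""
    return (GAMMA_H * g * delta) ** 2 * (big_delta - delta / 3.0)

def pgste_signal(s0: float, g: float, delta: float, big_delta: float, d: float) -> float:
    """signal attenuation of equation (1) for diffusion coefficient d."""
    return s0 * math.exp(-b_value(g, delta, big_delta) * d)

def pore_size_from_slope(d_values, delta_values):
    """equation (2): least-squares fit of D = slope / Delta through the
    origin, then L = sqrt(2 * slope)."""
    x = [1.0 / dt for dt in delta_values]
    slope = sum(xi * di for xi, di in zip(x, d_values)) / sum(xi * xi for xi in x)
    return math.sqrt(2.0 * slope)

def tortuosity(d0: float, d_inf: float) -> float:
    """equation (3): tau = D0 / D_inf."""
    return d0 / d_inf

# synthetic check: restricted-diffusion data generated with L = 20 um
# should give back L itself
deltas = [0.04, 0.08, 0.12, 0.16, 0.2]
d_vals = [(20e-6) ** 2 / (2.0 * dt) for dt in deltas]
print(pore_size_from_slope(d_vals, deltas))  # ~2e-05 m

# free water (D = 2.3e-9 m^2/s) at g = 0.1 T/m, delta = 3 ms,
# Delta = 40 ms is attenuated to roughly 56 % of S(0)
print(pgste_signal(1.0, 0.1, 3e-3, 40e-3, 2.3e-9))
```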
in the presence of a porous system, the dominant relaxation rate is 1/𝑇2s, while the term 1/𝑇2b can be neglected because of the low efficiency of bulk water relaxation [46]. equation (4) then becomes:

1/𝑇2 ≈ 1/𝑇2s = 𝜌 𝑆/𝑉 , (5)

where 𝜌 is the surface relaxivity [46] and 𝑆/𝑉 is the surface-to-volume ratio of the pore. therefore, by measuring the spin-spin relaxation in a porous medium, the pore diameter 𝑑 can be calculated from [46]-[48]:

1/𝑇2 = 𝜌 (2 𝑛)/𝑑 , (6)

where 𝑛 is a shape factor (for our samples a spherical shape was considered, 𝑛 = 3).

3. materials and methods

3.1. samples

two cylinder-like wood samples, one of water-soaked modern wood and one of archaeological waterlogged wood, were studied. their size was less than 15 mm in length and 8 mm in diameter, suitable for a 10 mm nmr tube. the archaeological sample was detached from an ancient pole of the roman harbour of naples, dated to the v century ad [49], [50]. it was well preserved in waterlogged conditions and for this reason it was always kept in water during the analysis. the modern wood, instead, had previously been maintained at environmental conditions of 20 °c and 50 % relative humidity. in order to perform the nmr acquisitions, the modern sample was imbibed with distilled water until saturation was reached. the species of both the modern and the archaeological wood was stone pine (pinus pinea l.) [51], [52].

3.2. high-field nmr acquisitions

all the high-field nmr analyses were performed using a 400 mhz bruker avance spectrometer with a 9.4 t static magnetic field and a micro-imaging unit equipped with high-performance, high-strength magnetic field gradients for mri and diffusion measurements. the maximum gradient strength was 1240 mt/m and the rise time was 100 µs.
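equation (6) of the previous section can be inverted to convert a measured 𝑇2 into a pore diameter, d = 2 n ρ T2; the surface relaxivity value used below is purely illustrative, not the value estimated in this study.

```python
def pore_diameter(t2_s: float, rho_m_per_s: float, n: int = 3) -> float:
    """invert equation (6): 1/T2 = rho * 2n / d  ->  d = 2 * n * rho * T2.
    n = 3 corresponds to the spherical pore shape assumed in the paper."""
    return 2.0 * n * rho_m_per_s * t2_s

# illustrative only: rho = 10 um/s and T2 = 5 ms give d ~ 0.3 um
print(pore_diameter(5e-3, 10e-6))
```

applied point-by-point to a 𝑇2 distribution, this mapping yields the pore size distribution used later in the paper.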
to measure the diffusion coefficient 𝐷 and the longitudinal relaxation time 𝑇1 (the latter useful to choose the longest observation time ∆ accessible during the diffusion experiments, according to the ∆ < 𝑇1 relation), the soaked samples were inserted without additional water into the nmr tube, which was sealed on the top with parafilm in order to prevent sample dehydration. the longitudinal relaxation time was measured with a saturation-recovery (sr) sequence with 128 points from 10 µs to 10 s, a repetition time 𝑇r of 10 s and a number of scans 𝑁s equal to 4. the acquisition time was 1 hour and 40 minutes for each sample. for the measurement of the water diffusion coefficient, a pgste sequence [28], [29] was used. the diffusion was evaluated along the x axis (i.e., perpendicular to the main direction of the wood grain), corresponding to the radial direction. the pgste signal was obtained using 𝑇r = 5 s, echo time 𝑇e = 1.9 ms, pulse gradient duration 𝛿 = 3 ms, 32 steps of the gradient strength 𝑔, from 26 to 1210 mt/m, for each diffusion time ∆, and 𝑁s = 16. the ∆ values used were 0.04, 0.08, 0.12, 0.16, 0.2, 0.3 and 0.4 s. the 𝑏-value spanned from a minimum of 1.6 × 10⁷ s/m² to a maximum of 9.5 × 10¹¹ s/m². the acquisition time was about 6 hours for each sample. for the acquisition of mr images, the samples were inserted with distilled water into the nmr tube, sealed with parafilm in order to prevent water evaporation. in this way, 𝑇2∗-weighted images were acquired with a gradient echo fast imaging (gefi) sequence [26] in the transversal, tangential and radial directions. the images are weighted on the 𝑇2∗ parameter, which depends on both 𝑇2 and magnetic field inhomogeneities. the optimized parameters used in the gefi sequence are reported in table 1, where stk is the slice thickness, fov the field of view, mtx the image matrix and r the in-plane resolution.

3.3.
low-field portable nmr acquisitions for the low-field nmr acquisitions the samples were just placed on the surface of the sensitive area of the portable spectrometer. a bruker minispec mq-profiler with a single-sided magnet that generates a static magnetic field of 0.35 t with a 1h resonance frequency of 15 mhz and dead time of 2 µs was used. the single-sided nmr was equipped with a rf coil for performing experiments within 2 mm starting from the sample surface to the inside of the sample [53]. the portable spectrometer has a constant magnetic gradient field with strength of about 3 t/m. the transversal relaxation time 𝑇2 was measured with a cpmg sequence with 𝑇e = 42 µs, 6500 echo times, 𝑇r = 1 s, 𝑁s = 128 and acquisition time of about 5 minutes. 3.4. data processing the longitudinal relaxation time was obtained by plotting the signal intensities 𝑆 as a function of 𝑡 = 𝑆r delays for fitting the following equation: 𝑆(𝑡) = 𝑀 [1 − e − 𝑡 𝑇1 ] (7) to data, where 𝑇1 is the longitudinal relaxation time and 𝑀 is the associated equilibrium magnetization. the diffusion coefficient values were obtained by fitting the following equation: 𝑆(𝑏) = 𝑀1 e −𝐷1𝑏 + 𝑀2 e −𝐷2𝑏 (8) to data acquired at different 𝑏-values, where 𝑆(𝑏) is the nmr signal as function of the 𝑏-value, 𝐷1 and 𝐷2 are the two components of the diffusion coefficient associated with the magnetizations 𝑀1 and 𝑀2, respectively. the fit goodness was evaluated by the 𝑅2̅̅̅̅ (i.e., the 𝑅2 corrected for the number of the regressors). fits of equation (7) and equation (8) were performed by using the originpro 8.5 software. plots of the diffusion coefficient 𝐷 vs. the diffusion time ∆ were performed with matlabr2019b. from the 𝐷 vs. ∆ trend, the first and last points, corresponding to the free water diffusion and the diffusion through semi-permeable membranes (i.e. the wood cell walls), were removed and the pores radius 𝐿 was table 1. acquisition parameters of t2*-weighted images. 
Parameter | Modern pine | Archaeological pine
TE/TR (ms) | 3/1200 | 5/1500
Number of slices | 3 | 3
NS | 128 | 128
STK (µm) | 200 | 300
FOV (cm²) | 0.9 × 0.9 / 1.4 × 1.4 | 0.9 × 0.9
MTX (pixels) | 512 × 512 | 512 × 512
R (µm²) | 18 × 18 / 27 × 27 | 18 × 18

Acta IMEKO | www.imeko.org | March 2022 | Volume 11 | Number 1 | 4

estimated by the linear fit of equation (2) [54]. From the pore radius 𝐿, the pore diameter 𝑑 was then calculated. In addition, the value for ∆ = ∞ of the normalized diffusion coefficient 𝐷(∆)/𝐷0, where 𝐷0 is the free water diffusion coefficient, equal to 2.3 × 10⁻⁹ m²/s, was calculated and used to evaluate the tortuosity according to equation (3) [54]. Finally, the inverse Laplace transform [55] in MATLAB R2019b was used to obtain the low-field 𝑇2 distribution and the pore diameter distribution according to equation (6). The surface relaxivity 𝜌 had previously been estimated for the modern and the ancient pine from the slope of the line given by the 𝑇2 time vs. the pore size 𝛼, where 𝛼 was calculated from the diffusion measurements through the relation 𝛼 = 2√(𝐷∆) [46], [47].

4. Results

𝑇2*-weighted images of the transversal section of modern and archaeological stone pine are displayed in Figure 1a) and Figure 1b), respectively. Here, all the anatomical elements observable with conventional optical microscopy [51], [52] are detectable. First, the annual ring limit (white arrows) is well visible in both Figure 1a) and Figure 1b), with two separate areas of different contrast. In both modern and archaeological pine, the darker area corresponds to structures with low 𝑇2* values, while the brighter ones correspond to structures with high 𝑇2* values [56]. These structures correspond to tracheids, which are considered the predominant constituents of all softwoods [57]. The dark area is the latewood (light blue circles), while the bright area is the earlywood (green circles). Resin canals (red circles), likely with the presence of resin, can also be observed.
Moreover, rays (pink arrows) can be seen in both samples, but in Figure 1b) the archaeological pine shows black spots and artefacts (yellow circles) located along the rays and on the edge of the sample. These artefacts are typical of MR images of waterlogged wood [13], owing to the deposition of impurities, i.e., bacterial erosion products and seabed sediments, in the wood microstructure during the burial period. These paramagnetic impurities provide a black contrast in 𝑇2*-weighted images [58], revealing the distribution of the degradation zones, since they are stored in the decayed structures of the wood [13]. Figure 2a) and Figure 2b) display the radial section of modern and archaeological pine, respectively. In both images, the observable anatomical element is the annual ring limit (white arrows), and Figure 2b) also shows the above-mentioned distribution of paramagnetic inclusions (yellow circles). The tangential section of the modern and archaeological samples is shown in Figure 3a) and Figure 3b), respectively. The red circles highlight the tangential resin canals and the pink arrows the medullary rays. Again, in Figure 3b) the archaeological pine shows decayed zones (yellow circles) corresponding to black spots and artefacts produced by the presence of paramagnetic agents. Table 2 reports the relaxation time 𝑇1 measured at high magnetic field and obtained through equation (7). Both modern and archaeological pine show a 𝑇1 around 500 ms. This limited the observation time ∆ of the diffusion measurements, whose maximum value was set to 400 ms. Figure 4a) and Figure 4b) display the first (𝐷x1) and second (𝐷x2) diffusion components as a function of the diffusion time ∆, obtained through equation (8). Table 3 reports the pore diameters and the tortuosity calculated from the high-field measurements through equation (2) and equation (3), respectively.
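As an illustration of how the 𝐷 vs. ∆ trend is turned into the quantities just reported: the restricted diffusion coefficient is extrapolated to ∆ → ∞ by a linear fit of 𝐷 against ∆⁻¹, and the tortuosity is taken as 𝜏 = 𝐷0/𝐷(∞), consistent with equation (3); the slope of the same fit carries the pore-size information used in equation (2), which is not reproduced here. The 𝐷 values below are invented placeholders, not the measured data.

```python
import math

D0 = 2.3e-9  # free water diffusion coefficient, m^2/s

# Illustrative (invented) restricted diffusion coefficients D(Delta) after
# discarding the free-diffusion and membrane-dominated points (Section 3.4).
Delta = [0.08, 0.12, 0.16, 0.20, 0.30]                  # diffusion times, s
D     = [6.1e-10, 5.2e-10, 4.7e-10, 4.4e-10, 4.0e-10]   # m^2/s

# Linear least-squares fit of D against 1/Delta; the intercept is D(inf).
x = [1.0 / d for d in Delta]
n = len(x)
mx, my = sum(x) / n, sum(D) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, D))
         / sum((xi - mx) ** 2 for xi in x))
D_inf = my - slope * mx

tau = D0 / D_inf               # tortuosity, equation (3)
l_d = math.sqrt(2 * D0 * 0.4)  # 1D free-diffusion length at the longest Delta (~43 um)

print(f"D(inf) = {D_inf:.2e} m^2/s, tau = {tau:.1f}, l_d = {l_d * 1e6:.0f} um")
```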
In Figure 5a) the 𝑇2 time distribution obtained by portable NMR is shown both for the modern (dashed line) and the ancient (solid line) pine. The pore size distribution calculated through equation (6) from the above-mentioned low-field 𝑇2 distribution is presented in Figure 5b) for the modern pine (dashed line) and the archaeological pine (solid line).

Figure 1. 𝑇2*-weighted MR images of the transversal section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both samples, resin canals (red circles), earlywood (green circles), latewood (blue circles), annual ring limits (white arrows) and rays (pink arrows) are visible. The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).

5. Discussion

Compared to the conventional methods used to observe wood anatomy, i.e., optical microscopy or scanning electron microscopy, our MR images allowed us to recognize some important diagnostic characters of the wood, especially in the transversal section (Figure 1). Conversely, fewer characters were observed in the tangential (Figure 3) and radial (Figure 2) sections due to the current limitation on the resolution of MRI, whose maximum value is around 10 µm. Indeed, when the species of a softwood has to be recognized, the radial section is of fundamental importance because of its diagnostic features, such as cross-field pits and rays, which allow discrimination among quite similar softwood structures (e.g., pine and spruce).

Figure 2. 𝑇2*-weighted MR images of the radial section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both samples the annual ring limit (white arrows) is observable. The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).

Figure 3. 𝑇2*-weighted MR images of the tangential section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both, tangential resin canals (red circles) and rays (pink arrows) are observable. The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).

However, the resolution of MRI is not a physical limit, but depends on the characteristics of the instrumentation used. In this work, the MR images shown in Figure 1, Figure 2 and Figure 3 aim at investigating the decay effect, and their specific contrast can be used to reconstruct the decay distribution and to detect the presence of paramagnetic impurities. The whole sample volume can be imaged with MRI, whereas optical and scanning electron microscopy only provide images of a small portion of the wood sample. Moreover, the mechanical preparation of the sample required by optical and scanning electron microscopy leads to its destruction, in contrast with the virtual sectioning operated by MRI. The longitudinal relaxation time 𝑇1 could provide information about the environment surrounding the water molecules and about the sample structure and composition [26]. However, in a porous medium 𝑇1 can be influenced by paramagnetic ions [59], [60], which usually cause its reduction, as found in Stagno et al. [23]. In our ancient wood, many paramagnetic inclusions were detected as artefacts in the MR images [13], and the 𝑇1 values, displayed in Table 2, seem to be exchange-averaged [59]; therefore they do not carry information about the different structural compartments as detailed as that provided by 𝑇2. For this reason, the longitudinal relaxation times of our modern and ancient pine cannot be used to describe structural changes between them. Nevertheless, the measurement of 𝑇1 is useful for setting the diffusion observation time.
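The saturation-recovery analysis of equation (7) can be sketched on synthetic data as follows. The actual fits were performed in OriginPro 8.5; here a simple log-linearisation (valid for noiseless, fully recovered data) stands in for a nonlinear least-squares routine, and all numbers are illustrative.

```python
import math

T1_TRUE, M_TRUE = 0.52, 100.0  # illustrative values (s, a.u.), close to Table 2

# Synthetic saturation-recovery curve S(t) = M * (1 - exp(-t/T1)), eq. (7)
delays = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 10.0]  # SR delays, s
S = [M_TRUE * (1.0 - math.exp(-t / T1_TRUE)) for t in delays]

# With full recovery at the longest delay, M ~ S(10 s); then
# ln(1 - S/M) = -t/T1 is linear in t and its slope gives T1.
M_hat = S[-1]
pts = [(t, math.log(1.0 - s / M_hat))
       for t, s in zip(delays, S) if s / M_hat < 0.999]
n = len(pts)
mt = sum(t for t, _ in pts) / n
ml = sum(l for _, l in pts) / n
slope = (sum((t - mt) * (l - ml) for t, l in pts)
         / sum((t - mt) ** 2 for t, _ in pts))
T1_fit = -1.0 / slope

print(f"T1 = {T1_fit * 1e3:.0f} ms")  # -> T1 = 520 ms, the value used to generate the data
```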
From the comparison of the transversal, radial and tangential sections of the modern (Figure 1a, Figure 2a and Figure 3a) and the ancient (Figure 1b, Figure 2b and Figure 3b) pine, it is possible to observe morphological differences. First of all, the black spots (see Section 4) can be associated with a degradation process operated by microorganisms, with the inclusion of paramagnetic impurities (i.e., bacterial erosion products and burial sediments) into the wood structure. These black zones are not present in the modern wood, and they are mostly located along the rays and on the edge of the sample (Figure 1b, Figure 2b and Figure 3b, yellow circles). This indicates a strong degradation in these zones. The image contrast shows that both woods have well-delimited growth rings. While the modern wood does not show particular changes in the annual ring thickness, in Figure 1b) the ancient pine has some thinner annual rings with no latewood. We can hypothesize that this is due to a climatic change during the tree growth (before the 5th century AD). In fact, thinner rings are usually attributed to periods of low rainfall [4], [11]. However, the ring thickness can also be influenced by other factors, for example the tree age. Therefore, more than one sample should be analysed to confirm our hypothesis. The plots of the diffusion coefficient vs. the diffusion time in Figure 4a) and 4b) show that both wood samples have two main diffusion compartments, but that diffusion in the modern pine is slower than in the ancient pine. The difference is around one order of magnitude for both compartments 𝐷x1 and 𝐷x2. This can be explained as a consequence of the degradation process that occurred in the ancient pine. In fact, the decay of the cellular structure and of the wood polymers may have produced a thinning of the cell walls and an enlargement of the lumens in the archaeological pine.
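Since equation (8) is linear in the amplitudes, a two-compartment fit of the kind behind these 𝐷x1 and 𝐷x2 values can be sketched by variable projection: grid-search the two diffusion coefficients on a log scale and, at each candidate pair, solve a 2×2 least-squares problem for 𝑀1 and 𝑀2. This is only an illustrative substitute for the OriginPro fit actually used, and all numerical values below are invented.

```python
import math

# Synthetic PGSTE decay following eq. (8): S(b) = M1*exp(-D1*b) + M2*exp(-D2*b).
# D1, D2, M1, M2 are invented illustrative values, not the measured ones.
D1_TRUE, D2_TRUE, M1_TRUE, M2_TRUE = 6.0e-10, 5.0e-11, 70.0, 30.0
b = [1.6e7 * (9.5e11 / 1.6e7) ** (i / 31) for i in range(32)]  # b-values, s/m^2
S = [M1_TRUE * math.exp(-D1_TRUE * bi) + M2_TRUE * math.exp(-D2_TRUE * bi)
     for bi in b]

def fit_biexponential(b, S, n_grid=60):
    """Grid-search the two diffusion coefficients on a log scale; for each
    candidate pair solve the 2x2 linear least-squares problem for the
    amplitudes (variable projection), and keep the lowest-residual pair."""
    grid = [10 ** (-12 + 4 * i / (n_grid - 1)) for i in range(n_grid)]  # 1e-12..1e-8
    cols = [[math.exp(-Dj * bi) for bi in b] for Dj in grid]
    best = None
    for ia in range(n_grid):
        for ib in range(ia):  # grid[ib] < grid[ia], so D1 > D2 by construction
            e1, e2 = cols[ia], cols[ib]
            a11 = sum(x * x for x in e1)
            a12 = sum(x * y for x, y in zip(e1, e2))
            a22 = sum(y * y for y in e2)
            r1 = sum(x * s for x, s in zip(e1, S))
            r2 = sum(y * s for y, s in zip(e2, S))
            det = a11 * a22 - a12 * a12
            if det <= 0:
                continue
            m1 = (a22 * r1 - a12 * r2) / det
            m2 = (a11 * r2 - a12 * r1) / det
            res = sum((s - m1 * x - m2 * y) ** 2
                      for s, x, y in zip(S, e1, e2))
            if best is None or res < best[0]:
                best = (res, grid[ia], grid[ib], m1, m2)
    return best[1:]

D1, D2, M1, M2 = fit_biexponential(b, S)
print(f"D1 = {D1:.1e} m^2/s, D2 = {D2:.1e} m^2/s")
```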
From the point of view of a porous system, the ancient pine is characterized by larger pores than the modern pine. The existence of two different diffusion compartments means that both woods have at least two main pore sizes, as shown in Table 3. The two sizes 𝑑1 and 𝑑2 calculated from the high-field NMR diffusion measurements can be identified as the earlywood and latewood tracheid diameters [60], considering that tracheids are the main constituents of softwood. Comparing the calculated diameters of the modern and the ancient pine, although in the earlywood the diameter seems to be similar between the two samples, for the latewood there is an increment of the tracheid size in the archaeological pine. A possible explanation is that the decay is stronger in areas with a high concentration of wood polymers, such as the latewood cell walls. These polymers are predominantly cellulose and hemicellulose which, as pointed out by high-resolution NMR spectra of both modern and ancient wood [19], are more degraded than the usually well-preserved lignin. This means that the greater the thickness of the cell wall, the stronger its deterioration.

Table 2. Longitudinal relaxation time 𝑇1.

Sample | 𝑇1 ± SE (ms)
Modern pine | 541 ± 13
Archaeological pine | 511 ± 19

Figure 4. Plots (a) and (b) show the 𝐷x1 and 𝐷x2 decay as a function of ∆ for modern pine (circle markers) and archaeological pine (triangle markers). Dashed lines are for illustration purposes only.

This is in agreement with the distribution of paramagnetic impurities revealed by the MR images, which follows the distribution of decay, mainly located in rays having thick cell walls. Further information can be deduced from the tortuosity 𝜏 (Table 3). The modern pine shows a higher tortuosity (7.0 ± 1.1) compared to the archaeological pine (3.5 ± 0.6).
However, for the ancient pine the normalized 𝐷 data did not reach a limit value (see Figure 4); therefore, we likely underestimated its tortuosity. Nevertheless, the tortuosity obtained for the modern pine seems to be in agreement with the literature value of around 10 calculated for thermally modified Pinus sylvestris, also considering the effect of the thermal modification and the fact that we used a different species (Pinus pinea) [61]. Moreover, the tortuosity is in good agreement with the diffusion coefficient and diameter results. In fact, the higher the tortuosity, the more convoluted the water routes. This means that the modern wood has a complex structure within which water cannot move easily. Conversely, the ancient pine has lost this complexity because of the degradation of its structure, which has produced new voids and widened the existing pore lumens, making the water motion easier. Specifically, the two different tortuosity values corroborate the pore size results obtained by NMR diffusion. Indeed, the low tortuosity measured for the ancient pine is compatible with its larger and more interconnected pores. Differences in the pore size of the modern and the ancient pine were also found by using the low-field portable NMR, in good agreement with the high-field results. Specifically, Figure 5a) displays the 𝑇2 time distribution obtained with the portable NMR, indicating that the archaeological pine (solid line) has longer 𝑇2 than the modern pine (dashed line). The only exception is the shortest component, around 1 ms, which is longer for the modern wood than for the archaeological one. Since in previous works [62], [63] a component around 1 ms was attributed to bound water in the cell walls, we can suggest that the bound water of the archaeological pine is strongly influenced by paramagnetic impurities, whose accumulation is greater in the more degraded areas, i.e., the cell walls.
This is confirmed by other studies [23], [64], where the 𝑇2 component around 1 ms ascribable to bound water in the cell walls was not detected because of the strong degradation state of the walls. Since 𝑇2 is proportional to the degree of decay, we can suggest that the increment of 𝑇2 for the ancient pine is a consequence of decay. This also means that water in the archaeological pine is located in larger structures and that the water content has risen, as shown by the probability associated with 𝑇2 (Figure 5a). The pore diameters calculated from high-field diffusion NMR (Table 3) can be compared with the pore diameters obtained from the low-field 𝑇2 distribution (Figure 5b). However, compared to high-field diffusion, the low-field relaxation measurements provide a global pore distribution of the sample, while the intrinsic resolution of diffusion NMR is limited by 𝑇1. In our case, the maximum accessible distance (ℓd) is 43 µm [65], at ∆ = 400 ms and 𝐷 = 2.3 × 10⁻⁹ m²/s (the free water diffusion coefficient). Nevertheless, we think that in the case of the archaeological pine the 𝐷 vs. ∆⁻¹ behaviour is still affected by the presence of free-like water due to the existence of very large pores, which are not detectable with diffusion NMR techniques [54]. This explains why the archaeological pine shows a pore distribution around 70 µm (Figure 5b) that was not detected by NMR diffusion. The pore size distribution displayed in Figure 5b) confirms the enlargement of the pores in the archaeological pine, as predicted by the diffusion and 𝑇2 measurements. However, the peak around 70 µm is quite broad, indicating a continuous distribution with diameters from about 55 µm to 90 µm. The most intense peak for the ancient pine is around 17 µm. It should be noted that both the diameters obtained from diffusion (Table 3) and associated with earlywood and latewood are included in the peak around 17 µm.
This indicates a continuous distribution between earlywood and latewood tracheids, likely with water exchange. For the modern wood, the two predominant peaks, as well as their probabilities in Figure 5b), are in good agreement with the diameters calculated from the diffusion analyses, with the MR images and with the literature values [61], [66], in which the mean lumen diameter was found to be around 15 µm – 20 µm. The existence of two separate peaks indicates distinct spin populations for earlywood and latewood.

Table 3. Pore diameters, associated magnetizations and tortuosity obtained by NMR diffusion.

Sample | 𝑑1 (µm) | 𝑑2 (µm) | 𝜏
Modern pine | 16.0 ± 1.0 | 5.4 ± 0.6 | 7.0 ± 1.1
Archaeological pine | 18.4 ± 3.5 | 13.6 ± 1.2 | 3.5 ± 0.6

Figure 5. 𝑇2 relaxation time distribution (a) and pore diameter distribution (b) for the modern pine (dashed line) and the ancient pine (solid line), obtained by using portable low-field NMR.

Smaller pore sizes can be associated with parenchyma cells and voids in the cell walls. Also in this case, the values obtained through equation (6) and displayed in Figure 5b) confirm that the ancient pine has larger pores (from 0.1 µm to 3 µm) than the modern pine (from 0.1 µm to 1 µm). In particular, the increment of the size of the cell wall voids reveals the degradation of the polymers constituting the cell wall itself.

6. Conclusion

This work suggests that by using high-field micro-MRI and high-field diffusion NMR it is possible to obtain information about archaeological wood decay. Information related to changes in the mean pore size caused by wood decay can also be obtained by low-field NMR relaxometry. The use of different NMR techniques and instrumentations provided the same result regarding the increment of the mean pore size of the archaeological pine wood.
By comparing the modern and the ancient pine samples, the effect of the degradation process on the wood microstructure can be observed through the contrast of the MR images and quantified through the diffusion coefficient of water, the tortuosity and the pore size. High-field NMR showed that the decay in pine wood mostly occurred in areas with a high concentration of polymers, such as rays and latewood cell walls, with the enlargement of the pore lumens and the loss of structural complexity of the wood. MRI can also reveal morphological aspects of wood that are not observable with the naked eye, such as the annual rings, which can inform about past climate changes. In the determination of the pore size obtained with the portable low-field NMR, the method based on the 𝑇2 distribution seems to be superior to the high-field diffusion method, whose resolution is limited by 𝑇1. In conclusion, high-field techniques require sampling; however, compared to other conventional analyses, they are non-destructive towards the sample, which can be relocated in its original position in the artwork. The low-field portable NMR, instead, is mobile and suitable for in-situ analysis on samples of any size, thus being non-invasive and non-destructive. Compared to high-field NMR, it is also low cost, with shorter acquisition and processing times. We suggest that single-sided portable NMR is a powerful technique for revealing the porosity changes of the entire wood structure produced by decay.

Acknowledgement

The authors would like to thank the Istituto Centrale per il Restauro (ICR) of Rome (Italy) for providing the archaeological wood sample. We acknowledge funding from Regione Lazio under the ADAMO project No. B86C18001220002 of the Centre of Excellence at the Technological District for Cultural Heritage of Lazio (DTC).
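The inverse Laplace transform used in Section 3.4 to obtain the low-field 𝑇2 distribution can be illustrated with a minimal one-dimensional inversion: the Laplace kernel is discretized on a log-spaced 𝑇2 grid and the resulting non-negative least-squares problem is solved by cyclic coordinate descent with a small ridge penalty. This is a toy stand-in for the algorithm of Venkataramanan et al. [55], with invented parameters.

```python
import math

# Invert a synthetic CPMG echo train s(t) = sum_j f_j * exp(-t/T2_j)
# for the non-negative amplitudes f_j (a discretized inverse Laplace
# transform) by cyclic coordinate descent with a small ridge penalty.
TE = 1e-3                                                 # echo spacing, s
t = [TE * (k + 1) for k in range(120)]                    # echo times
T2_grid = [1e-3 * 10 ** (3 * j / 15) for j in range(16)]  # 1 ms .. 1000 ms
s = [math.exp(-ti / 0.010) for ti in t]                   # synthetic decay, T2 = 10 ms

cols = [[math.exp(-ti / T2j) for ti in t] for T2j in T2_grid]  # kernel columns
lam = 1e-4                                                # ridge regularization
f = [0.0] * len(T2_grid)                                  # amplitude distribution
pred = [0.0] * len(t)                                     # current model prediction

for _ in range(300):
    for j, Kj in enumerate(cols):
        # exact minimization over f_j >= 0 with the other amplitudes fixed
        num = sum(kj * (si - pi + kj * f[j]) for kj, si, pi in zip(Kj, s, pred))
        new = max(0.0, num / (sum(kj * kj for kj in Kj) + lam))
        delta = new - f[j]
        if delta:
            pred = [pi + kj * delta for pi, kj in zip(pred, Kj)]
            f[j] = new

peak = T2_grid[max(range(len(f)), key=f.__getitem__)]
print(f"peak of the T2 distribution: {peak * 1e3:.0f} ms")
```

With a noiseless monoexponential input, the recovered distribution concentrates near the true 𝑇2; real multi-exponential data additionally require careful choice of the regularization weight.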
References

[1] English Heritage, Waterlogged Wood: Guidelines on the Recording, Sampling, Conservation and Curation of Waterlogged Wood, English Heritage Publishing, 2010, product code 51578.
[2] IAWA Committee, IAWA list of microscopic features for softwood identification, IAWA J. 25 (2004) pp. 1-70. DOI: 10.1163/22941932-90000349
[3] Climate Data Information. Online [Accessed 15 March 2022] http://www.climatedata.info/proxies/tree-rings
[4] D. Castagneri, G. Battipaglia, G. von Arx, A. Pacheco, M. Carrer, Tree-ring anatomy and carbon isotope ratio show both direct and legacy effects of climate on bimodal xylem formation in Pinus pinea, Tree Physiol. 00 (2018) pp. 1-12. DOI: 10.1093/treephys/tpy036
[5] N. Macchioni, Wood: conservation and preservation, in: C. Smith (ed.), Encyclopedia of Global Archaeology, Springer New York, 2014, ISBN 978-1-4419-0426-3. DOI: 10.1007/978-1-4419-0465-2_480
[6] D. H. Jennings, G. Lysek, Fungal Biology: Understanding the Fungal Lifestyle, BIOS Scientific Publishers, Guildford, 1996, ISBN 978-1859961087.
[7] M. A. Jones, M. H. Rule, Preserving the wreck of the Mary Rose, in: P. Hoffman (ed.), Proc. of the 4th ICOM-Group on Wet Organic Archaeological Materials Conference, Bremerhaven, 1991, pp. 25-48.
[8] R. A. Eaton, M. D. Hale, Wood: Decay, Pests and Protection, Chapman and Hall Ltd, London, 1993, ISBN 0412531208.
[9] R. A. Blanchette, A review of microbial deterioration found in archaeological wood from different environments, Int. Biodeterior. Biodegradation 46 (2000) pp. 189-204. DOI: 10.1016/s0964-8305(00)00077-9
[10] N. B. Pedersen, C. G. Björdal, P. Jensen, C. Felby, Bacterial degradation of archaeological wood in anoxic waterlogged environments, in: S. E. Harding (ed.), Stability of Complex Carbohydrate Structures: Biofuel, Foods, Vaccines and Shipwrecks, The Royal Society of Chemistry, Cambridge, 2013, ISBN 978-1-84973-563-6, pp. 160-187.
[11] D. M.
Pearsall, Paleoethnobotany: A Handbook of Procedures, 3rd edition, Routledge, 2015, ISBN 9781611322996.
[12] T. Nilsson, R. Rowell, Historical wood – structure and properties, J. Cult. Herit. 13 (2012) pp. S5-S9. DOI: 10.1016/j.culher.2012.03.016
[13] D. J. Cole-Hamilton, B. Kaye, J. A. Chudek, G. Hunter, Nuclear magnetic resonance imaging of waterlogged wood, Stud. Conserv. 40 (1995) pp. 41-50. DOI: 10.2307/1506610
[14] S. Maunu, NMR studies of wood and wood products, Prog. Nucl. Magn. Reson. Spectrosc. 40 (2002) pp. 151-174. DOI: 10.1016/s0079-6565(01)00041-3
[15] M. Bardet, A. Pournou, NMR studies of fossilized wood, Annu. Rep. NMR Spectrosc. (2017) pp. 41-83. DOI: 10.1016/bs.arnmr.2016.07.002
[16] A. Salanti, L. Zoia, E. L. Tolppa, G. Giachi, M. Orlandi, Characterization of waterlogged wood by NMR and GPC techniques, Microchem. J. 95 (2010) pp. 345-352. DOI: 10.1016/j.microc.2010.02.009
[17] A. Maccotta, P. Fantazzini, C. Garavaglia, I. D. Donato, P. Perzia, M. Brai, F. Morreale, Preliminary 1H NMR study on archaeological waterlogged wood, Ann. Chim. 95 (2005) pp. 117-124. DOI: 10.1002/adic.200590013
[18] J. Kowalczuk, A. Rachocki, M. Broda, B. Mazela, G. A. Ormondroyd, J. Tritt-Goc, Conservation process of archaeological waterlogged wood studied by spectroscopy and gradient NMR methods, Wood Sci. Technol. 53 (2019) pp. 1207-1222. DOI: 10.1007/s00226-019-01129-5
[19] M. Alesiani, F. Proietti, S. Capuani, M. Paci, M. Fioravanti, B. Maraviglia, 13C CPMAS NMR spectroscopic analysis applied to wood characterization, Appl. Magn. Reson. 29 (2005) pp. 177-184. DOI: 10.1007/bf03167005
[20] L. Rostom, D. Courtier-Murias, S. Rodts, S. Care, Investigation of the effect of aging on wood hygroscopicity by 2D 1H NMR relaxometry, Holzforschung 74 (2019) pp. 400-411. DOI: 10.1515/hf-2019-0052
[21] S. Viel, D. Capitani, N. Proietti, F. Ziarelli, A. L.
Segre, NMR spectroscopy applied to the cultural heritage: a preliminary study on ancient wood characterization, Appl. Phys. A 79 (2004) pp. 357-361. DOI: 10.1007/s00339-004-2535-z
[22] V. Di Tullio, D. Capitani, A. Atrei, F. Benetti, G. Perra, F. Presciutti, N. Marchettini, Advanced NMR methodologies and micro-analytical techniques to investigate the stratigraphy and materials of 14th century Sienese wooden paintings, Microchem. J. 125 (2016) pp. 208-218. DOI: 10.1016/j.microc.2015.11.036
[23] V. Stagno, S. Mailhiot, S. Capuani, G. Galotta, V.-V. Telkki, Testing 1D and 2D single-sided NMR on Roman age waterlogged woods, J. Cult. Herit. 50 (2021) pp. 95-105. DOI: 10.1016/j.culher.2021.06.001
[24] D. Capitani, V. Di Tullio, N. Proietti, Nuclear magnetic resonance to characterize and monitor cultural heritage, Prog. Nucl. Magn. Reson. Spectrosc. 64 (2012) pp. 29-69. DOI: 10.1016/j.pnmrs.2011.11.001
[25] C. Rehorn, B. Blümich, Cultural heritage studies with mobile NMR, Angew. Chem. Int. Ed. 57 (2018) pp. 7304-7312. DOI: 10.1002/anie.201713009
[26] P. T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Oxford University Press Inc, New York, 1991, ISBN 0-19-853944-4.
[27] W. S. Price, NMR Studies of Translational Motion: Principles and Applications, Cambridge University Press, Cambridge, 2009, ISBN 978-0-521-80696-1.
[28] E. O. Stejskal, J. E. Tanner, Spin diffusion measurements: spin echoes in the presence of a time dependent field gradient, J. Chem. Phys. 42 (1965) pp. 288-292. DOI: 10.1063/1.1695690
[29] P. N. Sen, Time-dependent diffusion coefficient as a probe of geometry, Concepts Magn. Reson. 23A (2004) pp. 1-21. DOI: 10.1002/cmr.a.20017
[30] N. Proietti, D. Capitani, V. Di Tullio, Applications of nuclear magnetic resonance sensors to cultural heritage, Sensors 14 (2014) pp. 6977-6997. DOI: 10.3390/s140406977
[31] D. Besghini, M. Mauri, R. Simonutti, Time-domain NMR in polymer science: from the laboratory to the industry, Appl. Sci. 9 (2019) pp. 1801. DOI: 10.3390/app9091801
[32] D. G. Norris, The effects of microscopic tissue parameters on the diffusion weighted magnetic resonance imaging experiment, NMR Biomed. 14 (2001) pp. 77-93. DOI: 10.1002/nbm.682
[33] D. A. Faux, P. J. McDonald, Nuclear-magnetic-resonance relaxation rates for fluid confined to closed, channel, or planar pores, Phys. Rev. E 98 (2018) pp. 1-14. DOI: 10.1103/physreve.98.063110
[34] A. V. Anisimov, N. Y. Sorokina, N. R. Dautova, Water diffusion in biological porous systems: a NMR approach, Magn. Reson. Imaging 16 (1998) pp. 565-568. DOI: 10.1016/s0730-725x(98)00053-8
[35] R. Valiullin, V. Skirda, Time dependent self-diffusion coefficient of molecules in porous media, J. Chem. Phys. 114 (2001) pp. 452-458. DOI: 10.1063/1.1328416
[36] F. A. L. Dullien, Porous Media: Fluid Transport and Pore Structure, Academic Press, New York, 1991, ISBN 9780323139335.
[37] P. P. Mitra, P. N. Sen, L. M. Schwartz, P. Le Doussal, Diffusion propagator as a probe of the structure of porous media, Phys. Rev. Lett. 68 (1992) pp. 3555-3558. DOI: 10.1103/physrevlett.68.3555
[38] M. Zecca, S. J. Vogt, P. R. Connolly, E. F. May, M. L. Johns, NMR measurements of tortuosity in partially saturated porous media, Transp. Porous Media 125 (2018) pp. 271-288. DOI: 10.1007/s11242-018-1118-y
[39] K. R. Brownstein, C. E.
Tarr, Spin-lattice relaxation in a system governed by diffusion, J. Magn. Reson. (1969) 26(1) (1977) pp. 17-24. DOI: 10.1016/0022-2364(77)90230-x
[40] R. Kleinberg, M. Horsfield, Transverse relaxation processes in porous sedimentary rock, J. Magn. Reson. (1969) 88 (1990) pp. 9-19. DOI: 10.1016/0022-2364(90)90104-h
[41] S. De Santis, M. Rebuzzi, G. Di Pietro, F. Fasano, B. Maraviglia, S. Capuani, In vitro and in vivo MR evaluation of internal gradient to assess trabecular bone density, Phys. Med. Biol. 55 (2010) pp. 5767. DOI: 10.1088/0031-9155/55/19/010
[42] E. Toumelin, C. Torres-Verdín, B. Sun, K. J. Dunn, Random-walk technique for simulating NMR measurements and 2D NMR maps of porous media with relaxing and permeable boundaries, J. Magn. Reson. 188 (2007) pp. 83-96. DOI: 10.1016/j.jmr.2007.05.024
[43] M. Ronczka, M. Muller-Petke, Optimization of CPMG sequences to measure NMR transverse relaxation time T2 in borehole applications, Geosci. Instrum. Method. Data Syst. 1 (2012) pp. 197-208. DOI: 10.5194/gi-1-197-2012
[44] H. Y. Carr, E. M. Purcell, Effects of diffusion on free precession in nuclear magnetic resonance experiments, Phys. Rev. 94 (1954) pp. 630-638. DOI: 10.1103/physrev.94.630
[45] S. Meiboom, D. Gill, Modified spin-echo method for measuring nuclear relaxation times, Rev. Sci. Instrum. 29 (1958) pp. 688-691. DOI: 10.1063/1.1716296
[46] X. Li, Z. Zhao, Time domain-NMR studies of average pore size of wood cell walls during drying and moisture adsorption, Wood Sci. Technol. 54 (2020) pp. 1241-1251. DOI: 10.1007/s00226-020-01209-x
[47] P. R. J. Connolly, W. Yan, D. Zhang, M. Mahmoud, M. Verrall, M. Lebedev, S. Iglauer, P. J. Metaxas, E. F. May, M. L. Johns, Simulation and experimental measurements of internal magnetic field gradients and NMR transverse relaxation times (T2) in sandstone rocks, J. Petrol. Sci. Eng. 175 (2019) pp. 985-997. DOI: 10.1016/j.petrol.2019.01.036
[48] G. H. Sørland, K. Djurhuus, H. C. Widerøe, J. R. Lien, A.
Skauge, Absolute pore size distributions from NMR, Diffus. Fundam. 5 (2007) pp. 4.1-4.15.
[49] V. Di Donato, M. R. Ruello, V. Liuzza, V. Carsana, D. Giampaola, M. A. Di Vito, C. Morhange, A. Cinque, E. Russo Ermolli, Development and decline of the ancient harbor of Neapolis, Geoarchaeology 33 (2018) pp. 542-557. DOI: 10.1002/gea.21673
[50] D. Giampaola, V. Carsana, G. Boetto, F. Crema, C. Florio, D. Panza, M. Bartolini, C. Capretti, G. Galotta, G. Giachi, N. Macchioni, M. P. Nugari, M. Bartolini, La scoperta del porto di "Neapolis": dalla ricostruzione topografica allo scavo e al recupero dei relitti, in: Archaeologia Maritima Mediterranea: International Journal on Underwater Archaeology, Istituti Editoriali e Poligrafici Internazionali, Fabrizio Serra, Pisa-Roma, 2005, pp. 1000-1045 [in Italian]. Online [Accessed 14 March 2022] http://digital.casalini.it/10.1400/52974
[51] InsideWood, 2004-onwards. Online [Accessed 13 March 2022] https://insidewood.lib.ncsu.edu/
[52] Wood anatomy of central European species. Online [Accessed 13 March 2022] http://www.woodanatomy.ch
[53] V. Stagno, C. Genova, N. Zoratto, G. Favero, S. Capuani, Single-sided portable NMR investigation to assess and monitor cleaning action of PVA-borax hydrogel in travertine and Lecce stone, Molecules 26 (2021) pp. 3697. DOI: 10.3390/molecules26123697
[54] V. Stagno, F. Egizi, F. Corticelli, V. Morandi, F. Valle, G. Costantini, S. Longo, S. Capuani, Microstructural features assessment of different waterlogged wood species by NMR diffusion validated with complementary techniques, Magn. Reson. Imaging 83 (2021) pp. 139-151.
DOI: 10.1016/j.mri.2021.08.010
[55] L. Venkataramanan, Y.-Q. Song, M. D. Hürlimann, Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions, IEEE Trans. Signal Process. 50 (2002) pp. 1017-1026. DOI: 10.1109/78.995059
[56] P. M. Kekkonen, V.-V. Telkki, J. Jokisaari, Determining the highly anisotropic cell structures of Pinus sylvestris in three orthogonal directions by PGSTE NMR of absorbed water and methane, J. Phys. Chem. B 113 (2009) pp. 1080. DOI: 10.1021/jp807848d
[57] G. T. Tsoumis, Wood, Encyclopædia Britannica. Online [Accessed 14 March 2022] https://www.britannica.com/science/wood-plant-tissue
[58] S. Takahashi, T. Kim, T. Murakami, A. Okada, M. Hori, Y. Narumi, H.
nakamura, influence of paramagnetic contrast on single-shot mrcp image quality, abdom. imaging 25 (2000) pp. 511–513. doi: 10.1007/s002610000083 [59] a. yilmaz, m. yurdakoç, b. işik, influence of transition metal ions on nmr proton t1 relaxation times of serum, blood, and red cells, biol. trace elem. res. 67 (1999) pp. 187–193. doi: 10.1007/bf02784073 [60] s. capuani, v. stagno, m. missori, l. sadori, s. longo, highresolution multiparametric mri of contemporary and waterlogged archaeological wood, magn. reson. chem. 58 (2020) pp. 860-869. doi: 10.1002/mrc.5034 [61] m. urbańczyk, y. kharbanda, o. mankinen, v.-v. telkki, accelerating restricted diffusion nmr studies with timeresolvedand ultrafast methods, anal. chem. 92 (2020) pp. 99489955. doi: 10.1021/acs.analchem.0c01523 [62] v.-v. telkki, m. yliniemi, j. jokisaari, moisture in softwoods: fiber saturation point, hydroxyl site content and the amount of micropores determined from nmr relaxation time distributions. holzforschung 67 (2013) pp. 291–300. doi: 10.1515/hf-2012-0057 [63] p. m. kekkonen, a. ylisassi, v.-v. telkki, absorption of water in thermally modified pine wood as studied by nuclear magnetic resonance, j. phys. chem. c 118 (2014) pp. 2146–2153. doi: 10.1021/jp411199r [64] s. hiltunen, a. mankinen, m. a. javed, s. ahola, m. venäläinen, v.-v. telkki, characterization of the decay process of scots pine wood caused by coniophora puteana using nmr and mri, holzforschung 74 (2020) pp.1021–1032. doi: 10.1515/hf-2019-0246 [65] k. r. brownstein, c. e. tarr, importance of classical diffusion nmr studies of water in biological cells, phys. rev. a, 6 (1979) pp. 2446–2453. doi: 10.1103/physreva.19.2446 [66] i. sable, u. grinfelds, a. jansons, l. vikele, i. irbe, a. verovkins, a. treimanis, comparison of the properties of wood and pulp fibers from lodgepole pine (pinus contorta) and scots pine (pinus sylvestris), bioresources 7 (2012) pp. 1771-1783. 
doi: 10.15376/biores.7.2.1771-1783 https://doi.org/10.1109/78.995059 https://doi.org/10.1021/jp807848d https://www.britannica.com/science/wood-plant-tissue https://doi.org/10.1007/s002610000083 https://doi.org/10.1007/bf02784073 https://doi.org/10.1002/mrc.5034 https://doi.org/10.1021/acs.analchem.0c01523 https://doi.org/10.1515/hf-2012-0057 https://doi.org/10.1021/jp411199r https://doi.org/10.1515/hf-2019-0246 https://doi.org/10.1103/physreva.19.2446 https://doi.org/10.15376/biores.7.2.1771-1783 journal contacts acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 2 acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 1 journal contacts about the journal acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are mainly based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x. about imeko the international measurement confederation, imeko, is an international federation of actually 42 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry. addresses principal contact prof. francesco lamonaca university of calabria department of computer science, modelling, electronic and system science via p. bucci, 41c, vi floor, arcavacata di rende, 87036 (cs), italy e-mail: editorinchief.actaimeko@hunmeko.org support contact dr. 
dirk röske physikalisch-technische bundesanstalt (ptb) bundesallee 100, 38116 braunschweig, germany e-mail: dirk.roeske@ptb.de editor‐in‐chief francesco lamonaca, italy founding editor‐in‐chief paul p. l. regtien, netherlands associate editor dirk röske, germany copy editors egidio de benedetto, italy silvia sangiovanni, italy layout editors dirk röske, germany leonardo iannucci, italy domenico luca carnì, italy editorial board leopoldo angrisani, italy filippo attivissimo, italy eulalia balestieri, italy eric benoit, france paolo carbone, italy lorenzo ciani, italy catalin damian, romania pasquale daponte, italy luca de vito, italy sascha eichstaedt, germany ravi fernandez, germany luigi ferrigno, italy edoardo fiorucci, italy alistair forbes, united kingdom helena geirinhas ramos, portugal sabrina grassini, italy leonardo iannucci, italy fernando janeiro, portugal konrad jedrzejewski, poland andy knott, united kingdom yasuharu koike, japan dan kytyr, czechia francesco lamonaca, italy aime lay ekuakille, italy massimo lazzaroni, italy fabio leccese, italy rosario morello, italy michele norgia, italy franco pavese, italy pedro miguel pinto ramos, portugal nicola pompeo, italy sergio rapuano, italy renato reis machado, brazil álvaro ribeiro, portugal gustavo ripper, brazil dirk röske, germany maik rosenberger, germany alexandru salceanu, romania constantin sarmasanu, romania lorenzo scalise, italy emiliano schena, italy michela sega, italy enrico silva, italy pier giorgio spazzini, italy krzysztof stepien, poland ronald summers, uk marco tarabini, italy tatjana tomić, croatia joris van loco, belgium zsolt viharos, hungary bernhard zagar, austria davor zvizdic, croatia mailto:editorinchief.actaimeko@hunmeko.org mailto:dirk.roeske@ptb.de acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 4 section editors vol. 
Reviewers Vol. 11, 2022 (number of reviews)

A. Vimala Juliet (2), Alberto De Bonis (1), Alessandro Pozzebon (1), Álvaro Sánchez-Climent (1), Amina Vietti (2), Andrea D'Andrea (1), Andrea Rosati (1), Andrea Scorza (1), Andrej Babinec (1), Anna Piccirillo (1), Assunta Pelliccio (1), Ayesha Tarannum (4), Bruno Andò (1), Carmelo Scuro (3), Caterina Balletti (1), Chandra Bhushan Rao (3), Chiara Comegna (1), Daniele Fontanelli (4), Dario Ambrosini (1), Davide Colombi (1), Domenico Luca Carnì (10), Dominik Pražák (1), Efstathios Adamopoulos (2), Egidio De Benedetto (2), Elena Fitkov-Norris (1), Eleonora Balliana (1), Elia Quirós (1), Elisabeth Costa Monteiro (1), Emanuele Alcaras (1), Emanuele Zappa (1), Emilio Sardini (1), Emma Angelini (4), Evgeny Borovin (2), Fabio Leccese (3), Francesca Di Turo (1), Francesco Crenna (1), Francesco Demarco (1), Francesco Lamonaca (2), Francesco Picariello (3), Francesco Scardulla (1), Franco Simini (1), G V Subba Rao (2), G. Festa (1), Gabriele Rossi (1), Geert De Cubber (1), Giacomo Fiocco (2), Giorgia Ghiara (1), Giorgio Verdiani (1), Giovanni Muscato (1), Girika Jyoshna (6), Gustavo R. Alves (1), GVS Yaswanth (1), Ian Robinson (1), Ignacio Lira (1), Ioan Doroftei (2), Ioan Tudosa (1), Isabella Sannino (2), Jakub Svatos (1), Jalagam Mahesh (2), Jan Holub (1), Jordi Salazar (1), Jurek Sasiadek (1), Katarina Almeida-Warren (1), Kenta Arai (1), L Koteswara Rao (1), Lamia Berrah (1), Leila Es Sebar (1), Leonardo Iannucci (2), Lidia Álvarez-Morales (1), Lorenzo Ciani (1), Lorenzo Scalise (1), Luca Di Angelo (1), Luca Mari (1), Luca Zoia (1), Luisa Vigorelli (1), Maik Rosenberger (2), Malakonda Reddy (1), Marco Giorgio Bevilacqua (2), Maria Grazia D'Urso (1), Mariapia Casaletto (1), Mariateresa Lettieri (2), Mauro Campagna (1), Moru Leela (1), Nalluri Siddiah (2), Panayota Vassiliou (1), Paolo Belardi (1), Pasquale Daponte (1), Pawel Mazurek (1), Pedro Manuel Brito Silva Girao (1), Piervincenzo Rizzo (1), Raffaele Persico (1), Raffaella De Marco (4), Roberta Spallone (1), Roberto Scopigno (1), Roberto Senesi (1), Ronald Summers (1), Rosario Lo Schiano Lo Moriello (1), Rugkanawan Wongpithayadisai (1), S Rooban (1), Sala Surekha (4), Sang-Youn Kim (1), Sara Girón (1), Saravanan Velusamy (2), Shafi Mirza (5), Silvestro A. Ruffolo (1), Srinivas Reddy Putluri (1), Stefania Zinno (1), Stefano Brusaporci (2), Stefano Gialanella (1), Stephan Schlamminger (1), Subha Sree (1), Tilde De Caro (1), Valeria Di Cola (1), Valerio Baiocchi (1), Ville-Veikko Telkki (1), Vincenzo Palleschi (1), Vittoria Guglielmi (1), Walter Bich (1), Yanhe Zhu (2), Yasmin Fathima (1), Yasuharu Koike (2), Yvan Baudoin (1), Zhipeng Liang (1)

Multi-Analytical Approach for the Study of an Ancient Egyptian Wooden Statuette from the Collection of Museo Egizio of Torino

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, pp. 1-10

Luisa Vigorelli 1,2,3, Alessandro Re 2,3, Laura Guidorzi 2,3, Tiziana Cavaleri 4,5, Paola Buscaglia 4, Marco Nervo 3,4, Paolo Del Vesco 6, Matilde Borla 7, Sabrina Grassini 8, Alessandro Lo Giudice 2,3

1 Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy
2 Dipartimento di Fisica, Università degli Studi di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
3 INFN, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
4 Centro Conservazione e Restauro "La Venaria Reale", Piazza della Repubblica, 10078 Venaria Reale, Torino, Italy
5 Dipartimento di Economia, Ingegneria, Società e Impresa, Università degli Studi della Tuscia, Via Santa Maria in Gradi 4, 01100 Viterbo, Italy
6 Fondazione Museo delle Antichità Egizie di Torino, Via Accademia delle Scienze 6, 10123 Torino, Italy
7 Soprintendenza ABAP-TO, Torino, Piazza San Giovanni 2, 10122 Torino, Italy
8 Dipartimento di Scienza dei Materiali e Ingegneria Chimica, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy

Section: RESEARCH PAPER

Keywords: Egypt; statuette; multi-analytical; Museo Egizio; tomography

Citation: Luisa Vigorelli, Alessandro Re, Laura Guidorzi, Tiziana Cavaleri, Paola Buscaglia, Marco Nervo, Paolo Del Vesco, Matilde Borla, Sabrina Grassini, Alessandro Lo Giudice, Multi-analytical approach for the study of an ancient Egyptian wooden statuette from the collection of Museo Egizio of Torino, Acta IMEKO, vol. 11, no. 1, article 15, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-15

Section Editor: Fabio Santaniello, University of Trento, Italy

Received March 7, 2021; in final form March 14, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: NEXTO project (Progetto di Ateneo 2017) funded by Compagnia di San Paolo; NEU_ART project funded by Regione Piemonte

Corresponding author: Alessandro Re, e-mail: alessandro.re@unito.it

Abstract

In the field of cultural heritage, an interdisciplinary, multi-technique approach to the study of ancient artefacts is widely used, as it provides more reliable and complementary results. For objects of such great value, a non-invasive approach is always preferred, although micro-invasive techniques may be necessary to answer specific questions. In this work, a two-step approach combining non-invasive and micro-invasive techniques was applied to characterise the materials and their layering, as well as to gain a deeper understanding of the artistic techniques and the conservation history. The object under study is an ancient Egyptian wooden statuette belonging to the collections of the Museo Egizio of Torino. Analyses were performed at the Centro Conservazione e Restauro "La Venaria Reale" (CCR), starting with non-invasive multispectral and X-ray imaging of the whole object, which yielded information on the assembly technique and on some aspects of the constituent materials, followed by non-invasive XRF analysis and FT-IR, SEM-EDX and optical microscopy on micro-samples. This work is intended to lay the groundwork for the study of other wooden objects and statuettes belonging to the same funerary equipment, through the definition of a measuring protocol addressing the most significant aspects of the artistic technique.

1. Introduction

Scientific research in the cultural heritage field involves, in most cases, the development and use of physical and chemical methods to answer specific questions for a better understanding of objects produced in different historical contexts. Through such analyses it may be possible to reveal and identify the materials and technologies used in the past, and also to provide more solid parameters for the preservation and conservation of cultural heritage artefacts [1]. The use of non-invasive methods (with no sampling) is very well suited to this kind of study, allowing the analysis of different types of objects while fully respecting their integrity. Some of these non-invasive techniques cover a large part of the electromagnetic spectrum, ranging from gamma and X-ray radiation through the ultraviolet, visible and infrared regions [2]-[10]. Furthermore, for chemical and compositional analysis, micro-invasive techniques are often used [11], [12]. Together, these complementary methods provide valuable information on the elemental composition as well as on the state of preservation, down to the artistic processes performed by the artists [13], [14]. In this paper, a multi-technique approach was used to carry out an in-depth study of an ancient Egyptian wooden statuette belonging to the collection of the Museo Egizio in Torino.
All measurements were performed at the Centro Conservazione e Restauro (CCR) "La Venaria Reale", where several scientific laboratories for the study and characterisation of the different materials of artworks and ancient artefacts are available, before carrying out the required conservation treatments. The approach is based on a combination of imaging techniques, in particular ultraviolet fluorescence (UVF), visible-induced infrared luminescence (VIL) and infrared reflectography (IR), which were employed to map the distribution of the different materials, such as the pigments used in the polychromies, and their thicknesses and layering, as in previous studies [15]-[17]. Among the imaging analyses, a radiographic (RX) and tomographic (CT) study was also performed in order to investigate the inner part of the object and to reach a deeper understanding of the execution technique and state of preservation [18], [19]. The next sections also present the results of compositional and elemental analyses acquired both directly on the surface, with the non-invasive X-ray fluorescence (XRF) technique, and on samples, with optical microscopy, Fourier transform infrared spectroscopy (FT-IR) and scanning electron microscopy (SEM-EDX), all widely used in other studies [20]-[24]. All the techniques listed above proved to be equally important in this study, each one providing different and complementary information essential to gain a thorough knowledge of the artefact and, finally, to implement the best conservation strategy. This work contributes to the creation of a measuring protocol, applicable to other wooden objects and statuettes belonging to the same funerary assemblage, in order to significantly increase our understanding of the entire group of finds retrieved from a specific archaeological context.

2. The statuette

The painted wooden statuette, representing an offering bearer (inv. no. S.8795, Figure 1), was found during the 1908 excavation season of the Italian Archaeological Mission, directed by Ernesto Schiaparelli, in the necropolis of Asyut (Egypt), a site situated some 375 km south of Cairo. The statuette was part of the rich funerary assemblage of the so-called "tomb of Minhotep", which included additional statuettes of offering bearers, larger wooden statues, a model of a bakery, boat models, as well as coffins, wooden sticks, a bow with arrows and numerous earthenware jars and bowls [25], [26]. Most of the equipment derived from specialised workshops operating in Asyut during the early Middle Kingdom (ca. 1980-1900 BCE). According to ancient Egyptian religious beliefs, the tomb had to maintain the memory of the deceased, to preserve his/her body and to grant his/her survival in the afterlife thanks to specific rituals and, above all, food offerings. Funerary offerings could be real, simply listed on stelae and coffins, or even scale models of servants carrying or processing food. At the beginning of the Middle Kingdom these scale models, together with models of granaries, boats or artisanal activities, became the main element of tomb assemblages and were placed within the burial chamber, usually near the coffin. The statuette examined in the present study is the typical representation of a female "offering bearer", carrying a basket on her head, generally held in position with the left hand, and a duck in the other hand. Although the two arms of the statuette are now missing, an old photograph preserved in the Alinari archive shows the object complete with these two parts, among the other finds from the same tomb, as it was displayed in the Museo Egizio in the early 20th century [27]. The statuette is structurally composed of three elements (the basket, the human figure and the base) and measures 60.0 cm (H) × 12.5 cm (D) × 25.5 cm (W).

Figure 1. The "offering bearer" statuette (S.08795), frontal (left) and lateral (right) views after the conservation treatment.

3. Materials and methods

In the field of cultural heritage, diagnostic protocols usually give priority to non-invasive and imaging analyses because they provide an overview of the main characteristics of the object, highlighting material differences; consequently, they are essential for selecting the most representative subsequent analyses and sampling points. With this approach, the taking of micro-samples from the artefact is the last step of the diagnostic campaign, necessary to answer specific questions that arise during the early stages of the investigation. In this specific case study, non-invasive investigations (imaging and chemical analyses) were carried out first; their results then indicated the need for very small samples (~µm) for more in-depth measurements, such as microscopic investigation and FT-IR spectroscopy, in order to obtain useful information on the state of preservation and previous restorations.

3.1. Techniques and instrumentation

UV-induced visible fluorescence (UVF)
The statuette was irradiated with UV Labino® spot lamps (UV light MPXL and UV floodlight), with an emission peak at 365 nm. The fluorescence produced in the visible region was captured with a Nikon D810 full-frame reflex camera equipped with a PECA 916 filter. Post-production of the photographs in Adobe Lightroom provided the chromatic balance, by means of a 99 % Spectralon® target and a Minolta ceramic reference. This technique allows the evaluation of characteristics such as the homogeneity and distribution of surface layers, based on the colour and intensity of the visible fluorescence induced by UV radiation.
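The chromatic balance against a white reference such as the 99 % Spectralon® target amounts, in its simplest form, to a per-channel gain correction. The snippet below is only a minimal illustration of the principle (not the actual Lightroom workflow), and the patch readings are assumed values:

```python
import numpy as np

def white_balance(img, patch_mean, target=0.99):
    """Scale each channel so the white-reference patch maps to its
    nominal reflectance (99 % for Spectralon).

    img: float array (H, W, 3) with values in [0, 1]
    patch_mean: per-channel mean measured on the reference patch
    """
    gains = target / np.asarray(patch_mean, dtype=float)
    return np.clip(img * gains, 0.0, 1.0)

# Hypothetical per-channel reading of the Spectralon patch under the lamps
patch = [0.90, 0.85, 0.80]
img = np.full((4, 4, 3), 0.5)       # dummy image
balanced = white_balance(img, patch)
```

After correction, the reference patch itself maps back to its nominal 0.99 reflectance in every channel, which is the property the chromatic balance relies on.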
Infrared reflectography at 950 nm (IR1)
The investigation was carried out after the first cleaning phase, which removed superficial dust, in order to reduce its interference and facilitate the study of the artistic technique. The measurements were made with Ianiro Varibeam 800 W halogen lamps. The images were acquired in the photographic infrared range (from 750 nm to 950 nm) with a Nikon D810 IRUV full-frame reflex camera with a B+W 093 filter. Post-production of the images in Adobe Lightroom provided colour and exposure correction. This technique reveals the presence of preparatory traces or changes under the painted film.

Visible-induced infrared luminescence (VIL)
The VIL technique allows the localisation of details made with the Egyptian blue pigment, even in a poor state of conservation, thanks to its intense fluorescence emission at around 916 nm when illuminated with visible light [28]. The lighting was provided by LED lamps without IR emission, and the images were acquired in the photographic infrared range (from 750 nm to 950 nm) with an XNite Nikon D810 modified full-frame reflex camera (UV + VIS + IR functionality) equipped with a Hoya R72 filter. Post-production in Adobe Lightroom provided the chromatic balance using a ColorChecker® Classic of 24 colours and a reference standard of Egyptian blue (Kremer Pigmente no. 10060) inserted in the shooting field.

Radiography (RX) and tomography (CT)
Radiographic and tomographic analyses are useful for studying, for example, the techniques used in assembling the structure (e.g. joining elements) or for detecting evidence of previous interventions. In particular, CT allows the observation of the inner sections of the object, the orientation in space of its constituent parts, and the sequence and thickness of the layers that compose it.
In this particular case, a first radiographic measurement was performed using a fixed X-ray imaging set-up, developed in the context of the NEU_ART project [29], [30] and already used on very different kinds of artworks [31], [32] and archaeological finds [19], [33]. It consists of a General Electric Eresco 42MF4 X-ray source, a rotating platform and a Hamamatsu C9750-20TCN linear X-ray detector with a 200 μm pixel size, which scans at about 0.2-6 m/min over an area of about 2 × 2 m². Since this set-up is optimised for the CT of large artefacts, a second RX followed by a CT analysis was performed using a Teledyne DALSA Shad-o-Box 6K HS flat-panel detector (FP), which, with an area of only about 160 cm² and a pixel size of 49.5 µm, is more suitable for small objects, or for parts of large objects where a higher resolution is needed. For both measurements, a voltage of 80 kV and a current of 10 mA were set as acquisition parameters. For the radiography, the images were processed with the open-source software ImageJ, whereas the CT sections were reconstructed with a filtered back-projection algorithm [34] by means of a non-commercial software utility developed by Dan Schneberk of the Lawrence Livermore National Laboratory (USA); the 3D rendering and segmentation were processed using VGStudio MAX 2.2 from Volume Graphics.

X-ray fluorescence (XRF)
The measurements were performed on representative points selected on the basis of the responses of the imaging techniques. The technique identifies the chemical elements present in the analysed area (spot diameter between 0.65 mm and 1.50 mm) down to a depth that varies with the chemical nature of the materials present (approximately 150 μm). The collected data are used to formulate hypotheses on the inorganic materials (mineral pigments) of the pictorial palette.
For the analysis, a portable micro-EDXRF Bruker ARTAX 200 spectrometer was used, with a fine-focus X-ray source with a molybdenum anode and an ADC with 4096 channels; the anode voltage is adjustable between 0 kV and 50 kV, and the anode current between 0 µA and 1500 μA (maximum power 50 W). The measurements were carried out at a voltage of 30 kV and a current of 1300 µA, fluxing helium over the measurement area in order to optimise the detection limit of the instrument.

Fourier transform infrared spectroscopy (FT-IR)
The analyses were carried out to characterise original organic substances, or possibly localised intervention materials previously detected through UV fluorescence imaging. Infrared spectrophotometry allows the identification of the organic and inorganic components present in the sample. The measurements were conducted on selected micro-samples with a Bruker Vertex 70 FT-IR spectrophotometer coupled with a Bruker Hyperion 3000 infrared microscope working in transmission with the aid of a diamond cell.

Optical microscopy and scanning electron microscopy with EDX (OM and SEM-EDX)
The stratigraphic samples were collected after the non-invasive analytical campaign, in order to study the materials in depth. Observation with OM in stratigraphy provides information on the execution technique adopted for the preparations, the polychromies and the finishes, while SEM-EDX analysis provides compositional information on the elements present in the different layers. After sampling, the fragments identified for stratigraphic analysis were embedded in transparent resin (Struers EpoFix epoxy resin). The samples, prepared as polished sections, were observed with an Olympus BX51 mineropetrographic microscope, in visible and UV light, interfaced to a PC through a digital camera. Image acquisition and processing were carried out using the proprietary software analySIS FIVE.
For the SEM analysis, the samples were observed with a Zeiss EVO 60 electron microscope for morphological investigations (mainly by means of a backscattered electron detector, BSD). The Bruker EDX microprobe allows semi-quantitative elemental analysis. The analyses were performed in high vacuum, for which the sections were carbon-coated.

Figure 2. Frontal and lateral pictures of the statuette under different illumination for the multispectral analysis: (a) and (e) visible light, (b) and (f) UV fluorescence, (c) IR reflectography, (d) and (g) radiography. In picture (a) the measurement points for XRF analysis are marked with numbers 1-8.

4. Results and discussion

4.1. Assembly and modelling techniques

Close observation and the analyses carried out made it possible to describe the assembly technique of the two portions (the basket and the body), which were originally joined by inserting wooden dowels with a circular section, free of glue or filler material. Moreover, a non-uniformity of the surface and the presence of gaps distributed over the entire body could be immediately noticed. In correspondence with some of these, such as on the chest or the hips, it was possible to observe a double layer of preparation. The first, light brown and coarser, is spread directly on the wood; on top of it, a second, thinner, white layer was perceivable (Figure 2a, Figure 3a). In some areas of the sculpture, the thin white layer seems to be applied directly on the wood, as observed for example in some gaps in the garment (Figure 2e, Figure 3b). The pictorial decoration was realised on this white preparation. For a better understanding of the construction technique and of the contribution of the preparatory layers in modelling the shape, radiographic and tomographic analyses were carried out [35].
Data were acquired on the whole statuette for radiography, while for CT the acquired portion was limited to the region from the basket down to the hips of the statuette. Thanks to the X-ray imaging, it was possible to visualise details and features of the object: the tomographic data in particular provided important information not only on the assembly, but also on previous structural interventions (discussed in Section 4.3). At first glance, it was possible to notice areas of the radiographic images with different radiopacity over the entire volume of the body (Figure 2d). This confirms the non-homogeneity of the distribution of the preparatory layer, which is more radiopaque than the wooden support. Thanks to the CT slices it was also possible to localise the double layer of preparation mentioned above. The capability of detecting the material of the ground layer allowed us to confirm that it contributed to partially smoothing the shape (e.g. head, breasts and hips), to correcting gaps in volume due to possible defects of the wood, and to refining some imprecisions in the carving. Where the carving was sufficiently refined, a single preparation layer was laid (Figure 4). As regards the assembly technique, the junction between the basket and the head was realised by means of wooden dowel insertions, as the radiographic and tomographic images have shown (Figure 4). The same anchoring system is evident in correspondence with the missing arms, as shown in Figure 4e. The tomographic sections of the head (Figure 4c) reveal multiple portions assembled by peculiarly oversized wooden dowels, employed in order to achieve the final volume; CT also proved useful in understanding the direction of insertion of each dowel.
Moreover, in correspondence with the right breast, the same type of assembly technique can be seen: observation of the trend of the inner growth rings allows this insertion to be recognised as a remediation for a material detachment that probably occurred in the notching phase (Figure 4d). To investigate the nature of the materials used for the preparation layers (and pigments, as detailed in Section 4.2), non-invasive XRF analyses were performed on some representative points (Figure 2a and Table 1), and two stratigraphic samples, one from the white garment (Figure 5) and one from the black wig (Figure 6), were analysed by OM and SEM-EDX. The thin white preparation layer proved to be made of a calcium carbonate-based material, with a small fraction of quartz.

Figure 3. Detail of the chest (a) and the dress (b) of the statuette: (a) the preparation layer on which the painted layer is applied; (b) the painted layer applied directly on the wood.

Figure 4. Radiographic image and CT vertical and horizontal slices of the statuette: (a) basket; (b) and (c) head; (d), (e) and (f) body (green arrows: wooden dowels for assemblage; blue arrows: wooden joints; pink arrows: thicker preparation layer).

4.2. Pigments and finishing layers

For a better understanding of the pictorial materials, the imaging analyses were followed by non-invasive XRF and by analyses of two cross-sections, as previously explained. The pale yellow colour used for the skin of the figure proved to be made of yellow ochre and/or earth, probably mixed with gypsum and/or calcium carbonate (see Figure 2a and Table 1). The warm white colour of the garment proved to be made of gypsum with minimal impurities of iron oxides, as detected in the cross-section (Figure 5).
considering the rarely documented use of gypsum as a pigment, this data appears very interesting and could be one of the features to be investigated also on the other finds of the funerary equipment. the presence of p could be referred to possible organogenic material in the rock used to produce the pigment (figure 5e and table 1). a black pigment results to be used in profiling the dress, the eyes and the eyebrows, and to colour the wig. the ir reflectography (figure 2c) suggests it is probably a carbon-based material, based on its strong absorption. om and sem-edx analyses of the cross-section from the wig suggest the carbonaceous nature of this black: the black layer is quite thick, nevertheless only very low signals of inorganic elements have been detected in the layer, suggesting it is composed of organic carbon black (figure 6). as regards the wig, the preliminary hypothesis that there could be some egyptian blue pigment mixed with black was ruled out thanks to the vil survey which did not detect any luminescence of the pigment on the statuette. the preliminary close observation of the statuette and the uvf analysis allowed excluding the presence of a finish layer distributed on the surface, rather confirming a strong material non-homogeneity due to the state of preservation and previous interventions (figure 2b,f), as already observed also by the x-ray imaging analysis. a selective sampling from one of the areas with the greatest surface yellowing, that showed yellow-orange uv fluorescence, was taken for further materials investigations with ft-ir analysis (see section 4.3). however, no information about the type of binder used for the preparatory and pictorial layer could be obtained from the analysis, even if, with reference to the technical literature, the most probable hypothesis is the use of a vegetable rubber-based binder [36]. table 1. xrf analysis results (+++ = main chemical elements; ++ = secondary elements; + = trace elements; = not detected). 
Measurement point / element: Mg Al Si P S Cl K Ca Ti Mn Fe Sr
1_face yellow: + + + + ++ + + ++ + + +++ +
2_body yellow: + + + ++ + + +++ + + +++ +
3_body yellowish white: + + + + ++ + + +++ + + +
4_body clean white: + + + ++ + + +++ + + + +
5_leg gap white: + + + + + + + +++ + + + +
6_eyebrow black: + + + + ++ + + +++ + + +++ +
7_basket dark yellow: + + + + + + +++ + + +++ +
8_basket red: + + + + + + +++ + + ++ +

Figure 5. (a) Stratigraphic sample of the white preparation, taken from the body of the statuette; (b) cross-section under OM in visible light (1, preparation layer; 2, warm white pictorial layer); (c) SEM-BSD image of the cross-section; (d, e) SEM-EDX spectra of the preparation and pigment layers, respectively.

4.3. Conservation history and previous interventions

As regards the state of preservation of the find, it showed a general greying of tones and some gaps in the preparation and paint layers distributed over the entire surface, as documented by photographs in visible light and by X-ray imaging. After close observation and a careful reading of the imaging analyses, in particular of the UV fluorescence, FT-IR analyses were performed on micro-samples taken from (i) an area that showed orange-yellow UV fluorescence and (ii) the face of the bearer, where a glossy, film-forming surface material seemed to be present. From the resulting spectra, an acrylic resin identified as Paraloid was found in both analysed samples (Figure 7). The adhesive Paraloid was found to be distributed over the entire surface, and this finding is consistent with the general greying of the surface, given the typical tendency of Paraloid to absorb dust and atmospheric particulate over time. Figure 7b also shows signals attributable to the presence of calcium carbonate and silicates in the pictorial layer, in accordance with the other analyses performed.
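The semi-quantitative XRF results of Table 1 lend themselves to a simple machine-readable encoding, which makes screening across measurement points reproducible. The sketch below is illustrative and not part of the original study: the dictionary layout and function names are invented, the element levels are transcribed from two unambiguous rows of Table 1, and the "iron dominant" heuristic merely restates the interpretation given in the text (Fe as a main element in ochre/earth-based colours, alongside a Ca-rich ground).

```python
# Illustrative encoding of Table 1 (semi-quantitative XRF levels).
# '+++' = main element, '++' = secondary, '+' = trace, None = not detected.
LEVEL = {None: 0, '+': 1, '++': 2, '+++': 3}
ELEMENTS = ['Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'K', 'Ca', 'Ti', 'Mn', 'Fe', 'Sr']

# Two rows transcribed from Table 1 (hypothetical data structure).
xrf = {
    '1_face yellow':
        dict(zip(ELEMENTS, ['+', '+', '+', '+', '++', '+', '+', '++', '+', '+', '+++', '+'])),
    '6_eyebrow black':
        dict(zip(ELEMENTS, ['+', '+', '+', '+', '++', '+', '+', '+++', '+', '+', '+++', '+'])),
}

def main_elements(point):
    """Elements reported as '+++' (main) at a measurement point."""
    return sorted(el for el, lv in xrf[point].items() if LEVEL[lv] == 3)

def iron_dominant(point):
    """True when Fe is a main element, consistent with ochre/earth pigments."""
    return 'Fe' in main_elements(point)
```

A query such as `main_elements('1_face yellow')` then reproduces, in code, the reading done by eye on the table.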
Furthermore, the signal at 1540 cm−1 is attributable to the presence of a protein-based substance, probably due to an earlier securing intervention carried out with animal glues, as traditional conservation practice usually envisaged. In addition, the base of the statuette showed some peculiar characteristics: from a first general observation, it was hypothesized that it could be traced back to the reuse of a wood fragment, perhaps taken from a coffin. This hypothesis is reasonably supported by the evidence of a presumably original joint at the lower portion of the base, attributable to the portion of the feet but no longer in place, which suggested reuse. Further insights into this specific characteristic should be sought, in consideration of a similar element observed in another find of the equipment.

Figure 6. (a) Stratigraphic sample from the black wig on the back of the statuette; (b) cross-section under OM in visible light (1, white preparation layer; 2, warm white pictorial layer; 3, black pictorial layer); (c) SEM-BSD image of the cross-section; (d, e, f) SEM-EDX spectra of the white, warm white and black layers, respectively.

Figure 7. FT-IR spectra (black curves) of the two samples taken from the body (orange-yellow UV fluorescence area) (a) and from the face area (greying of tones) (b). The substance was identified as Paraloid (the red curve is the Paraloid standard spectrum, for comparison).

As regards the figure-base anchoring, the complete absence of the feet (generally made of wood, sometimes partially completed with details in modelling material, often equipped with a tenon for the base joint) was immediately noted, and a wooden insert to which the legs were anchored, attributable to a previous intervention, was present (Figure 8a).
In fact, a complete discrepancy between the main block and this wooden insert was observed (in colour, compactness and grain of the wood); the insert was grouted on the perimeter and hidden on the surface by a pictorial retouching attributable to a relatively recent previous intervention. Thanks to the radiographic analysis, this portion of the base was clearly distinguished, and it could be observed that the insert reaches about half the total thickness of the base. Furthermore, the X-ray images showed how both the legs and the wooden insert were applied and fixed: the X-rays revealed two holes created to accommodate the end portions of the two legs. The legs and the large wooden insert at the base were fixed by applying a filler material (similar to a mortar) that is more radiopaque than the wood (Figure 9). A fracture of the wood at the back of the right leg of the bearer, at calf level, corresponded to a tongue-and-rebate joint for the completion of the leg anatomy, which can be clearly observed in the radiographic images of the lower part (Figure 8b,c). In the literature, many sculptures represented in a striding position present an assembly of two separate elements for the back leg, a practical expedient for carving the internal parts smoothly. Taking into account this aspect, the fact that this kind of joint is documented from very ancient chronologies, and the fact that the characteristics of the wood are rather similar to those of the central body (colour, compactness), the originality of this part could not be excluded in a first phase, despite some anomalies. In fact, the presence of an intervention mortar at the junction between the leg and the body (attributable to a bonding) and the chromatic and morphological differences between the wooden material of the base insert and that of the leg itself are not sufficient to establish whether the leg portion is original or the result of a subsequent intervention.
Regarding this issue, a comparison with a photograph of a wooden sculpture (XVIII Dynasty, collections of the Museo Egizio, Turin), taken before its restoration by the Doneux laboratory in the late 1980s, is very interesting: in that case too, a wooden insert is observed for the lower portion of one of the two legs, certainly a non-original operation in that instance, but conceptually comparable to the one discussed here [37]. Further insights should be sought to ascertain this aspect. As regards the basket-head anchoring, evidence of a previous intervention was identified. On the basket, in fact, a significant presence of an adhesive material was observed; no specific analysis was conducted for its characterization, but, given its mechanical and optical characteristics, in addition to its specific reactivity in contact with polar solvents, the adhesive probably has a synthetic origin (presumably of a vinyl nature, Figure 10). Finally, both in the radiographic analysis and at the time of disassembly, a metal element of reduced diameter and size, inserted to fix the two portions, was identified (Figure 11). In consideration of its shape, it can be supposed to pertain to a modern structural intervention.

Figure 8. Detail of the statuette's legs: (a) the wooden insert for the anchorage of the legs; (b) and (c) the radiographic images, frontal and lateral respectively, in which the joint is clearly visible.

Figure 9. Radiographic image of the base of the statuette.

Figure 10. Details of the adhesive residues (yellow arrows) detected at the basket-head interface.

5. Conclusion

The present work reports the results of a two-step scientific approach useful for characterizing the materials and the stratigraphy of ancient objects.
The combination of imaging and punctual techniques, along with visual inspection of the artwork, can give indications of the materials used and of the techniques of assembly, repairs, overpaintings, finishes and treatments that have occurred in antiquity and through the centuries. In the specific case under study, the close observation and the tomographic analysis carried out made it possible to describe the assembly technique: the use of several portions assembled with wooden dowels, and of a preparation material based on calcium carbonate, to achieve the final volume was observed. Thanks to the chemical investigations, also conducted in light of the multispectral imaging results, it was possible to define the nature of the different materials used in the manufacture of the statuette. In addition to VIL, XRF analysis in combination with the OM and SEM-EDX methodology made it possible to identify the pigments used for the decoration. Taking into account the identification of synthetic materials and the FT-IR analysis results, it was also possible to distinguish modern interventions, probably dating from the second half of the twentieth century. Additionally, more ancient interventions, such as the insertion of wooden elements to complete the figure, seem to be present. All the analyses performed and the consequent evaluations contributed to the definition of the best conservation process for the statuette. In the future, it will be possible to apply the same investigation strategy to other wooden artefacts and statuettes belonging to the same context, in order to make comparisons among the objects. Analogies and differences in terms of materials, manufacturing techniques and state of preservation will also support the Egyptological study of specific technical features, aiming at the possible reconstruction of the different workshops active in Asyut in the early second millennium BCE.
Acknowledgements

The NEXTO project (Progetto di Ateneo 2017) funded by Compagnia di San Paolo, the NEU_ART project funded by Regione Piemonte, and the INFN CHNet network are warmly acknowledged. We would also like to thank Dr. Anna Piccirillo and Dr. Daniele Demonte from the Centro Conservazione e Restauro "La Venaria Reale" for performing the FT-IR spectroscopy analyses and the multiband imaging, respectively.

References

[1] M. A. Rizzutto, J. F. Curado, S. Bernardes, P. H. O. V. Campos, E. A. M. Kajiya, T. F. Silva, C. L. Rodrigues, M. Moro, M. Tabacniks, N. Added, Analytical techniques applied to study cultural heritage objects, INAC 2015, São Paulo, SP, Brazil, October 4-9 (2015). ISBN 978-85-99141-06-9
[2] E. Peccenini, F. Albertin, M. Bettuzzi, R. Brancaccio, F. Casali, M. P. Morigi, F. Petrucci, Advanced imaging systems for diagnostic investigations applied to cultural heritage, J. Phys.: Conf. Ser. 566 (2014) 012022. doi: 10.1088/1742-6596/566/1/012022
[3] S. Bruni, V. Guglielmi, E. Della Foglia, M. Castoldi, G. Bagnasco Gianni, A non-destructive spectroscopic study of the decoration of archaeological pottery: from matt-painted bichrome ceramic sherds (southern Italy, VIII-VII B.C.) to an intact Etruscan cinerary urn, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 191 (2018), pp. 88-97. doi: 10.1016/j.saa.2017.10.010
[4] M. Hain, J. Bartl, V. Jacko, Multispectral analysis of cultural heritage artefacts, Measurement Science Review, Volume 3, Section 3 (2003).
[5] T. Cavaleri, P. Croveri, A. Giovagnoli, Spectrophotometric analysis for pigment palette identification: the case of "Profeta stante", 10th International Conference on Non-Destructive Investigations and Microanalysis for the Diagnostics and Conservation of Cultural and Environmental Heritage, AIPnD, 2011.
[6] M. P. Morigi, F. Casali, M. Bettuzzi, R. Brancaccio, V. D'Errico, Application of X-ray computed tomography to cultural heritage diagnostics, Appl. Phys. A 100 (2010), pp. 653-661.
doi: 10.1007/s00339-010-5648-6
[7] G. Fiocco, T. Rovetta, M. Malagodi, M. Licchelli, M. Gulmini, G. Lanzafame, F. Zanini, A. Lo Giudice, A. Re, Synchrotron radiation micro-computed tomography for the investigation of finishing treatments in historical bowed string instruments: issues and perspectives, Eur. Phys. J. Plus 133 (2018). doi: 10.1140/epjp/i2018-12366-5
[8] E. Di Francia, S. Grassini, G. E. Gigante, S. Ridolfi, S. A. Barcellos Lins, Characterisation of corrosion products on copper-based artefacts: potential of MA-XRF measurements, Acta IMEKO, Volume 10 (1) (2021), pp. 136-141. doi: 10.21014/acta_imeko.v10i1.859
[9] A. Lo Giudice, A. Re, D. Angelici, J. Corsi, G. Gariani, M. Zangirolami, E. Ziraldo, Ion microbeam analysis in cultural heritage: application to lapis lazuli and ancient coins, Acta IMEKO, Volume 6 (3) (2017), pp. 76-81. doi: 10.21014/acta_imeko.v6i3.465
[10] M. C. Leuzzi, M. Crippa, G. A. Costa, Application of non-destructive techniques. The Madonna del Latte case study, Acta IMEKO, Volume 7 (3) (2018), pp. 52-56. doi: 10.21014/acta_imeko.v7i3.587
[11] M. Schreiner, M. Melcher, K. Uhlir, Scanning electron microscopy and energy dispersive analysis: applications in the field of cultural heritage, Anal. Bioanal. Chem. 387 (2007), pp. 737-747. doi: 10.1007/s00216-006-0718-5
[12] C. Invernizzi, G. V. Fichera, M. Licchelli, M. Malagodi, A non-invasive stratigraphic study by reflection FT-IR spectroscopy and UV-induced fluorescence technique: the case of historical violins, Microchemical Journal 138 (2018), pp. 273-281. doi: 10.1016/j.microc.2018.01.021
[13] A. Mangone, G. E. De Benedetto, D. Fico, L. C. Giannossa, R. Laviano, L. Sabbatini, I. D. van der Werf, A. Traini, A multianalytical study of archaeological faience from the Vesuvian area as a valid tool to investigate provenance and technological features, New J. Chem. 35 (2011), pp. 2860-2868. doi: 10.1039/c1nj20626e
[14] G. Barone, S. Ioppolo, D. Majolino, P. Migliardo, G.
Tigano, A multidisciplinary investigation on archaeological excavation in Messina (Sicily). Part I: a comparison of pottery findings in "the Strait of Messina area", Journal of Cultural Heritage 3 (2002), pp. 145-153. doi: 10.1016/s1296-2074(02)01170-6

Figure 11. Metallic element: (a) detail of the RX image that allows its localization and (b) the metallic pin after removal during the intervention.

[15] S. Bracci, O. Caruso, M. Galeotti, R. Iannaccone, D. Magrini, D. Picchi, D. Pinna, S. Porcinai, Multidisciplinary approach for the study of an Egyptian coffin (late 22nd/early 25th dynasty): combining imaging and spectroscopic techniques, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 145 (2015), pp. 511-522. doi: 10.1016/j.saa.2015.02.052
[16] A. Abdrabou, M. Abdallah, I. A. Shaheen, H. M. Kamal, Investigation of an ancient Egyptian polychrome wooden statuette by imaging and spectroscopy, International Journal of Conservation Science, Volume 9 (1) (2018), pp. 39-54.
[17] T. Cavaleri, P. Buscaglia, C. Caliri, E. Ferraris, M. Nervo, F. P. Romano, Below the surface of the coffin lid of Neskhonsuennekhy in the Museo Egizio collection, X-Ray Spectrom. (2020), pp. 1-14. doi: 10.1002/xrs.3184
[18] A. Otte, T. Thieme, A. Beck, Computed tomography alone reveals the secrets of ancient mummies in medical archaeology, Hellenic Journal of Nuclear Medicine 16 (2) (2013), pp. 148-149.
[19] A. Re, A. Lo Giudice, M.
Nervo, P. Buscaglia, P. Luciani, M. Borla, C. Greco, The importance of tomography in studying wooden artefacts: a comparison with radiography in the case of a coffin lid from ancient Egypt, International Journal of Conservation Science, Volume 7 (2) (2016), pp. 935-944.
[20] C. Calza, R. P. Freitas, A. Brancaglion Jr., R. T. Lopes, Analysis of artifacts from ancient Egypt using an EDXRF portable system, INAC 2011, Belo Horizonte, MG, Brazil, October 24-28 (2011). ISBN 978-85-99141-04-5
[21] N. M. Badr, M. Fouad Ali, N. M. N. El Hadidi, G. Abdel Naeem, Identification of materials used in a wooden coffin lid covered with composite layers dating back to the Ptolemaic period in Egypt, Conservar Património 29 (2018), pp. 11-24. doi: 10.14568/cp2017029
[22] M. Abdallah, H. M. Kamal, A. Abdrabou, Investigation, preservation and restoration processes of an ancient Egyptian wooden offering table, International Journal of Conservation Science, Volume 7 (4) (2016), pp. 1047-1064.
[23] A. Abdrabou, M. Abdallah, H. M. Kamal, Scientific investigation by technical photography, OM, ESEM, XRF, XRD and FTIR of an ancient Egyptian polychrome wooden coffin, Conservar Património 26 (2017), pp. 51-63. doi: 10.14568/cp2017008
[24] H. A. M. Afifi, M. A. Etman, H. A. M. Abdrabbo, H. M. Kamal, Typological study and non-destructive analytical approaches used for dating a polychrome gilded wooden statuette at the Grand Egyptian Museum, Scientific Culture, Vol. 6 (3) (2020), pp. 69-83. doi: 10.5281/zenodo.4007568
[25] J. Kahl, A. M. Sbriglio, P. Del Vesco, M. Trapani, Asyut. The excavations of the Italian archaeological mission (1906-1913), Studi del Museo Egizio 1, Ed. Franco Cosimo Panini, Modena, 2019. ISBN 978-88-570-1577-4
[26] P. Del Vesco, Le tombe di Assiut, in P. Del Vesco and B. Moiso, Missione Egitto 1903-1920: l'avventura archeologica M.A.I. raccontata, Ed. Franco Cosimo Panini, Modena, 2017, pp. 293-301. ISBN 8857012654
[27] B. Moiso, P. Del Vesco, B.
Hucks, L'arrivo degli oggetti al museo e i primi allestimenti, in P. Del Vesco and B. Moiso, Missione Egitto 1903-1920. L'avventura archeologica M.A.I. raccontata, Modena, 2017, pp. 325. ISBN 8857012654
[28] G. Verri, The use and distribution of Egyptian blue: a study by visible-induced luminescence imaging, in K. Uprichard & A. Middleton, The Nebamun Wall Paintings, London: Archetype Publications, pp. 41-50, 2008. ISBN 9781904982142 / 190498214X
[29] A. Re, F. Albertin, C. Bortolin, R. Brancaccio, P. Buscaglia, J. Corsi, G. Cotto, G. Dughera, E. Durisi, W. Ferrarese, M. Gambaccini, A. Giovagnoli, N. Grassi, A. Lo Giudice, P. Mereu, G. Mila, M. Nervo, N. Pastrone, F. Petrucci, F. Prino, L. Ramello, M. Ravera, C. Ricci, A. Romero, R. Sacchi, A. Staiano, L. Visca, L. Zamprotta, Results of the Italian NEU_ART project, IOP Conference Series: Materials Science and Engineering 37 (2012). doi: 10.1088/1757-899x/37/1/012007
[30] M. Nervo, Il progetto NEU_ART. Studi e applicazioni / Neutron and X-ray tomography and imaging for cultural heritage, Cronache, 4, Editris, Torino, 2013. ISBN 9788889853344
[31] A. Lo Giudice, J. Corsi, G. Cotto, G. Mila, A. Re, C. Ricci, R. Sacchi, L. Visca, L. Zamprotta, N. Pastrone, F. Albertin, R. Brancaccio, G. Dughera, P. Mereu, A. Staiano, M. Nervo, P. Buscaglia, A. Giovagnoli, N. Grassi, A new digital radiography system for paintings on canvas and on wooden panels of large dimensions, 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC 2017) Proceedings (2017). doi: 10.1109/i2mtc.2017.7969985
[32] A. Re, F. Albertin, C. Avataneo, R. Brancaccio, J. Corsi, G. Cotto, S. De Blasi, G. Dughera, E. Durisi, W. Ferrarese, A. Giovagnoli, N. Grassi, A. Lo Giudice, P. Mereu, G. Mila, M. Nervo, N. Pastrone, F. Prino, L. Ramello, M. Ravera, C. Ricci, A. Romero, R. Sacchi, A. Staiano, L. Visca, L.
Zamprotta, X-ray tomography of large wooden artworks: the case study of "Doppio corpo" by Pietro Piffetti, Heritage Science 2 (1) (2014). doi: 10.1186/s40494-014-0019-9
[33] A. Re, J. Corsi, M. Demmelbauer, M. Martini, G. Mila, C. Ricci, X-ray tomography of a soil block: a useful tool for the restoration of archaeological finds, Heritage Science 3 (1) (2015). doi: 10.1186/s40494-015-0033-6
[34] A. C. Kak, M. Slaney, Principles of Computerized Tomographic Imaging, IEEE Press, 1987, Chapter 3, pp. 49-112. ISBN 0-87942-198-3
[35] L. Vigorelli, A. Lo Giudice, T. Cavaleri, P. Buscaglia, M. Nervo, P. Del Vesco, M. Borla, S. Grassini, A. Re, Upgrade of the X-ray imaging set-up at CCR "La Venaria Reale": the case study of an Egyptian wooden statuette, Proceedings of the 2020 IMEKO TC-4 International Conference on Metrology for Archaeology and Cultural Heritage, Trento, Italy, October 22-24 (2020), pp. 623-628. ISBN 978-92-990084-9-2
[36] R. Newman, M. Serpico and R. White, Adhesives and binders, in: P. T. Nicholson and I. Shaw, Ancient Egyptian Materials, Cambridge University Press, 2000, pp. 475-493. ISBN 0-521-45257
[37] E. F. Marocchetti, La scultura in legno al Museo Egizio di Torino. Problemi di conservazione e restauro, Materiali e Strutture. Problemi di conservazione. Sulla scultura, Nuova Serie, Anno VI, Numero 11-12, pp.
9-31, 2008. ISSN 1121-2373

Continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach

Acta IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, pp. 239-248

Pietro Cipresso1,2, Silvia Serino3, Francesca Borghesi1,2, Gennaro Tartarisco4, Giuseppe Riva2,3, Giovanni Pioggia4, Andrea Gaggioli2,3
1 Department of Psychology, University of Turin, Turin, Italy
2 Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy
3 Università Cattolica del Sacro Cuore, Milan, Italy
4 Institute for Biomedical Research and Innovation (IRIB), National Research Council of Italy (CNR), Messina, Italy

Section: Research Paper
Keywords: psychological stress; psychophysiology; psychometrics; signal processing; assessment; experience sampling methods; heart rate variability
Citation: Pietro Cipresso, Silvia Serino, Francesca Borghesi, Gennaro Tartarisco, Giuseppe Riva, Giovanni Pioggia, Andrea Gaggioli, Continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach, Acta IMEKO, vol. 10, no.
4, article 36, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-36
Section Editors: Carlo Carobbi, University of Florence, Italy; Gian Marco Revel, Università Politecnica delle Marche, Italy; Nicola Giaquinto, Politecnico di Bari, Italy
Received October 11, 2021; in final form December 20, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the European funded project 'INTERSTRESS - Interreality in the management and treatment of stress-related disorders', grant number: FP7-247685.
Corresponding author: Pietro Cipresso, e-mail: p.cipresso@auxologico.it

Abstract

Developing automatic methods to measure psychological stress in everyday life has become an important research challenge. Here, we describe the design and implementation of a personalized mobile system for the detection of psychological stress episodes based on heart rate variability (HRV) indices. The system's architecture consists of three main modules: a mobile acquisition module; an analysis-decision module; and a visualization-reporting module. Once the stress level is calculated by the mobile system, the visualization-reporting module of the mobile application displays the current stress level of the user. We carried out an experience-sampling study, involving 15 participants monitored longitudinally, for a total of 561 ECGs analyzed, to select the HRV features that best correlate with self-reported stress levels. Drawing on these results, a personalized classification system is able to automatically detect stress events from those HRV features, after a training phase in which the system learns from the subjective responses given by the user. Finally, the performance of the classification task was evaluated on the empirical dataset using a leave-one-out cross-validation process. Preliminary findings suggest that incorporating self-reported psychological data in the system's knowledge base allows for a more accurate and personalized definition of the stress response measured by HRV indices.

1. Introduction

It is well known that long-term exposure to stress can lead to immunodepression and dysregulation of the immune response, thus significantly enhancing the risk of contracting a disease or altering its course. However, increased symptomatology is associated not only with severe stressors (infrequent major life events), but also with minor daily stressors (i.e. "microstressors") that are ignored or poorly managed [1]-[4]. Defining effective techniques to measure daily stressful episodes under ecological conditions has thus been identified as an important research objective. To address this challenge, several research groups have started investigating the use of wearable sensor solutions to infer stress from continuous biosignal measurements [5] (for a review, see [6]). Such systems integrate sensors together with on-body signal conditioning and pre-processing, as well as the management of energy consumption and wireless communication. Although preliminary testing of these systems has yielded encouraging results [7], a major limitation of current solutions is that they mostly rely on complex sensor architectures and use labelling methods that are often based on the evaluation of human coders [8]. Other authors have proposed heart rate variability (HRV) analysis [9]-[14] as a potentially effective approach for monitoring stress in mobile settings [15]-[17].
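The three-module architecture summarized in the abstract (acquisition, analysis-decision, visualization-reporting) can be sketched as a simple pipeline. The class names, the canned RR series, and the threshold rule below are invented placeholders, not the authors' implementation; the placeholder rule merely stands in for the trained classifier described later.

```python
class AcquisitionModule:
    """Mobile acquisition: yields inter-beat (RR) intervals in ms.
    A canned series stands in for the wearable ECG data stream."""
    def __init__(self, rr_series):
        self.rr_series = rr_series

    def read(self):
        return list(self.rr_series)


class AnalysisDecisionModule:
    """Analysis-decision: maps RR data to a stress level. The RMSSD
    threshold is a placeholder for the personalized classifier."""
    def __init__(self, rmssd_threshold=25.0):
        self.th = rmssd_threshold

    def stress_level(self, rr):
        diffs = [b - a for a, b in zip(rr, rr[1:])]
        rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
        # Lower short-term variability is treated as higher stress here.
        return 'high' if rmssd < self.th else 'low'


class VisualizationReportingModule:
    """Visualization-reporting: renders the result for the user."""
    def report(self, level):
        return f'Current stress level: {level}'


rr = AcquisitionModule([812, 798, 825, 780, 810]).read()
msg = VisualizationReportingModule().report(AnalysisDecisionModule().stress_level(rr))
```

The point of the sketch is the separation of concerns: each module can be replaced (a real sensor driver, the trained model, a mobile UI) without touching the other two.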
Indeed, HRV indices can be used to estimate the activity of the autonomic nervous system (ANS) in relation to affective and cognitive states, including mental stress [18]-[20]. However, real-time recognition of stress from HRV measures requires appropriate strategies to i) detect HRV changes using minimally invasive ECG equipment; ii) relate these changes to mental stress levels; and iii) control for the potential confounding effects of physical activity. In the following, we describe how we addressed these issues in designing and implementing a personalized mobile system for the automatic recognition of psychological stress based on HRV indices. The original contribution of the proposed method is that, to the best of our knowledge, this is the first approach that integrates the detection of HRV features with the ground truth of the subjective perception of stressful events.

2. Measuring psychological stress in naturalistic environments

According to Cohen et al. [21], stress is a biopsychosocial phenomenon in which "environmental demands tax or exceed the adaptive capacity of an organism, resulting in psychological and biological changes that may place a person at risk for disease" (p. 3). This conceptualization suggests that measuring stress requires considering not only environmental demands, but also appraisals of such demands, as well as the physiological systems that come into play. Consistent with this definition, two main approaches have been introduced to assess psychological stress in naturalistic conditions: the first is based on self-reporting of participants' subjective experiences and perception of stressful events; the second is based on sensing physiological signals associated with the stress response. Below, we describe these procedures, along with a discussion of their strengths and limitations.
2.1 Subjective psychological measures

The experience sampling method (ESM), also known as ecological momentary assessment (EMA), is a naturalistic observation technique that captures participants' thoughts, feelings, and behaviours at multiple times across a range of situations as they occur in the natural environment [22]. In a typical ESM study, participants are asked to fill out a form when prompted by an acoustic signal. Thanks to repeated sampling, a number of surveys are collected from each participant throughout the day, providing an ecologically valid and highly detailed description of the subjective quality of experience. Ecological validity is a strong requirement in psychometrics, since it expresses how well a task or test predicts behaviour in a real-world setting. ESM has been applied to study a wide range of behaviours and experiences, including daily stress [23], [24]. However, this procedure has high costs and places a significant burden on the participant, thus limiting its practical applicability as a stress monitoring technique [10], [25]. A less expensive and less time-consuming approach for assessing experience and affect in everyday life is the day reconstruction method (DRM), developed by Kahneman and colleagues [26]. It involves the retrospective recall of the study period as a continuous sequence of episodes, which are rated on a series of affect scales. DRM reports have been validated against experience sampling data, showing that this technique identifies changes in affect over the course of the day with almost the same accuracy as ESM [26], [27]. However, since DRM respondents are asked to reconstruct the previous day by completing a structured self-administered questionnaire, this method is potentially susceptible to recall biases.
Furthermore, it has been suggested that using retrospective measures as proxies for actual experience may yield weaker or inconsistent results, particularly when tested in connection with biological pathways [28].

2.2 Objective physiological measures

An alternative strategy for assessing stress in everyday situations is based on the analysis of the physiological correlates of this experience. Psychological stressors are linked with the activation of two main neuro-physiological pathways involved in the maintenance of homeostasis: the hypothalamic-pituitary-adrenocortical (HPA) axis and the sympathetic-adrenal medullary (SAM) system. As concerns the first system, one of the most investigated biomarkers is salivary cortisol, which, together with the catecholamines, is one of the end products of HPA activation [20], [23], [29], [30]. However, due to significant between- and within-individual variation in the diurnal secretion of cortisol, measuring the magnitude of the cortisol response is not an easy procedure and requires advanced statistical approaches such as multilevel models [31], [32]. With respect to the SAM system, HRV, defined as the variation over time of the period between consecutive heartbeats, is increasingly regarded as a potentially convenient and non-invasive marker of the autonomic activation associated with psychological stressors [33]. The normal variability in heart rate (HR) is controlled by the balancing activation of the (acceleratory) sympathetic and the (deceleratory) parasympathetic branches of the autonomic nervous system. Under stressful events or contexts, however, there is a shift towards increased sympathetic control and reduced vagal tone, which is associated with decreased HRV [19]. On the other hand, higher HRV has been associated with the availability of context- and goal-based control of emotions [34].
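Time-domain HRV indices of the kind discussed here are straightforward to compute from a series of inter-beat (RR) intervals. The sketch below shows two standard indices, SDNN and RMSSD (the latter commonly taken to reflect vagal modulation); it is a generic illustration with a synthetic RR series, not the specific feature set used in the study, which is detailed in the following sections.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of the RR (NN) intervals, in ms: a global HRV index."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR differences, in ms: a short-term
    index dominated by parasympathetic (vagal) activity, which tends to
    decrease under the sympathetic shift associated with stress."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 798, 825, 780, 810, 795, 820]  # synthetic RR series, in ms
indices = {'SDNN': sdnn(rr), 'RMSSD': rmssd(rr)}
```

In practice such indices are computed over short sliding windows of the RR series, so that each window yields one feature vector per sampled episode.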
Based on this preliminary evidence, several authors have experimented with wearable heart monitors for the identification of stress levels from HRV, in both healthy and clinical populations. For example, Kim and colleagues [35] used HRV patterns to discriminate between subjects reporting high and low levels of stress during the day, with an overall accuracy of 66.1 %. In a similar study, Melillo et al. [16] compared within-subject variations of short-term HRV measures using short-term ECG recordings in students undergoing a university examination. By applying linear discriminant analysis to nonlinear features of HRV for automatic stress detection, these authors obtained a total classification accuracy of 90 %. Kimhy et al. [17] investigated the relationship between stress and cardiac autonomic regulation in a sample of psychotic patients, using experience sampling in combination with cardiac monitoring. They found that momentary increases in stress were significantly associated with increases in sympathovagal balance and parasympathetic withdrawal. In addition to studies that have examined the association between HRV and stress during waking hours, other recent research has proposed the use of HRV patterns during sleep as a supplement to the analysis of subjective assessments and voice messages collected during the workday [36], with encouraging, albeit preliminary, results.

2.3 Towards an integrated approach for personalized stress recognition in mobile settings

As previously discussed, a fundamental issue in the measurement of stress in everyday life is that this response is idiosyncratic, because it depends on the individual's perception of challenges and on the skills that he/she can use to face those challenges. As a consequence, any approach aiming to infer stress levels from "honest" physiological signals should not overlook the role played by the subjective appraisal of the situation.
furthermore, since hrv values are characterized by high inter-individual variability, it is important that the system is tailored to the individual's characteristics [37], [38]. one possible approach to developing adaptive systems for stress recognition has been suggested by morris and guilak [37]. the strategy proposed by these authors involves identifying the subject's baseline and stress threshold in the lab through elicitation of sympathetic and parasympathetic responses, and then using this information to differentiate between stress and non-stress in daily life. a first attempt to implement this approach was made by cinaz et al. [38]. these authors measured participants' sympathetic and parasympathetic responses during three different levels of mental workload (low, medium, and high) in a controlled laboratory setting. then, they investigated whether the data collected in this calibration session were appropriate for discriminating the corresponding workload levels that occurred during office work. to this end, the individual hrv responses for each workload level were used to train models, and the trained models were tested on data collected while the subjects performed normal office work, using a mobile ecg logger. afterwards, a multiple regression analysis was applied to model the relationship between relevant hrv features and the subjective ratings of perceived workload: the resulting predictions were correct for six out of the seven subjects. in the present work, we propose an experience-sampling approach for incorporating subjective knowledge in the classification of psychological stress from hrv indices (table 1). the methodology consists of three main steps. in the first, experimental phase, we carried out an experience-sampling study to select the hrv features which best correlate with self-reported stress levels (section 3).
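the baseline-and-threshold idea of morris and guilak can be sketched as follows; the single-feature z-score rule, the class name and the threshold value are our illustration, not the cited authors' implementation:

```python
import numpy as np

class PersonalBaseline:
    """per-subject calibration: learn the baseline of one hrv feature
    (e.g. rmssd) in a controlled lab session, then flag field samples
    that fall more than z_threshold standard deviations below it."""
    def __init__(self, z_threshold=1.5):
        self.z_threshold = z_threshold
        self.mean = None
        self.sd = None

    def calibrate(self, baseline_values):
        v = np.asarray(baseline_values, dtype=float)
        self.mean, self.sd = v.mean(), v.std(ddof=1)

    def is_stress(self, value):
        # a markedly lower rmssd than baseline suggests reduced vagal tone
        z = (value - self.mean) / self.sd
        return bool(z < -self.z_threshold)

cal = PersonalBaseline()
cal.calibrate([42.0, 45.0, 40.0, 44.0, 43.0])  # lab baseline rmssd values (ms)
```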
drawing on these results, a personalized classification system was developed which is able to automatically detect stress events from those hrv features, after a training phase in which the system learns from the subjective responses given by the user (section 4). in the final step, the performance of the classification task was evaluated using a leave-one-out cross-validation process (section 5).

3. method
the objectives of this experiment were two-fold: i) to select a subset of hrv features which best correlate with self-reported stress levels collected during everyday activities; ii) to select a subset of self-reported questions about perceived stress levels which can be used as ground truth to train the final system.

3.1 participants
participants were 15 healthy subjects (8 males and 7 females, mean age = 23.33 years, st. dev. = 1.49), monitored longitudinally, for a total of 561 ecg recordings analyzed, in order to select the hrv features which best correlate with self-reported stress levels. participants were recruited through opportunistic sampling. participants filled in a questionnaire assessing factors that, in the opinion of the investigators, might interfere with the measures being assessed (i.e., caffeine consumption, smoking, alcohol consumption, exercise, hours of sleep, disease states, and medications). written informed consent was obtained from all subjects matching the inclusion criteria (age between 18 and 65 years, generally healthy, absence of major medical conditions, and completion of informed consent).

3.2 materials
data were collected through psychlog [39], a mobile experience sampling platform designed for research in mental health, which allows the simultaneous collection of psychological, physiological (ecg) and motion activity data. psychological data are collected from surveys that can be easily customized by the experimenter. for the purpose of this study, we used the italian adaptation of the esm questionnaire applied by jacobs et al.
[40] for studying the immediate effects of stressors on mood. the survey includes open-ended and closed-ended questions investigating thoughts, current context (activity, persons present, and location), appraisals of the current situation, and mood. all self-assessments were rated on 7-point likert scales. hr and activity data are acquired from a wireless electrocardiogram (shimmer research™) equipped with a three-axial accelerometer. the wearable sensor platform includes a board that allows the transduction, amplification and pre-processing of raw sensor signals, and a bluetooth transmitter to wirelessly send the processed data. the unit is mounted on a soft-textile chest strap designed to seamlessly adapt to the user's body shape, bringing full freedom of movement. sensed data are transmitted to the mobile phone's bluetooth receiver and gathered by the psychlog computing module, which stores and processes the signals for the extraction of relevant features.

3.3 design and procedure
participants received a short briefing about the objective of the experiment and signed the informed consent form. then, they were provided with the mobile phone with the pre-installed psychlog application, the wearable ecg and accelerometer sensor, and a user manual including the experimental instructions. the application was pre-programmed to collect data over 7 consecutive days, at random intervals during waking hours. at the end of the experiment, participants returned both the phone and the sensors to the laboratory staff. afterwards, participants were debriefed, thanked for their participation, and dismissed (figure 1).

3.4 data analysis
following the procedure suggested by jacobs et al. [40], three different psychological stress measures were computed in order to identify the stressful qualities of daily life experiences.

table 1. feature extraction from electrocardiogram (ecg).
measure: description
rr mean: mean of all rr intervals
avnn: average of all nn intervals
sdnn: standard deviation of all nn intervals
rmssd: square root of the mean of the squares of differences between adjacent nn intervals
nn50: number of differences between adjacent nn intervals that are greater than 50 ms
totpwr: total spectral power of all nn intervals up to 0.04 hz
lf: total spectral power of all nn intervals between 0.04 hz and 0.15 hz
hf: total spectral power of all nn intervals between 0.15 hz and 0.4 hz
lf/hf: ratio of low to high frequency power (sympathovagal balance)

ongoing “activity-related stress” (ars) was defined as the mean score of the two items ‘‘i would rather be doing something else’’ and ‘‘this activity requires effort’’ (cronbach’s alpha = 0.72). to evaluate social stress, participants rated the social context on two 7-point likert scales, ‘‘i don’t like the present company’’ and ‘‘i would rather be alone’’; the “social stress scale” (ss) resulted from the mean of these ratings (cronbach’s alpha = 0.59). finally, for “event-related stress” (evs), subjects reported the most important event that had happened since the previous beep, whether or not it was still ongoing. subjects then rated this event on a 7-point bipolar scale (from -3 very unpleasant to 3 very pleasant, with 0 indicating a neutral event). all positive responses were recoded as 0, and the negative responses were recoded so that higher scores were associated with more unpleasant and potentially stressful events (0 neutral, 3 very unpleasant). in addition to those scales, an item (not included in the original survey by jacobs et al. [40]) asked participants to rate their perceived level of stress on a 10-point likert scale. this item was included as a global subjective measure of stress. given the repeated sampling, likert-type scale data were standardized (mean = 0; st. dev. = 1) on each participant’s weekly mean for every variable before performing the analyses. esm data can be aggregated at the report level (the unit of analysis is the individual diary entry) or at the subject level (the unit of analysis is the participant). in the present study, most of the analyses were conducted using the subject-level aggregation, because this approach avoids problems related to unequal weights and produces more conservative significance tests [41].

3.5 selection of psycho-physiological features
to analyse the hrv features, the qrs peaks and rr interval time series recorded and saved by the psychlog application were exported and further processed with matlab (version 7.10) in order to compute a set of hrv indexes. to this end, the ecg signal was first processed for artifact correction, and then a fast fourier transform was used to compute the power spectrum in the lf (0.04–0.15 hz) and hf (0.15–0.40 hz) bands [42]–[45]. to estimate the effect of the hrv indexes (independent variables) on stress level (dependent variable), we applied hierarchical linear analysis, an alternative to multiple regression which is more suitable for our nested data. indeed, the hierarchical structure of the data makes traditional forms of analysis unsuitable, since within-subject data are collected at many points in time during each day, across several days. moreover, traditional repeated-measures designs require the same number of observations for each subject and no missing data. finally, hierarchical linear analysis allows further dependencies existing in the data to be taken into account.

4. results
out of 561 “beeps”, participants filled in 541 reports (96 %), of which 456 were included in the analysis (84 %). a total of 561 ecg samplings were recorded (100 %), of which 374 were included in the analysis (69 %). table 2 provides the correlations among the stress measures described above. as can be seen from table 2, all the scales measuring stress (stress, ars, ss, and ers) are significantly correlated with each other. in the hierarchical linear analysis, both aggregation levels (report-level and subject-level) were considered in the model. results indicated a statistically significant hierarchical regression model for rmssd (beta -.5350813; st. dev.: .2151596; p < .013), nn50 (beta -1.152351; st. dev.: .5322348; p < .030), and lf/hf (beta 1.176422; st. dev.: .5386275; p < .029) (table 3). the rmssd measure is preferred to nn50 because it has better statistical properties [42], [43]. the findings of this esm experiment allowed us to identify a subset of hrv features which showed the best correlations with self-reported psychological stress levels. in the next section, we describe how these psycho-physiological features were implemented in a personalized stress monitoring system, which was designed to learn from the individual’s subjective assessments of stressful situations and use this knowledge to detect stress events.

figure 1. schematic representation of experimental design.

table 2. correlations among psychological self-reported measures.
                 zstress    zars       zss        ers
zstress  r       1          .312**     .215**     .213**
         sig.               < .001     < .001     < .001
         n       540        534        528        456
zars     r       .312**     1          .393**     .146**
         sig.    < .001                < .001     .002
         n       534        535        529        457
zss      r       .215**     .393**     1          .188**
         sig.    < .001     < .001                < .001
         n       528        529        529        457
** correlation is significant at the 0.01 level (2-tailed).

table 3. summary of hierarchical regression analysis for hrv variables predicting global perceived stress (number of observations = 374).
variable   b        se b     z        p > |z|   95 % ci
hr         0.51     0.22     2.37     0.02      0.09, 0.94
rmssd      -0.53    0.21     -2.49    0.01      -0.96, -0.11
nn50       -1.15    0.53     -2.17    0.03      -2.20, -0.11
lf         0.62     0.31     2.02     0.04      0.02, 1.23
lf/hf      1.18     0.54     2.18     0.03      0.12, 2.23

the personalized stress monitoring system includes three main components: a) a mobile acquisition/feedback module (for the collection of psycho-physiological and activity data); b) a remote analysis-decision module (for the analysis and classification of stress levels); c) a mobile visualization-reporting module (for the reporting of detected stress events).

4.1 mobile acquisition/feedback module
the mobile acquisition/feedback component consists of two elements: a wireless electronic module coupled with a commercial chest band for collecting ecg and motion data; and a smartphone application for the collection of psychological data, the transmission of data to the analysis-decision module and the visualization of the stress events detected by the system.

4.1.1 ecg and motion activity data acquisition
the electronic acquisition platform, produced by shimmer research™, allows the transduction, amplification and pre-processing of raw ecg signals, and the transmission of the elaborated data via bluetooth to the smartphone. the unit is mounted on a soft textile chest strap (model micoach™ by adidas) designed to seamlessly adapt to the user’s body shape, bringing full freedom of movement.
a smartphone application, running on the android operating system, was developed for the preliminary elaboration of the sensor data and remote database (db) archiving. the application processes the ecg and accelerometer signals for the extraction of three relevant parameters: hr, an activity index, and the rr intervals for the further hrv analysis provided by the analysis-decision module. the wearable electronic board collects raw sensor data with on-body signal conditioning. the ecg signal is sampled at 256 hz and sent to the smartphone with the tri-axial acceleration data (ax, ay, az). the smartphone application pre-processes the user’s physiological signal through a stepwise filtering stage aimed at removing typical ecg artifacts and interferences. in particular, baseline wander due to body movements and respiration artifacts is removed using a cubic spline 3rd-order interpolation between the fiducial isoelectric points of the ecg [46]. the power-line interference and muscular noise are removed using an infinite impulse response (iir) notch filter at 50 hz and an iir low-pass filter at 40 hz. then, the pan-tompkins method is applied [47] to detect the qrs complex and to extract hr and the time series of non-uniform r–r intervals. since the variation of the ecg parameters is significantly affected by the activity performed by the user [48], [49], the signal magnitude area [50] is also extracted from the three-axis accelerometer signals in order to measure motion activity levels.

4.1.2 psychological data acquisition
the acquisition of psychological data is managed by an electronic survey, which is displayed at random times during the day on the application’s screen. the survey includes a subset of the likert-type items selected from the esm by jacobs et al. [40] described in section 3.2.
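the filtering and r-peak detection stage described in section 4.1.1 can be approximated in a few lines; here scipy filters stand in for the system's iir stages, scipy's find_peaks replaces the pan-tompkins detector, and the baseline-wander spline step is omitted for brevity:

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt, find_peaks

FS = 256  # sampling rate (hz) used by the wearable unit

def preprocess_ecg(ecg):
    """remove power-line interference and muscular noise, then detect r peaks.
    find_peaks is a simplified stand-in for the pan-tompkins detector."""
    b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)          # 50 hz notch filter
    x = filtfilt(b, a, ecg)
    b, a = butter(4, 40.0, btype="low", fs=FS)       # 40 hz low-pass filter
    x = filtfilt(b, a, x)
    # r peaks: at least 0.3 s apart and above half the maximum amplitude
    peaks, _ = find_peaks(x, distance=int(0.3 * FS), height=0.5 * np.max(x))
    rr_ms = np.diff(peaks) / FS * 1000.0             # non-uniform rr intervals (ms)
    return x, peaks, rr_ms
```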
only the esm items with the highest correlation with the hrv features were included in the final survey: this choice was made in order to reduce as much as possible the burden on the user during the training phase of the system. the final selected items were (listed in the same sequence as in the final survey):
1. what is your stress level? (min: 1; max: 10)
2. this activity is a challenge (min: 1; max: 7)
3. this is something i'm good at (min: 1; max: 7)
4. i would rather be doing something else (min: 1; max: 7)
5. it takes me effort (min: 1; max: 7)
the average time for the completion of a full questionnaire is about 10-15 seconds. during a typical training week, 4-5 surveys per day are collected.

4.2 analysis-decision module (adm)
the adm is composed of two main modules: the feature extraction module and the classification module.

4.2.1 data exchange
the wearable sensors monitor patients and transfer data to web servers, using a smartphone to collect and pre-elaborate the data. in addition, the application allows users to track their own stress levels through a graphical representation (see section 4.3). the chest band and its electronics act as the master that initiates the bluetooth communication with the android phone. the bluetooth protocol has a range of approximately 20 m and provides secure data transmission. the communication between the phone and the central db is through wi-fi or 3g networks. the decision support system (dss) makes use of the information transmitted by the smartphone and stored within the central db. for each subject, a user profile is created which maintains all the historical data. in particular, the remote db stores the physiological data (hr, rr intervals and activity index) and the corresponding stress values extracted from the psychological surveys. given the user id, the timestamp, and the session, the adm retrieves from the db the physiological data together with the questionnaires filled in by the user about the perceived stress level.
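the per-session record exchanged between the smartphone and the central db can be pictured with a schema like the following; the field names are our illustration, as the paper does not specify the db layout:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SessionRecord:
    """one monitoring session as stored in the remote db (illustrative schema)."""
    user_id: str
    timestamp: float                      # unix time of the session start
    session_id: int
    hr: List[float]                       # heart rate samples (bpm)
    rr_ms: List[float]                    # rr intervals (ms) for hrv analysis
    activity_index: List[float]           # motion activity level per window
    survey_stress: Optional[int] = None   # 1-10 self-report; absent after training

    def is_labelled(self) -> bool:
        # labelled records form the training set; unlabelled ones are classified
        return self.survey_stress is not None
```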
from these data, features are extracted and used to train the classification module (training phase). after training, the adm acts as an expert system and provides the corresponding user with the stress level automatically inferred by the trained classification module (testing phase). the adm acts asynchronously with respect to the sensor data collection process. at fixed time intervals, the new sensor data belonging to each subject are collected and a feature extraction process takes place in order to create a structured dataset. the classification module is trained and validated with the most relevant features for automatic stress assessment.

4.2.2 feature extraction
once the data are sent to the remote server, the parts of the rr interval signals containing artifacts are discarded and the hrv features are extracted, according to the traditional approach proposed by the international guidelines on hrv [42], [43], to estimate cardiac vagal and sympathetic activities as markers of the autonomic interaction, using a data exchange module (figure 2).

figure 2. data exchange module.

in the time domain, statistical indices were extracted from the rr time series, such as the mean (mrr), the standard deviation (σrr), the root mean square of successive differences of intervals (rmssd), the difference between the longest and shortest rr intervals, and the number of successive differences of intervals which differ by more than 50 ms (pnn50%, expressed as a percentage of the total number of heartbeats analysed). in the frequency domain, parameters were extracted for each frequency band, low frequency (lf: 0.03-0.15 hz) and high frequency (hf: 0.15-0.40 hz), including the absolute powers, the peak frequencies (max lf and max hf) and the lf/hf power ratio, which measures the global sympathetic-parasympathetic equilibrium.
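a minimal sketch of this frequency-domain step: the irregularly spaced rr series is interpolated onto a uniform grid and the band powers are integrated. welch's periodogram is used here for simplicity, in place of the burg (ar) estimator adopted by the system:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_powers(rr_ms, fs_resample=4.0):
    """lf and hf band powers of an rr interval series (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    tachogram = np.interp(grid, t, rr)               # uniformly resampled rr series
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs_resample,
                   nperseg=min(256, grid.size))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df    # low frequency power
    hf = pxx[(f >= 0.15) & (f <= 0.40)].sum() * df   # high frequency power
    return lf, hf, lf / hf                           # lf/hf: sympathovagal balance
```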
in particular, when this ratio is above the threshold, the curve reveals sympathetic dominance; below the threshold, the parasympathetic influence is dominant. these features are extracted using an estimation of the power spectral density (psd) according to the burg spectral estimation [51], where the optimal order p was estimated according to the akaike information criterion [52]. the power of each band is normalized with respect to the total power of the spectrum. a nonlinear parameter was also extracted, i.e. the poincaré plot, a useful tool to investigate and combine the differences of the cardiac rhythms during the performed tasks. it is a graphical representation created by plotting each rr(n) on the x-axis versus rr(n+1) on the y-axis. the data are then fitted using an ellipse projected along the line of identity, extracting the two standard deviations (sd) respectively [53], as shown in figure 3.

4.2.3 classification module
the classification module is based on machine learning (ml) models, such as artificial neural networks (ann), based on inductive inference [54]. we decided to use an ml model to cope with the non-linear and complex relations between the monitored parameters and the stress level prediction (table 4). artificial neural networks (anns) are particularly suited for solving such problems. they are biologically inspired computational models, which consist of a network composed of artificial neurons. for the implementation of ml for stress level detection a number of steps are needed:
• initialization of the parameters of the implemented ml model.
• training: the model is trained with the features extracted and selected, to adapt itself to classify the given inputs. the loaded features, along with the self-reported stress levels, generate the training set.
the self-reported stress levels are collected during the training phase, in which the participant is prompted at random times during the day with a survey including five items (see section 4.1.2) that allow the user to self-evaluate, on a likert scale, the perceived level of stress, following the protocol described in section 3.3. by matching this psychological “ground truth” with the sensor data, the synaptic weights of the network’s internal connections are modified in order to force the output to minimize the error with respect to the presented example (in this case the stress level obtained from the survey). in this step the architecture of the model and its hyper-parameters are optimized. the examples labelled with the stress level are used to create a personalized stress prediction model.
• validation: the model, once adequately trained, is able to classify the given input and to present a consequent output value: the value obtained is the inferred stress level. it is validated in order to guarantee good predictive properties.
during the fine-tuning of the analysis-decision module design, we decided to develop a self-organizing map (som) integrated with fuzzy rules. the som is a network structure which provides a topological mapping [55]-[57]. the main difference with respect to supervised artificial neural networks is that it is based on unsupervised learning. it is composed of a two-dimensional layer in which all the inputs are connected to each node in the network (figure 4). a topographic map is autonomously organized by a cyclic process of comparing input patterns to the vectors at each node. the node vector to which the inputs match is selectively optimized to represent an average of the training data. then all the training data are represented by the node vectors of the map. starting with a randomly organized set of nodes, and proceeding to the creation of a feature map representing the prototypes of the input patterns, the training procedure is as follows: 1.
initialization of the weights wij (1 ≤ i ≤ nf, 1 ≤ j ≤ m) to small random values, where nf is the total number of selected features (inputs) and m is the total number of nodes in the map. set the initial radius of the neighbourhood around node j as nj(t).
2. present the inputs x1(t), x2(t), …, xnf(t), where xi(t) is the ith input to node j at time t.
3. calculate the distance dj between the inputs and node j by the euclidean distance, to determine the winner j* which minimizes dj:
dj = ||wj(t) − x(t)|| (1)

table 4. features extracted and analysed from the signal.
no.  feature    description                                                            signal
1    mrr        mean rr interval                                                       rr
2    σrr        standard deviation of rr intervals                                     rr
3    rmssd      root mean square of successive differences of intervals                rr
4    pnn50%     number of successive differences of intervals differing by > 50 ms     rr
5    lf         spectral estimation of low frequency power (0.03-0.15 hz)              rr
6    max lf     max value of low frequency power                                       rr
7    hf         spectral estimation of high frequency power (0.15-0.40 hz)             rr
8    max hf     max value of high frequency power                                      rr
9    lf/hf      spectral estimation of the lf/hf power ratio                           rr
10   sd1, sd2   standard deviations of the poincaré plot                               rr
11   sma        signal magnitude area                                                  acc. x, y, z

figure 3. sd1 and sd2 of the poincaré plot observed for a portion of the rr intervals analysed (sd1 = 30.3, sd2 = 71).

every node is examined to calculate which node’s weights are most like the input vector. the winning node is commonly known as the best matching unit (bmu). the radius of the neighborhood of the bmu is then calculated. this is a value that starts large, but diminishes at each timestep. any nodes found within this radius are deemed to be inside the bmu’s neighborhood. 4.
update the weights wij of the winning neuron j* and of its neighborhood neurons nj*(t) at time t, for the input vector x, according to the following equation (2), to make them more like the input vector:
wij(t) = wij(t − 1) + α(t) [x(t) − wij(t − 1)] (2)
where α(t) is the learning rate. both α(t) and nj*(t) are controlled so as to decrease with t. if the process reaches the maximum number of iterations, stop; otherwise, go to (2). at the end of the training process, for each input variable xi we generated the fuzzy membership functions using triangular functions with the centre in the corresponding weight wij of the map and the corresponding variance vij, where i is the ith input and j represents the jth node of the map. the centers of the triangular membership functions for the ith input are (wi1, wi2, …, wim). the corresponding regions were set to [wi1 − 2vi1, wi1 + 2vi1], [wi2 − 2vi2, wi2 + 2vi2], …, [wim − 2vim, wim + 2vim], where m is the last node of the map. we developed membership functions and fuzzy rules for each hrv parameter, including heart rate and motion activity. in order to reduce the number of fuzzy rules and to improve the system reliability, narrowly separated regions were combined into a single region. let the positions of the four corners of region j be llj, lhj, rhj and rlj (for a triangular membership function, lhj = rhj). two neighboring regions j−1 and j were merged if they satisfied the following equation (3):
(lhj + rhj)/2 − (lhj−1 + rhj−1)/2 ≤ thr (3)
where thr is a pre-specified threshold (set to 0.1 in our experiments). this process continued until all regions were well separated in terms of the threshold. accordingly, some fuzzy regions had trapezoidal shapes instead of triangular ones, as shown in figure 5. after that, we generated fuzzy rules as a set of associations of the form “if antecedent conditions hold, then consequent conditions hold”.
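the som training procedure of steps 1-4 above can be sketched as follows; for brevity this uses a 1-d chain of nodes rather than the two-dimensional map of the actual system:

```python
import numpy as np

def train_som(X, m=10, epochs=60, alpha0=0.5, radius0=3.0, seed=0):
    """minimal som: winner j* = argmin_j ||w_j - x|| (eq. (1)); the winner
    and its neighbours move towards x (eq. (2)); the learning rate and the
    neighbourhood radius both decay over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((m, X.shape[1])) * 0.1            # small random initial weights
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        alpha = alpha0 * frac                        # decaying learning rate alpha(t)
        radius = max(radius0 * frac, 0.5)            # shrinking neighbourhood n_j*(t)
        for x in X:
            d = np.linalg.norm(W - x, axis=1)        # eq. (1): distances d_j
            j_star = int(np.argmin(d))               # best matching unit
            neigh = np.abs(np.arange(m) - j_star) <= radius
            W[neigh] += alpha * (x - W[neigh])       # eq. (2): weight update
    return W
```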
each feature was normalized to the range [0.0, 1.0] and each region of the fuzzy membership function was labeled as r1, r2, …, rn. an input was assigned the label of the region where the maximum membership value was obtained. in particular, we adopted the method proposed by wang et al. [56], where each training sample produces a fuzzy rule. an example of a generated rule is: if feature1 is r1 and feature2 is rn and feature3 is r2 and feature4 is r3 and feature5 is r6 and feature6 is r8 … and featurem is r3, then it is a medium stress level. finally, the number of fuzzy rules was of the same order as the number of training samples. the problem is that a large number of training patterns may lead to repeated or conflicting rules. to deal with this problem, we recorded the number of rules repeated during the learning process; those rules supported by a large number of examples were saved. a centroid defuzzification formula was used to determine the output for each input pattern:
z = (Σi=1..k dip oi) / (Σi=1..k dip) (4)
where z is the output, k is the number of rules, oi is the class generated by rule i and dip measures how well the input pattern p fits the ith rule. dip is given by the product of the degrees of the pattern in the regions which the ith rule occupies. the output is within [0, 5] for the numeral recognition of the stress level (0 = unknown, 1 = low stress, 2 = mild stress, 3 = elevated stress, 4 = high stress, 5 = severe stress). the output z was rounded down to the nearest integer value. fuzzy rules do not necessarily occupy all the fuzzy regions in the input space: there could be some regions where no related rule exists. this is the case when the denominator in equation (4) is zero; we label the corresponding input stress level as unknown. after training, the model was designed to be able to discriminate among 6 different classes during the test phase with the trained fuzzy classifier.
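the centroid defuzzification of equation (4), including the "unknown" case and the rounding down of the output, can be sketched as follows (the rule layout and names are illustrative):

```python
import numpy as np

def triangular(x, left, center, right):
    """membership degree of x in a triangular fuzzy region."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def defuzzify(sample, rules):
    """centroid defuzzification, eq. (4): each rule is (regions, output_class),
    with regions[i] = (left, center, right) for the ith feature; the rule
    strength is the product of the membership degrees of the pattern.
    returns 0 ('unknown') when no rule fires, and rounds the output down."""
    num = den = 0.0
    for regions, out_class in rules:
        d = 1.0
        for x, (l, c, r) in zip(sample, regions):
            d *= triangular(x, l, c, r)
        num += d * out_class
        den += d
    if den == 0.0:
        return 0                       # unknown stress level
    return int(np.floor(num / den))    # nearest smaller integer

# two illustrative one-feature rules: a 'low stress' and a 'high stress' region
rules = [([(0.0, 0.2, 0.4)], 1), ([(0.3, 0.6, 0.9)], 4)]
```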
for each user, the dss is trained on the basis of the evaluation of the stress level supplied by the surveys, such data being collected by the mobile application and transmitted to the database. once the dss training is completed, the dss acts as an expert system, supplying the stress level information for each patient as new sensor data become available within the database (figure 6).

figure 4. generation of the fuzzy membership functions for the ith input. the number of triangular functions is equal to the number of som nodes.

figure 5. trapezoidal function obtained for neighboring regions.

4.3 mobile visualization module
after data collection, the mobile application can poll the stress level report produced by the analysis-decision module (see section 4.2) from the remote database at definite time intervals, or at the user’s request. the stresstracker visualization component of the mobile application provides the user with a graphic visualization of the measured stress events. this information is visualized by the stresstracker (figure 7, right), which shows the number of detected stressful events over the course of the last day, week, or month respectively.

5. cross validation
in order to check the stress detection capability of the personalized model, the performance of the classification task was evaluated using a leave-one-out cross-validation (loocv) process, where each fold consists of one left-out session of data acquisition. this method is an iterative process in which one session at a time is recruited into the dataset for validation. the som classifier combined with fuzzy rules was trained using the remaining data and validated on the single, left-out validation point. this ensures that the validation is unbiased, because the classifier does not see the validation input sample during its training. one by one, each available session of each subject was recruited for validation.
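the leave-one-session-out procedure can be sketched as follows; train_fn and predict_fn are placeholders for the som/fuzzy training and inference described in section 4:

```python
import numpy as np

def leave_one_session_out(sessions, train_fn, predict_fn):
    """leave-one-session-out cross validation: each acquisition session
    (features, labels) is held out once, a classifier is trained on the
    remaining sessions, and a stress / no-stress confusion matrix is
    accumulated over the held-out predictions."""
    cm = np.zeros((2, 2), dtype=int)       # rows: true class, cols: predicted
    for i, (x_test, y_test) in enumerate(sessions):
        rest = [s for j, s in enumerate(sessions) if j != i]
        X = np.vstack([x for x, _ in rest])
        y = np.concatenate([lab for _, lab in rest])
        model = train_fn(X, y)             # the held-out session is never seen
        y_hat = predict_fn(model, x_test)
        for t, p in zip(y_test, y_hat):
            cm[t, p] += 1
    return cm
```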
the performance of the system was assessed by using confusion matrices, in which the generic element (i, j) indicates how many times (in mean percentage ± sd) a pattern belonging to class i was classified as belonging to class j.

5.1 cross validation results
the system is able to recognize the presence of stress in the selected population. in particular, the results obtained with three subjects are reported in table 5. the mean ± sd percentages of the confusion matrix obtained with the first subject are reported in table 5a. the model correctly identifies the presence of stress with a percentage of correct classifications of 89.0 % and the absence of stress with a percentage of correct classifications of 86.6 %. the mean ± sd percentages of the confusion matrix obtained with the second subject are reported in table 5b. the model correctly identifies the presence of stress with a percentage of correct classifications of 66.6 % and the absence of stress with a percentage of correct classifications of 70.0 %. the mean ± sd percentages of the confusion matrix obtained with the third subject are reported in table 5c. the model correctly identifies the presence of stress with a percentage of correct classifications of 95.7 % and the absence of stress with a percentage of correct classifications of 73.0 %. these results demonstrate the high discriminatory power of the system.

6. conclusion
we described the design, key functional features and preliminary validation of a personalized system for monitoring stress in naturalistic environments. the original contribution of this work concerns the development of a new methodology, which allows the use of subjective evaluations of potentially stressful situations in the calibration and training of the classification system. in particular, incorporating self-reported psychological data in the system knowledge base allows for a more comprehensive and personalized definition of the stress response as measured by hrv indices.
An objective of future research is to validate the accuracy of the personalized stress detection model against other physiological markers, e.g. salivary cortisol collected during daily-life activities.

Acknowledgement
This work was supported by the European funded project "INTERSTRESS - Interreality in the management and treatment of stress-related disorders", grant number FP7-247685. The co-authors would like to thank Giovanni Pioggia and Andrea Gaggioli, who contributed equally to the manuscript.

Figure 6. Architecture of the automatic stress classification module.

Table 5. Confusion matrices (values in mean % ± SD).
a. Confusion matrix obtained with the first subject:
             Stress        No stress
Stress       89 ± 6.2      11 ± 2.1
No stress    13.3 ± 3.6    86.6 ± 5

b. Confusion matrix obtained with the second subject:
             Stress        No stress
Stress       66.6 ± 11     33.3 ± 6.7
No stress    30 ± 3.1      70 ± 5.9

c. Confusion matrix obtained with the third subject:
             Stress        No stress
Stress       95 ± 3.3      5 ± 4.6
No stress    26 ± 3.1      73 ± 8.2

Figure 7. On the left, the current stress level of the user; on the right, the stressful events detected over the last week.

References
[1] y. yang, daily stressor, daily resilience, and daily somatization: the role of trait aggression, personality and individual differences, vol. 165, 2020, 110141. doi: 10.1016/j.paid.2020.110141 [2] s. scholten, k. lavallee, j. velten, x. c. zhang, j. margraf, the brief daily stressors screening tool: an introduction and evaluation, stress and health, vol. 36, no. 5, 2020, pp. 686-692. doi: 10.1002/smi.2965 [3] l. jandorf, e. deblinger, j. m. neale, a. a. stone, daily versus major life events as predictors of symptom frequency: a replication study, journal of general psychology, vol. 113, no. 3, 1986, pp. 205-218. doi: 10.1080/00221309.1986.9711031 [4] p. j. brantley, g. n.
jones, daily stress and stress-related disorders, annals of behavioral medicine, vol. 15, no. 1, 1993, pp. 17-25. doi: 10.1093/abm/15.1.17 [5] t. d. tran, j. kim, n.-h. ho, h.-j. yang, s. pant, s.-h. kim, g. s. lee, stress analysis with dimensions of valence and arousal in the wild, applied sciences, vol. 11, no. 11, 2021, 5194. doi: 10.3390/app11115194 [6] m. kusserow, o. amft, g. tröster, monitoring stress-arousal in the wild - measuring the intangible, ieee pervasive computing, vol. 12, no. 2, 2013, pp. 28-37. doi: 10.1109/mprv.2012.56 [7] j. a. healey, r. w. picard, detecting stress during real-world driving tasks using physiological sensors, ieee transactions on intelligent transportation systems, vol. 6, no. 2, 2005, pp. 156-166. doi: 10.1109/tits.2005.848368 [8] g. tartarisco, g. baldus, d. corda, r. raso, a. arnao, m. ferro, a. gaggioli, g. pioggi, personal health system architecture for stress monitoring and support to clinical decisions, computer communications, vol. 35, no. 11, 2012, pp. 1296-1305. doi: 10.1016/j.comcom.2011.11.015 [9] n. carbonaro, p. cipresso, a. tognetti, g. anania, d. de rossi, a. gaggioli, g. riva, a mobile biosensor to detect cardiorespiratory activity for stress tracking, 3rd international workshop on pervasive computing paradigms for mental health, 2013. doi: 10.4108/icst.pervasivehealth.2013.252357 [10] p. cipresso, a. gaggioli, s. serino, s. raspelli, c. vigna, f. pallavicini, g. riva, inter-reality in the evaluation and treatment of psychological stress disorders: the interstress project, annual review of cybertherapy and telemedicine, vol. 181, 2012, pp. 8-11. doi: 10.3233/978-1-61499-121-2-8 [11] n. carbonaro, p. cipresso, a. tognetti, g. anania, d. de rossi, f. pallavicini, a. gaggioli, g. riva, psychometric assessment of cardio-respiratory activity using a mobile platform, in wearable technologies: concepts, methodologies, tools, and applications, 2018. doi: 10.4018/978-1-5225-5484-4.ch037 [12] a. gaggioli, f. pallavicini, l.
morganti, s. serino, c. scaratti, m. briguglio, g. crifaci, n. vetrano, a. giulintano, g. bernava, g. tartarisco, g. pioggia, s. raspelli, p. cipresso, c. vigna, a. grassi, m. baruffi, b. wiederhold, g. riva, experiential virtual scenarios with real-time monitoring (interreality) for the management of psychological stress: a block randomized controlled trial, journal of medical internet research, vol. 16, no. 7, 2014, e167. doi: 10.2196/jmir.3235 [13] a. gaggioli, p. cipresso, s. serino, d. m. campanaro, f. pallavicini, b. k. wiederhold, g. riva, positive technology: a free mobile platform for the self-management of psychological stress, studies in health technology and informatics, vol. 199, 2014, pp. 25-29. doi: 10.3233/978-1-61499-401-5-25 [14] d. giakoumis, a. drosou, p. cipresso, d. tzovaras, g. hassapis, a. gaggioli, g. riva, real-time monitoring of behavioural parameters related to psychological stress, annual review of cyber therapy and telemedicine, vol. 10, 2012, pp. 287-291. doi: 10.3233/978-1-61499-121-2-287 [15] l. salahuddin, j. cho, m. g. jeong, d. kim, ultra short term analysis of heart rate variability for monitoring mental stress in mobile settings, 2007, pp. 4656-4659. doi: 10.1109/iembs.2007.4353378 [16] p. melillo, m. bracale, l. pecchia, nonlinear heart rate variability features for real-life stress detection. case study: students under stress due to university examination, biomedical engineering online, vol. 10, 2011, 96. doi: 10.1186/1475-925x-10-96 [17] d. kimhy, p. delespaul, h. ahn, s. cai, m. shikhman, j. a. lieberman, d. malaspina, r. p. sloan, concurrent measurement of ‘real-world’ stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology, schizophrenia bulletin, vol. 36, no. 6, 2010, pp. 1131–1139. doi: 10.1093/schbul/sbp028 [18] r. p. sloan, p. a. shapiro, e. bagiella, s. m. boni, m. paik, j. t. bigger jr., r. c. steinman, j. m. 
gorman, effect of mental stress throughout the day on cardiac autonomic control, biological psychology, vol. 37, no. 2, 1994, pp. 89-99. doi: 10.1016/0301-0511(94)90024-8 [19] g. g. berntson, j. t. cacioppo, heart rate variability: stress and psychiatric conditions, in dynamic electrocardiography, 2007. doi: 10.1002/9780470987483.ch7 [20] p. cipresso, d. colombo, g. riva, computational psychometrics using psychophysiological measures for the assessment of acute mental stress, sensors (switzerland), vol. 19, no. 4, 2019. doi: 10.3390/s19040781 [21] s. cohen, r. c. kessler, l. u. gordon, strategies for measuring stress in studies of psychiatric and physical disorders, measuring stress: a guide for health and social scientists, oxford university press, 1995, pp. 3-26, isbn: 978-0195121209. [22] m. csikszentmihalyi, r. larson, validity and reliability of the experience-sampling method, in flow and the foundations of positive psychology: the collected works of mihaly csikszentmihalyi, 2014, pp. 35-54. doi: 10.1007/978-94-017-9088-8_3 [23] m. memar, a. mokaribolhassan, stress level classification using statistical analysis of skin conductance signal while driving, sn applied sciences, vol. 3, no. 1, 2021, 64. doi: 10.1007/s42452-020-04134-7 [24] k. wang, p. guo, an ensemble classification model with unsupervised representation learning for driving stress recognition using physiological signals, ieee transactions on intelligent transportation systems, vol. 22, no. 6, 2021, pp. 3303-3315. doi: 10.1109/tits.2020.2980555 [25] n. van berkel, d. ferreira, v. kostakos, the experience sampling method on mobile devices, acm computing surveys, vol. 50, no. 6, 2017, pp. 1-40. doi: 10.1145/3123988 [26] d. kahneman, a. b. krueger, d. a. schkade, n. schwarz, a. a. stone, a survey method for characterizing daily life experience: the day reconstruction method, science, vol. 306, no. 5702, 2004, pp. 1776-1780. doi: 10.1126/science.1103572 [27] a. a. stone, j. e. schwartz, d. schkade, n.
schwarz, a. krueger, d. kahneman, a population approach to the study of emotion: diurnal rhythms of a working day examined with the day reconstruction method, emotion, vol. 6, no. 1, 2006, pp. 139-149. doi: 10.1037/1528-3542.6.1.139 [28] t. s. conner, l. f. barrett, trends in ambulatory self-report: the role of momentary experience in psychosomatic medicine, psychosomatic medicine, vol. 74, no. 4, 2012, pp. 327-337. doi: 10.1097/psy.0b013e3182546f18 [29] c. kirschbaum, d. h. hellhammer, salivary cortisol in psychobiological research: an overview, neuropsychobiology, vol. 22, no. 3, 1989, pp. 150-169. doi: 10.1159/000118611 [30] g. e. miller, e. chen, e. s. zhou, if it goes up, must it come down?
chronic stress and the hypothalamic-pituitary-adrenocortical axis in humans, psychological bulletin, vol. 133, no. 1, 2007, pp. 25-45. doi: 10.1037/0033-2909.133.1.25 [31] d. j. hruschka, b. a. kohrt, c. m. worthman, estimating between- and within-individual variation in cortisol levels using multilevel models, psychoneuroendocrinology, vol. 30, no. 7, 2005, pp. 698-714. doi: 10.1016/j.psyneuen.2005.03.002 [32] g. h. ice, a. katz-stein, j. himes, r. l. kane, diurnal cycles of salivary cortisol in older adults, psychoneuroendocrinology, vol. 29, no. 3, 2004, pp. 355-370. doi: 10.1016/s0306-4530(03)00034-9 [33] h. mansikka, p. simola, k. virtanen, d. harris, l. oksama, perceived mental stress and reactions in heart rate variability - a pilot study among employees of an electronics company, international journal of occupational safety and ergonomics, vol. 14, no. 3, 2008, pp. 1344-1352. doi: 10.1080/10803548.2008.11076767 [34] j. f. thayer, f. åhs, m. fredrikson, j. j. sollers, t. d. wager, a meta-analysis of heart rate variability and neuroimaging studies: implications for heart rate variability as a marker of stress and health, neuroscience and biobehavioral reviews, vol. 36, no. 2, 2012, pp. 747-756. doi: 10.1016/j.neubiorev.2011.11.009 [35] d. kim, y. seo, j. cho, c. h. cho, detection of subjects with higher self-reporting stress scores using heart rate variability patterns during the day, 2008, pp. 682-685. doi: 10.1109/iembs.2008.4649244 [36] a. muaremi, b. arnrich, g. tröster, towards measuring stress with smartphones and wearable devices during workday and sleep, bionanoscience, vol. 3, no. 2, 2013, pp. 172-183. doi: 10.1007/s12668-013-0089-2 [37] m. e. morris, q. kathawala, t. k. leen, e. e. gorenstein, f. guilak, w. deleeuw, m. labhard, mobile therapy: case study evaluations of a cell phone application for emotional self-awareness, journal of medical internet research, vol. 12, no. 2, 2010, e10. doi: 10.2196/jmir.1371 [38] b. cinaz, b. arnrich, r. la marca, and g.
tröster, monitoring of mental workload levels during an everyday life office-work scenario, personal and ubiquitous computing, vol. 17, no. 2, 2013, pp. 229-239. doi: 10.1007/s00779-011-0466-1 [39] a. gaggioli, g. pioggia, g. tartarisco, g. baldus, d. corda, p. cipresso, g. riva, a mobile data collection platform for mental health research, personal and ubiquitous computing, vol. 17, no. 2, 2013, pp. 241-251. doi: 10.1007/s00779-011-0465-2 [40] n. jacobs, i. myin-germeys, c. derom, p. delespaul, j. van os, n. a. nicolson, a momentary assessment study of the relationship between affective and adrenocortical stress responses in daily life, biological psychology, vol. 74, no. 1, 2007, pp. 60-66. doi: 10.1016/j.biopsycho.2006.07.002 [41] r. larson, p. a. e. g. delespaul, analyzing experience sampling data: a guide book for the perplexed, in the experience of psychopathology, 2010, pp. 58-78. doi: 10.1017/cbo9780511663246.007 [42] m. malik, j. t. bigger, a. j. camm, r. e. kleiger, a. malliani, a. j. moss, p. j. schwartz, heart rate variability. standards of measurement, physiological interpretation, and clinical use, european heart journal, vol. 17, no. 3. 1996, pp. 354–381. doi: 10.1093/oxfordjournals.eurheartj.a014868 [43] m. malik, heart rate variability: standards of measurement, physiological interpretation, and clinical use, circulation, vol. 93, no. 5, 1996, pp. 1043-1065. doi: 10.1161/01.cir.93.5.1043 [44] v. magagnin, m. mauri, p. cipresso, l. mainardi, e. brown, s. cerutti, m. villamira, r. barbieri, heart rate variability and respiratory sinus arrhythmia assessment of affective states by bivariate autoregressive spectral analysis, in computing in cardiology, 2010, vol. 37, pp. 145-148. [45] m. mauri, v. magagnin, p. cipresso, l. mainardi, e. n. brown, s. cerutti, m. villamira, r. barbieri, psychophysiological signals associated with affective states, 2010, pp. 3563-3566. doi: 10.1109/iembs.2010.5627465 [46] r. jane, p. laguna, n. v. thakor, p. 
caminal, adaptive baseline wander removal in the ecg: comparative analysis with cubic spline technique, 1992, pp. 143-146. doi: 10.1109/cic.1992.269426 [47] j. pan, w. j. tompkins, a real-time qrs detection algorithm, ieee transactions on biomedical engineering, vol. bme-32, no. 3, 1985, pp. 230-236. doi: 10.1109/tbme.1985.325532 [48] j. h. houtveen, p. f. c. groot, e. j. c. de geus, effects of variation in posture and respiration on rsa and pre-ejection period, psychophysiology, vol. 42, no. 6, 2005, pp. 713-719. doi: 10.1111/j.1469-8986.2005.00363.x [49] f. h. wilhelm, p. grossman, m. i. müller, bridging the gap between the laboratory and the real world - integrative ambulatory psychophysiology, handbook of research methods for studying daily life, 2012, the guilford press. [50] c. v. c. bouten, k. t. m. koekkoek, m. verduin, r. kodde, j. d. janssen, a triaxial accelerometer and portable data processing unit for the assessment of daily physical activity, ieee transactions on biomedical engineering, vol. 44, no. 3, 1997, pp. 136-147. doi: 10.1109/10.554760 [51] e. pardo-igúzquiza and f. j. rodríguez-tovar, maximum entropy spectral analysis, 2021, springer. doi: 10.1007/978-3-030-26050-7_197-1 [52] h. akaike, fitting autoregressive models for prediction, annals of the institute of statistical mathematics, vol. 21, no. 1, 1969, pp. 243-247. doi: 10.1007/bf02532251 [53] m. brennan, m. palaniswami, p. kamen, do existing measures of poincaré plot geometry reflect nonlinear features of heart rate variability?, ieee transactions on biomedical engineering, vol. 48, no. 11, 2001, pp. 1342-1347. doi: 10.1109/10.959330 [54] s. r. sain, v. n. vapnik, the nature of statistical learning theory, technometrics, vol. 38, no. 4, 1996, 409. doi: 10.2307/1271324 [55] t. kohonen, the self-organizing map, neurocomputing, vol. 21, no. 1-3, 1998, pp. 1-6. doi: 10.1016/s0925-2312(98)00030-7 [56] l. x. wang, j. m.
mendel, generating fuzzy rules by learning from examples, ieee transactions on systems, man and cybernetics, vol. 22, no. 6, 1992, pp. 1414-1427. doi: 10.1109/21.199466 [57] g. tartarisco, n. carbonaro, a. tonacci, g. m. bernava, a. arnao, g. crifaci, p. cipresso, g. riva, a. gaggioli, d. de rossi, a. tognetti, g. pioggia, neuro-fuzzy physiological computing to assess stress levels in virtual reality therapy, interacting with computers, vol. 27, no. 5, 2015, pp. 521-533. doi: 10.1093/iwc/iwv010

Reduction of gravity effect on the results of low-frequency accelerometer calibration
Acta IMEKO | www.imeko.org | December 2020 | Volume 9 | Number 5 | 365
Acta IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, 365-368
G. P. Ripper1, C. D. Ferreira2, R. S. Dias2, G. B.
Micheli2
1 Division of Acoustics and Vibration Metrology DIAVI, INMETRO, Brazil, gpripper@inmetro.gov.br
2 Vibration Laboratory LAVIB, INMETRO, Brazil, rsdias@inmetro.gov.br

Abstract: This paper describes a study on the possible sources of systematic errors during the calibration of accelerometers at low frequencies. The study was carried out on a primary calibration system that uses an air-bearing vibration exciter APS Dynamics 129 and applies the sine-approximation method. The tests performed and the actions taken to reduce the effect on experimental results are presented.

Keywords: calibration; vibration; low frequency; accelerometer.

1. Introduction
The warp of the linear guide can cause a tilt with a variable angle of the moving table when it is linearly translated. This can produce a variable effect of local gravity on DC-responsive accelerometers, for instance servo-accelerometers. In the key comparison EURAMET.AUV.V-K3, a larger dispersion among the results of the participants was evidenced at the lowest frequencies [1]. For the key comparison CCAUV.V-K3 [2], INMETRO reported results down to 0.2 Hz. Later on, some efforts were made to extend the lower limit of the range to 0.1 Hz, but the systematic effect was high compared to the uncertainty desired for the service. This was confirmed during a measurement audit carried out during the peer review of LAVIB. On that occasion (June 2019), we were able to calibrate a servo-accelerometer from PTB and quantify the problem. A mathematical correction of the systematic error caused by gravity had been proposed by T. Bruns and S. Gazioch in 2016 [3]. We preferred to attack the problem following a different approach: instead of dealing with the effect, we decided to try to identify and reduce the cause. Therefore, we sought answers to the basic questions: Can we clearly identify the cause and consider it to be exclusively generated by the warp?
Can we minimize the cause of the problem so as to reduce or even eliminate the need for a mathematical correction? Some experiments were carried out to answer these questions, and the results obtained are presented in the following sections.

2. Description of the work
The study started with the establishment of a reference condition, which was simply the sensitivity result obtained for a servo-accelerometer Q-Flex QA-3000. The calibration of accelerometers is usually carried out using two mountings (0° and 180°); the final sensitivity is the mean of the results obtained at the two mountings. This basic condition was used as the parameter against which the results of our subsequent tests were evaluated.

2.1 Initial tests
The influence of gravity on the accelerometer was first tested using different orthogonal mounting positions (90° and 270°). Despite the servo-accelerometer Q-Flex being based on a cantilever-beam design, no significant difference was observed. The influence of the distance of the accelerometer from the centre of gravity of the moving table was also tested: a calibration was carried out with the accelerometer placed below the mounting plate of the moving table. No significant difference was observed. The influence of tilt on the measuring points used for calibration was evaluated by using different measuring distances between the laser measuring point and the centre axis of the accelerometer. Tests were performed after loosening the screws at one end of the guide bar, and later after loosening the screws at both ends of the guide bar. A change in the behavior of the sensitivity magnitude at low frequencies could be observed in these cases.

2.2 Correction proposed by PTB
The correction procedure proposed by PTB has also been tested by INMETRO.
Figure 1 presents corrected sensitivity results obtained according to the method proposed by Bruns and Gazioch using the same coefficient published by PTB in [3]; using the coefficient determined by INMETRO for its system; the original certificate results obtained in 2018 and 2019; and the results with 3 screws loosened at one side of the linear guide.

Figure 1: Experimental and corrected results according to the method proposed by Bruns and Gazioch.

Figure 1 demonstrates that the repeatability of the results from year to year is good, but the systematic error is too high at 0.1 Hz. The mathematical correction improves the final result, but this can depend on specific characteristics of the accelerometer under test. The simple test of just loosening the three screws at one side of the linear guide showed that the sensitivity response measured with the interferometric system could be improved by mechanical means. Therefore, it was decided to quantify the effect caused by this process and to take actions to improve the straightness of the guide.

2.3 Straightness measurements
The procedure applied by INMETRO to measure the straightness comprised the use of a Taylor Hobson autocollimator mounted on an independent seismic block, which is usually used to position a breadboard with the interferometer. The setup is shown in Figure 2.

Figure 2: Setup used to measure the straightness of the linear guide of shaker APS 129.

A plane mirror stand was positioned on top of the moving table, and the angle of inclination of this mirror was measured at different positions of the moving table. The zero position was taken as the static equilibrium point with no voltage input to the shaker; this is close to the mid-point of the linear translation guide.
A DC voltage source was used to provide a voltage of approximately 200 mV to the power amplifier. The gain knob of the amplifier was manually set to place the moving table at different x-positions. The measurement of the x-position was made with a laser distance meter placed on top of the table, pointed at a fixed reference stand on the same seismic block used to support the autocollimator. Initially, straightness measurements were made at the mid-point and close to the positive and negative limits of the motion used for accelerometer calibrations. The absolute angles measured were -14, 12.6 and -11.5 arcseconds. Normalizing these values relative to the mid-point angle gives a better view of the difference of the angle measured at different positions of the linear guide; this is shown in Figure 3.

Figure 3: Difference of the angle measured at different positions of the moving table.

New straightness measurements were made while changing the mechanical system as follows:
1. original measurement conditions;
2. measurements after release of the 3 screws that fix the linear guide to the base;
3. measurements after release of the 6 screws that fix the linear guide to the base;
4. measurements after release of the 6 screws that fix the linear guide to the base and release of the 4 screws that hold the base of the shaker to the seismic block;
5. measurements after re-fixture of the 4 screws that hold the base of the shaker to the seismic block;
6. measurements after re-fixture of the 6 screws that fix the linear guide to the base.

After applying step 6 above, we considered that any stress due to the mounting and assembly of the shaker on the seismic block had been released. A more refined straightness evaluation was then made using a positioning resolution of approximately 10 mm. The results obtained are presented in Figure 4.
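The normalization used for Figure 3 amounts to subtracting the mid-point reading from each measured angle. A minimal sketch in Python; the position/angle pairs below are illustrative placeholders, not the measured data:

```python
# Normalise autocollimator tilt readings relative to the reference
# (mid-point) angle, as done for Figure 3.  The values below are
# illustrative placeholders, not the measured data.
positions_mm = [-150, 0, 150]           # x-position of the moving table
angles_arcsec = [-11.5, -14.0, -12.6]   # absolute tilt at each position

ref = angles_arcsec[positions_mm.index(0)]   # angle at the mid-point
relative = [round(a - ref, 1) for a in angles_arcsec]

# After reassembly (step 6 above), the same check should yield values
# within +/-0.2 arcsec of the reference point.
print(relative)  # [2.5, 0.0, 1.4]
```

The same offset subtraction, applied to the finer 10 mm grid of measurements, yields the curves of Figures 3 and 4.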
This graph shows that the straightness of the system was greatly improved: all measured values were within ±0.2 arcsec relative to the angle at the reference point.

Figure 4: Difference of the angle measured at different positions of the moving table after reassembly of the shaker and linear guide.

2.4 Dynamic calibration of the servo-accelerometer
Primary calibrations of a high-quality DC-response accelerometer were then carried out to determine the influence of the improvement obtained in the straightness of the linear guide. After the reassembly of the system, with all its screws retightened, a servo-accelerometer Allied Signal QA-3000 was calibrated in the frequency range 0.1 Hz to 80 Hz. The effect to be considered here is the deviation of the sensitivity magnitude response below 0.4 Hz; the changes are caused by the combination of longer displacements and the changing gravity effect due to the varying angle of the accelerometer as it moves along the linear guide. The results obtained in the range 0.1 Hz to 10 Hz for the accelerometer QA-3000 show that the frequency response is now almost flat. Figure 5 presents the magnitude and phase of the sensitivity.

Figure 5: Sensitivity results measured after reassembly of the shaker APS 129: (a) sensitivity magnitude; (b) phase shift.

The relative differences of the sensitivity magnitude results are now all within 0.1 %, taking as reference the value measured at 1 Hz.

Figure 6: Relative difference of magnitude results relative to the sensitivity value at 1 Hz, obtained after reassembly of the shaker APS 129.

2.5 Static calibration of the servo-accelerometer
As a final check, the calibration of the accelerometer sensitivity at 0 Hz was carried out by statically rotating the accelerometer in the gravity field.
The main sensitivity axis of the accelerometer was positioned at different angles relative to the local gravity field, and the electrical output was measured. The range of angles from 0° to 360° was covered in 20° steps. A sine fit was then applied to the measured data to obtain the sine amplitude and determine the accelerometer's static sensitivity. This was carried out using both a voltmeter Agilent 3458A and the DAQ board NI PCI-6115 used in the actual low-frequency accelerometer calibrations applying the sine-approximation method. Due to the difference in input impedance between these two measuring instruments (voltmeter: 10 GΩ; DAQ: 1 MΩ), the voltmeter measurement results were corrected by approximately 0.5 % to reflect the same impedance available at the DAQ input channel. Figure 7 shows that the results obtained statically are in close conformity with the ones obtained dynamically, all being within ±0.1 %. This demonstrates the improvement obtained in the low-frequency calibration system.

Figure 7: Relative deviation of magnitude results from the sensitivity value at 0.1 Hz: green squares, DC calibration with DAQ; red squares, DC calibration with HP 3458A (corrected); blue diamonds, dynamic calibration results obtained after reassembly of the shaker APS 129.

3. Summary
Small deviations from a purely straight linear motion can cause systematic errors in the calibration of DC-responsive accelerometers at low frequencies due to the effect of local gravity. This problem was studied at INMETRO, where differences as large as 0.85 % were observed in the primary calibration of accelerometers at 0.1 Hz. The cause identified was a warp of the linear guide, which caused a variable tilting angle of the moving table while it was translated.
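The static calibration of section 2.5 reduces to a least-squares sine fit of the accelerometer output versus tilt angle, the fitted amplitude giving the output per 1 g. A minimal sketch, using synthetic data (the amplitude and offset values are illustrative, not the measured ones):

```python
import numpy as np

# Static calibration by rotation in the gravity field (section 2.5):
# the output follows u(theta) = A*sin(theta + phi) + C, and the fitted
# amplitude A gives the static sensitivity (output per 1 g).
theta = np.deg2rad(np.arange(0.0, 360.0, 20.0))   # 20-degree steps

# Synthetic measurements for illustration: A = 1.23 V/g, C = 0.05 V.
u = 1.23 * np.sin(theta + 0.1) + 0.05

# Linear least squares on the equivalent model
# u = a*sin(theta) + b*cos(theta) + c, so no nonlinear solver is needed.
M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
a, b, c = np.linalg.lstsq(M, u, rcond=None)[0]
amplitude = np.hypot(a, b)                        # recovers A
print(round(amplitude, 4))  # 1.23
```

Writing the sine in its in-phase/quadrature form keeps the fit linear, which is the usual trick behind sine-approximation processing.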
A procedure was applied to release any possible mechanical stresses that could have occurred during the mounting of the shaker on the inertial block and to improve the straightness of the linear guide. This action was successful and allowed us to eliminate the main source of error in calibrations below 0.4 Hz. A significant improvement of the calibration results was achieved, eliminating the need for further data correction. The estimated expanded uncertainty of the sensitivity is 0.3 % for magnitude and 0.3° for phase shift in the frequency range from 0.1 Hz to 10 Hz.

4. References
[1] Bartoli et al., "Final report of EURAMET.AUV.V-K3", Metrologia 52, 09003, 2015.
[2] Sun Qiao et al., "Final report of CCAUV.V-K3: key comparison in the field of acceleration on the complex charge sensitivity", Metrologia 54, 09001, 2017.
[3] Th. Bruns and S. Gazioch, "Correction of shaker flatness deviations in very low frequency primary accelerometer calibration", Metrologia 53, 986, 2016.

Introduction to the special section of the 2019 APMF, the Asia Pacific Measurement Forum on Mechanical Quantities
Acta IMEKO, ISSN: 2221-870X, March 2021, Volume 10, Number 1, 5
Koji Ogushi1, Momoko Kojima1
1 National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 3, 1-1-1 Umezono, Tsukuba, 305-8563 Ibaraki, Japan

Section: Editorial
Citation: Koji Ogushi, Momoko Kojima, Introduction to the special section of the 2019 APMF, the Asia Pacific Measurement Forum on Mechanical Quantities, Acta IMEKO, vol. 10, no.
1, article 2, march 2021, identifier: imeko-acta-10 (2021)-01-02
editor: francesco lamonaca, university of calabria, italy
received march 17, 2021; in final form march 17, 2021; published march 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding authors: koji ogushi, e-mail: kji.ogushi@aist.go.jp; momoko kojima, e-mail: m.kojima@aist.go.jp
dear readers, measurement technology for mass, force and torque constitutes an integral part of the intellectual infrastructure for a diverse range of human activities, such as quality and safety assurance of industrial products, fair trade, energy saving, and environmental protection. the asia pacific symposium on measurement of mass, force and torque (apmf), since its initiation in 1992, has been offering participants the opportunity to exchange the latest information on r&d in these fields. it has been growing steadily as a not-to-miss event for metrologists, researchers, and engineers, especially those actively working in the asia-pacific region. in 2017, the name was changed from "asia pacific symposium on measurement of mass, force and torque (apmf)" to "asia pacific measurement forum on mechanical quantities (apmf)". in addition, the scope of apmf activities has been extended to other mechanical quantities such as density, hardness, pressure, and vacuum. apmf 2019 was held in niigata, japan, from 17 to 21 november 2019. it was sponsored by the society of instrument and control engineers (sice), co-sponsored by the international measurement confederation (imeko) and niigata university, and organized by the national metrology institute of japan (nmij), a division of the national institute of advanced industrial science and technology (aist).
more than 140 participants from 14 countries and economies contributed to the successful forum, presenting 71 scientific papers. in this special issue, you can find four articles selected by the international program committee of the apmf that were considered worthy of publication in the acta imeko journal after peer review. the papers included in the special section show recent progress in research on measurements in the fields of mass, pressure, and flow in the asia-pacific region. we briefly introduce these papers as follows. in the field of mass metrology, one paper was selected. yuhsin wu and her colleagues presented the combined x-ray fluorescence (xrf) / x-ray photoelectron spectroscopy (xps) surface analysis system for quantitative surface layer analysis of si spheres, developed in order to realize the new kilogram definition by the x-ray crystal density (xrcd) method at cms/itri. ptb cooperated with cms by transferring the information and technology of the xrcd method. in the field of pressure metrology, two papers are included in the issue. ahmed s. hashad and the ptb team reported the evaluation of the ptb force-balanced piston gauge (fpg), which is a non-rotating piston gauge. they compared the fpg with three different ptb pressure standards ranging from 3 pa to 15 kpa and confirmed the theoretically obtained effective area. the other paper, by hideaki yamashita, reports the improvement of yokogawa's silicon resonant pressure transducer. the characteristics of the pressure sensors were refined based on the calibration results from nmij and yokogawa, with the aim of using the transducer as a transfer standard in future key comparisons. in the flow field, masanao kaneko numerically investigated the effect of a single groove on the flow behaviour and loss generation in a linear compressor cascade. the analysis was performed by changing the tip clearance, which will be beneficial for the improvement of compressor aerodynamic performance in the future.
we are deeply grateful to all contributors, editors, authors, and reviewers who made this issue possible, and we hope you will enjoy reading this special section. koji ogushi, momoko kojima, guest editors

introductory notes for the acta imeko special issue on 'innovative signal processing and communication techniques for measurement and sensing systems'
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1
md. zia ur rahman1
1 department of electronics and communication engineering, koneru lakshmaiah education foundation, k l university, green fields, guntur-522303, a.p., india
section: editorial
citation: md. zia ur rahman, introductory notes for the acta imeko special issue on 'innovative signal processing and communication techniques for measurement and sensing systems', acta imeko, vol. 11, no. 1, article 3, march 2022, identifier: imeko-acta-11 (2022)-01-03
received march 30, 2022; in final form march 30, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: md. zia ur rahman, e-mail: mdzr55@gmail.com
dear readers, the rapid developments in signal processing and communication techniques make measurement and sensing systems more accurate and facilitate high-resolution sensing data output. this broadens the market prospects and demands; as a result, measurement and sensing systems have become a key element in complex engineering systems.
innovations in signal processing and communication methods make sensing systems fast, intelligent, and knowledge-based. these innovations give measurement and sensing systems higher accuracy and faster detection ability while lowering the implementation cost, and they have led to the development of dynamic, smart sensing systems. this special issue focuses on innovative signal processing and communication techniques for measurement and sensing systems, identifying new perspectives and highlighting potential research issues and challenges. specifically, it demonstrates how emerging technologies could be used in future smart sensing systems. the topics of interest in the context of measurement and sensing systems include antenna measurements, artificial intelligence, beamforming techniques, body area networks, embedded processors, image sensors and processing, internet of things, knowledge-based systems, machine learning algorithms, medical signal analysis, sensor data processing, vlsi architectures, and many more thrust areas. this special issue consists of 15 full-length papers in the context of measurement and sensing found suitable for publication in acta imeko. in the paper 'mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks', the issue of spectrum scarcity and low spectrum utilization in wireless communication systems is addressed. malicious users falsify spectrum sensing data, resulting in inaccurate global decisions about spectrum availability. this work proposes a multilayer perception approach that measures statistical aspects of incoming signals to identify false data in cooperative spectrum sensing.
the paper titled 'evaluation on effect of alkaline activator on compaction properties of red mud stabilized by ground granulated blast slag' reports comprehensive compaction tests performed on virgin red mud (rm), ground granulated blast slag (ggbs) stabilized rm, and alkaline-activated ggbs stabilized rm samples. the effect of an alkaline activator on the compaction properties of ggbs stabilized rm is investigated in this research. when standard and modified proctor compaction tests were conducted for various combinations of rm and ggbs, the compaction curves indicated a substantial difference in maximum dry density and optimal moisture content with the modification of the ggbs percentage and varied ratios of naoh to na2sio3. the paper 'multipath routing protocol based on backward routing with optimal fuzzy logic in medical telemetry systems for physiological data measurements', by suryadevara kranthi et al., proposes a novel, safe multi-path approach for trustworthy data transfer that depends on service quality. multi-path routing is also supported by the ad hoc on-demand backward routing protocol with optimal fuzzy logic (ofl). by generating rules in ofl and then choosing an optimal rule, the hybridization of the bat optimization strategy delivers the best route. the delay, the packet delivery ratio, and other criteria are used to assess the efficiency of the proposed technique. the fourth paper, 'a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data', authored by vijay souri maddila et al., notes that the circumstances and factors governing volcanic explosive ejection are unclear and that there is currently no efficient approach to predict when a volcanic explosive ejection will terminate.
the authors create a decisiveness measure d to analyze the uniformity of the groups supplied by machine learning models, using supervised approaches such as support vector machine (svm), random forest (rf), logistic regression (lr), and gaussian process classifiers (gpc). the measured end-date derived by seismic information classification for both volcanic systems is two to four months later than the end-dates determined by the earliest instance of visual eruption. the paper 'fire sm: new dataset for anomaly detection of fire in video surveillance' seeks to aid the growth of this particular research area by contributing the fire sm dataset, a large and diverse new dataset. furthermore, a precise estimation in early fire detection utilising the average precision indicator can yield extra information. in this paper, two existing common methodologies were compared to different anomaly detection methods that give an efficient solution for discovering fire incidents. in the paper 'extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing', routing strategies for mobile ad hoc networks (manets) with connection breakdowns induced by frequent node movement, measurement, and a dynamic network topology are discussed. many ideas have been presented by researchers to shorten the rerouting time. buffer zone routing (bzr) is one such solution, which splits a node's transmission region into a safe zone adjacent to the node and a hazardous zone towards the end of the broadcast range. the energy consumption of the nodes is reduced when routing decisions are made promptly. transfer time is lowered and routing efficiency is improved in the wider bzr safe region. the proposed algorithm corrects flaws in the current algorithm and fills in gaps, reducing the time necessary for manet rerouting.
in the paper 'multi-input multi-output antenna measurements with super wide bandwidth for wireless applications using isolated t stub and defected ground structure', pradeep vinaik kodavanti et al. offer the concept of defected ground structures for improving the antenna's radiation properties, particularly in a multi-input multi-output (mimo) arrangement. in both single and array configurations, the suggested antenna architecture with a slightly flared-out feed system is constructed and analyzed with a defected ground. a t-shaped stub is employed in this work together with the defected ground structure. to improve the mimo configuration features, the t stub is introduced, as well as the defected ground. simulations are run on an electromagnetic modelling tool, and parameters such as reflection coefficient, voltage standing wave ratio, gain, radiation pattern, and current distribution plots are measured. in the paper 'analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement', the computer simulation technology (cst) microwave studio software is used to develop and simulate a new quad-band antenna that can function in four different frequency bands. various parameters were improved using parametric analysis to improve the antenna's performance, and various antenna parameters were measured throughout this investigation. the paper entitled 'analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique' addresses filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam), which tackles the problem of adjacent channel leakage and has recently stimulated the interest of numerous researchers. however, the fbmc system's energy efficiency is harmed by the problem of high peak-to-average power ratio (papr).
this paper proposes the partial transmit sequence (pts) with shuffled frog leap (sfl) phase optimization method to reduce the large papr, which is a major drawback of the fbmc-oqam system. matlab is used to measure the experimental parameters and assess the results. the paper 'beamforming in cognitive radio networks using partial update adaptive learning algorithm' investigates cognitive radio technology as a means of increasing bandwidth efficiency. otherwise unused frequencies are employed in this cognitive radio by utilising some of its most powerful resources. one of the key advantages of cognitive radio signals is that they can identify different channels in the spectrum and change the frequencies that are often used. in this research, cognitive radio was developed utilising the beamforming approach, with power allocation as a strategy for the unlicensed transmitter that is completely based on sensing results. the strategy is based on the status of the primary user in a different cognitive radio network, whereas an unlicensed transmitter uses a single antenna and changes the transmitted power. in the paper 'efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data', a unique strategy for data augmentation in medical imaging was developed, which could partially solve the problem of limited availability of chest x-ray data. on the original data, a preprocessing step was performed to reduce the image size from 1024×1024×1 to 128×128×1. from datasets generated by a simple generative adversarial network (gan) and a transfer learning gan, the cnn learnt considerably faster and had improved accuracy, and this could be a one-stop solution for the limited availability of chest x-ray data.
the paper 'image reconstruction using refined res-unet model for mir' proposes a unique content-based res-unet framework that reconstructs the input medical image and performs an efficient image retrieval task. resnet50 is used as an encoder in the proposed work to conduct feature vector encoding. the proposed model's performance is assessed using two benchmark datasets, ild and via/elcap-ct. the suggested model outperforms traditional approaches, as evidenced by the comparison findings. in the paper 'multilayer feature fusion using covariance for remote sensing scene classification', stacked covariance is a new technique for scene categorization utilising remote sensing data that combines visual information from various layers of a cnn. in the proposed stacked covariance (sc) based classification framework, feature extraction is conducted first using a pretrained cnn model, followed by feature fusion using covariance. each feature is the covariance of two separate feature maps, and these features are used for classification with an svm. the proposed sc approach regularly outperforms other classification methods and delivers better results. prof. navarun gupta of the university of bridgeport presented a paper entitled 'spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning'. the spectrum sensing method is used to identify primary user signals in cognitive radios. the least logarithmic absolute difference (llad) algorithm, in which noise strengths are modified at licenced users' sensing points, is proposed to avoid interference between primary and secondary users. estimated noise signals are removed using the proposed approach. to determine the threshold value, the probability of detection (pod) and probability of false alarm (pofa) are assessed.
by sharing the unused spectrum, the proposed energy measurement-based spectrum sensing method is effective in remote health care monitoring and medical telemetry applications. finally, the paper 'classification of brain tumours using artificial neural networks' deals with magnetic resonance (mr) brain images for medical analysis and diagnosis. these images are typically measured in radiology departments to assess the anatomy as well as the general physiological processes of the human body. magnetic resonance imaging employs a strong magnetic field, its gradients, and radio waves to create images of human organs. blood clots or damaged blood vessels in the brain can also be detected using mr brain imaging. artificial neural networks (ann) are used to classify whether an mr brain image contains a benign or malignant tumor; the major goal of the study is to determine whether the tumors are benign or malignant. we thank all the authors who contributed to this special issue, as well as all the reviewers, and extend special thanks and gratitude to prof. francesco lamonaca, acta imeko's editor-in-chief, for his tireless and patient assistance in making this special issue possible. at the same time, our sincere thanks are extended to dr. dirk röske, associate editor, for his assistance and contribution at various levels in the production process. i am honoured to have served as guest editor for this issue, and i hope that it brings readers the recent advancements in signal processing and communication techniques in the context of measurement and sensing systems. md. zia ur rahman, guest editor.
low-power and high-speed approximate multiplier using higher order compressors for measurement systems
acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1
m. v. s. ram prasad, b. kushwanth, p. r. d. bharadwaj, p. t. sai teja
1 department of eece, gitam (deemed to be university), visakhapatnam, ap, india
section: research paper
keywords: approximate computing; approximate compressors; digital circuits; partial products; measurement systems
citation: m. v. s. ram prasad, b. kushwanth, p. r. d. bharadwaj, p. t. sai teja, low-power and high-speed approximate multiplier using higher order compressors for measurement systems, acta imeko, vol. 11, no. 2, article 36, june 2022, identifier: imeko-acta-11 (2022)-02-36
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received january 30, 2022; in final form april 22, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: m. v. s. ram prasad, e-mail: mvsrprasadgitam@gmail.com
abstract: at present, approximate multipliers are used in image processing applications. these approximate multipliers are designed with the help of higher order compressors to decrease the number of addition stages involved in partial product reduction. approximate computing is an effective technique to improve power efficiency and reduce the delay path, and multiple compressors have been designed with it. in this paper, 10:2 compressors are designed, implemented in a 32-bit multiplier, and compared with exact 32-bit multipliers. the proposed higher bit compressors, along with lower bit compressors, are implemented to reduce the delay of the design. this type of digital circuit has much significance in measurement technologies for enabling fast and accurate measurements. with the use of approximate compressors the result may be inexact, but the power consumption and delay are reduced. hence, the proposed multipliers are intended for digital signal processing applications where two or more signals need to be combined. the proposed multiplier, used for implementing an fir filter, resulted in a 27 ns delay, which is far better than the 119 ns of the exact multiplier. these multipliers were also used in image processing applications, and the psnr of the resulting images has been evaluated.
1. introduction
in digital signal processing applications, multipliers play an important role in complex arithmetic operations. approximate computing methods are used to lessen the power consumption; hence, approximate multipliers came into existence and are widely used in digital signal processing applications to lessen the power consumption [1]-[4]. the complex multipliers used in dsp applications are replaced with these approximate multipliers. they can perform multiple operations, such as filtering, convolution, and correlation of digital signals. to perform these complex multiplications, multipliers, adders, and shifters are widely used. designing the multiplier is the hardest part of the design of a dsp system, as multipliers consume more power than the adders and shifters. the multiplication process involves the generation of partial products, the alignment of partial products, the reduction of partial products, and finally the addition of all these partial products. reducing the partial product count is both time and power consuming. multiple techniques have been implemented to overcome this issue, and the approximate computing technique gives better results than all the previous techniques. hence approximate adders came into existence, and compressors were then designed for the addition of multiple bits [5]-[8]. higher order compressors are required to lessen the delay of the addition process. with the use of these approximate compressors, approximate multipliers are designed to improve the performance of digital signal processing applications. exact multipliers consume high power and require a large delay to obtain exact outputs, and their one major defect is that they cannot be optimized further using multiple techniques [9]-[11]. image processing and signal processing applications, however, accept data with errors and give modulated signals. hence approximate compressors and multipliers came into existence, lessening the power consumption along with the delay caused by the carry bits in the addition process. with these approximate multipliers, approximate results are obtained, and these are sufficient for the combining of two signals. the approximate compressors are designed to reduce the computation that occurs in the addition process [12]-[15]. a greater number of inputs are added to produce only two outputs, called the sum and carry bits, and one more output, called the carry out, is also obtained. in general, 4:2 compressors are widely used, and these give better results than the previous architectures. with the use of these approximate compressors, the partial products are reduced and the adder count (gate count) is also reduced. approximate compressors are widely used [16], [17] in multipliers to reduce the power and improve the performance of the circuit. these approximate multipliers need some extra compressors to improve their performance.
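the accuracy-for-power trade-off behind approximate multiplication can be made concrete with a short behavioural model. the sketch below is a hypothetical truncation scheme written for illustration only (the function name, bit width, and number of truncated columns are assumptions, not the design proposed in this paper): it zeroes the least-significant partial-product columns of an unsigned shift-and-add multiplier and measures the worst-case error.

```python
def approx_multiply(a, b, width=8, trunc=3):
    # shift-and-add multiplier that zeroes the 'trunc' least-significant
    # partial-product columns before summation (illustrative approximation,
    # not the 10:2-compressor design of this paper)
    mask = ~((1 << trunc) - 1)
    result = 0
    for i in range(width):
        if (b >> i) & 1:
            result += (a << i) & mask  # drop the truncated low columns
    return result

# the approximation never overestimates, and only the partial products
# shifted by fewer than 'trunc' positions lose any bits
worst = max(a * b - approx_multiply(a, b)
            for a in range(256) for b in range(256))
print("worst-case absolute error:", worst)
```

dropping low columns in this way saves adder cells in exactly the spirit described above: the error stays small and bounded while the partial-product reduction hardware shrinks.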
then the higher order compressors came into existence, and 10:2 compressors were designed. the lower order compressors require more power, and multipliers built with higher order compressors yield better results [18], [19] than approximate multipliers built with the lower order compressors. in this paper, a new 32-bit approximate multiplier designed with higher order compressors achieves low power consumption and lesser delay along with a low error rate.
2. design of compressors
a compressor is a combinational circuit with multiple inputs and multiple outputs; the outputs consist of one sum bit and one carry bit, and along with these, multiple carry-propagating bits are also generated according to the input bit length [6], as shown in figure 1. the compressors add multiple bits of the same bit length and can also add inputs of different bit lengths. these compressors are used to decrease the partial products generated in the multiplication process and are implemented to add the multiple partial products into two partial outputs along with a carry-propagating bit. according to the multiplier bit length, these compressors are designed to reduce the partial product count [9]. with these approximate compressors, the addition of partial products is done easily.
2.1. 4:2 compressor
the basic 4:2 compressor has 4 inputs and gives two outputs, known as the sum and carry bits. along with these input and output pins, one more input pin, called carry in, is added to the compressor, and it gives one more output, called carry out, as shown in figure 2. the carry out generated by the compressor is propagated to the next bit position. these compressors generally have multiple inputs and multiple outputs:
x1 + x2 + x3 + x4 + cin = sum + 2 · (carry + cout) . (1)
2.2. design of the 10:2 compressor
the higher order compressor is implemented in the proposed approximate multipliers to reduce the partial products and the adder count. in general, lower order compressors lead to a huge adder count and high power consumption [20]. in the proposed higher order compressors, xor gates and muxes are used to reduce the operation time along with the power consumption, as shown in figure 3. the higher order compressors replace the normal adders without any disturbance to their truth tables. in approximate compressors, the output is obtained based on the input combinations: the output is either taken directly from an input or produced by a small calculation. in exact compressors, the output is calculated exactly based on multiple gates and equations. hence, in the approximate compressors the performance is improved in terms of delay and power. with the same approach, a 4:2 compressor has been designed, as shown in figure 4. the 4:2 compressor is the basic compressor used in designing every multiplier. in the proposed system the delay is reduced due to the termination of carry signals: the carry is not propagated to the next bits. the proposed design is implemented with xor gates and multiplexers to reduce the delay and power consumption. the 8:2 approximate compressors are also implemented; the xor-mux implementation of the 8:2 compressor is shown in figure 5.
figure 1. n:2 compressor schematic diagram.
figure 2. 4:2 compressor schematic diagram.
figure 3. conventional implementation of 3:2 compressors.
while using the higher order compressors, a greater number of bits are calculated at the same time; in doing so, the power consumption is reduced, which improves the performance of the circuits.
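the identity in equation (1) can be checked with a small behavioural model. the following sketch uses the classic realization of an exact 4:2 compressor from two cascaded full adders (a textbook construction assumed here for illustration; the paper's own xor-mux netlist may differ) and verifies the identity exhaustively:

```python
from itertools import product

def full_adder(a, b, c):
    # one-bit full adder: returns (sum, carry)
    s = a ^ b ^ c
    cy = (a & b) | (c & (a ^ b))
    return s, cy

def compressor_4_2(x1, x2, x3, x4, cin):
    # exact 4:2 compressor built from two cascaded full adders;
    # cout depends only on x1..x3, so it can leave the column early
    s1, cout = full_adder(x1, x2, x3)
    s, carry = full_adder(s1, x4, cin)
    return s, carry, cout

# exhaustive check of equation (1):
# x1 + x2 + x3 + x4 + cin = sum + 2 * (carry + cout)
for bits in product((0, 1), repeat=5):
    x1, x2, x3, x4, cin = bits
    s, carry, cout = compressor_4_2(*bits)
    assert x1 + x2 + x3 + x4 + cin == s + 2 * (carry + cout)
print("equation (1) holds for all 32 input combinations")
```

an approximate compressor deliberately breaks this identity for a few input patterns in exchange for a shorter critical path; the exact model above is the baseline against which such designs are measured.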
With approximate computing the occurrence of errors is high, due to the reduction of the carry bits. Moreover, lower-order compressors consume considerable power while still producing erroneous outputs: when multiple bits are added at the same time, a greater number of lower-order circuits is needed, which increases both the power consumption and the delay. To overcome this issue, higher-order compressors are designed to perform the multi-bit addition. The 10:2 compressor has 10 inputs and gives only 2 outputs [22], [23]. This compressor has fourteen XOR gates and six multiplexers. Each XOR gate receives two primary inputs and generates a single output. The outputs of the XOR gates propagate to the multiplexers, in which the final multiplexer generates the carry bit and the final XOR gate generates the sum output.

3. Design of the 32-bit approximate multiplier

To obtain exact multiplication, exact compressors are used in the multiplier design process. For approximate multipliers, approximate compressors are utilized wherever a complex multiplication process is implemented. These approximate compressors are employed in DCT applications such as image processing and signal processing. Here, 32-bit approximate multipliers are implemented: the higher-order compressor, namely the 10:2 compressor, is designed and embedded in the 32-bit approximate multipliers. In the design of the 32-bit approximate multipliers, both higher-order and lower-order compressors are implemented to reduce the adder count and the delay of the multipliers. In exact multipliers the output is calculated using normal adders, which give accurate results but consume more power. In the approximate multipliers, higher-order circuits are used to reduce the partial-product accumulation and to reduce the delay by neglecting the carry propagation to the next bits. Hence the power consumption of the approximate multipliers is reduced compared with the exact multipliers.

Figure 4. Block diagram of the 4:2 compressor using XOR-MUX.
Figure 5. Block diagram of the proposed 10:2 compressor based on XOR-MUX modules.

An 8-bit approximate multiplier with its reduction stages is shown in Figure 6. The proposed higher-order approximate multiplier contains a higher-order compressor, the 10:2 compressor, and alongside it lower-order compressors such as the 8:2 and 4:2 compressors are also used in this design. The proposed compressors are designed with multiplexers and XOR gates only. The proposed multiplier needs only 3 addition stages, far fewer than multiplier designs that use lower-order compressors alone. This indicates that the previous approximate and exact multipliers are less suitable for the design of FIR filters.

4. Simulation results

In this section, the proposed multiplier is implemented in the Xilinx ISE Design Suite with the help of Verilog HDL. Figure 7 shows the simulation results obtained in the Xilinx ISim simulator, where the inputs are a and b and the output of the exact multiplier is obtained on the y signal.

Figure 6. 8-bit approximate multiplier with reduction stages.
Figure 7. Simulation result of the 32-bit exact multiplier.
Figure 8. Technology schematic of the 32-bit exact multiplier.

In the technology schematic of the proposed multiplier, the logic of the design is mapped onto the LUTs (look-up tables) already present in the Xilinx device; the report shows how many LUTs and IOBs are utilized by the design. The proposed multipliers are used in FIR filters to multiply the message signal and the carrier signal.
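The three-addition-stage claim can be checked with a back-of-envelope stage count. The greedy model below is an assumption for illustration, not the authors' exact reduction tree: at each stage every group of n partial-product rows collapses to 2 rows through an n:2 compressor, leftover rows pass through, and the final 2 rows need one carry-propagate addition.

```python
def reduction_stages(rows, compressors=(10, 8, 4)):
    """Greedy estimate of compression stages for a partial-product
    matrix of the given height, using n:2 compressors."""
    stages = 0
    while rows > 2:
        out = 0
        remaining = rows
        for n in compressors:
            while remaining >= n:
                remaining -= n
                out += 2        # each n:2 compressor emits 2 rows
        out += remaining        # rows too few to compress pass through
        rows = out
        stages += 1
    return stages + 1           # +1 for the final 2-row addition

print(reduction_stages(32))     # 32 rows -> 8 -> 2, plus final add: 3
```

Under this model a 32-row matrix needs three 10:2 compressions plus one 8:2 stage and a final addition, matching the 3-stage figure quoted above; with only 4:2 compressors the same matrix would take more stages.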
For this process, exact multipliers are generally used; with exact multipliers the power consumption and the delay increase. To remove these extra costs, the proposed approximate multipliers are used in FIR filter applications. The technology schematic of the 32-bit exact multiplier is shown in Figure 8. Figure 9 shows the simulation results of the proposed 32-bit approximate multiplier; these results differ from those of the exact multiplier because of the approximate compressors used in the proposed design. The proposed approximate multiplier is designed entirely on the basis of approximate computing, in which the output is calculated approximately in order to reduce the power consumption and the delay. Table 1 shows a comparison of 8-, 16- and 32-bit exact and approximate multipliers, and Table 2 shows a comparison of 32-bit exact and approximate multipliers in FIR filters.

5. Applications

In this work a modified 10:2 compressor based on XOR-MUX modules is implemented. Using this proposed compressor, a 32-bit multiplier is designed, and the proposed design obtains better results. The proposed multiplier is used in multimedia processing to multiply two different images, after which the peak signal-to-noise ratio (PSNR) is evaluated. Data transmission in digital signal processing applications requires convolution and correlation processes rather than exact multiplication. Hence, although the proposed approximate multiplier does not achieve 100 % accuracy, it is well suited to convolution and correlation processes.

6. Conclusion

Approximate 32-bit multipliers are implemented using the modified 10:2 compressor. The approximate multipliers provide better results when compared with the exact multipliers, and the proposed multiplier achieves better PSNR values than the previous designs.
The accuracy of the processed images shows that the proposed multipliers are very effective compared with the exact multipliers. The proposed approximate multiplier is well suited to multimedia applications, where two or more signals must be combined without requiring an exact output; accordingly, these approximate multipliers are intended only for multimedia and signal-processing applications. Hence the proposed approximate multiplier achieves better results compared with the previous architectures.

Figure 9. Simulation result of the 32-bit approximate multiplier.

Table 1. Comparison of 8-, 16- and 32-bit exact and approximate multipliers.

Multiplier | Delay (ns) | LUT count
Exact 8-bit | 7.664 | 124
Approximate 8-bit | 2.969 | 84
Exact 16-bit | 12.007 | 551
Approximate 16-bit | 4.689 | 242
Exact 32-bit | 20.55 | 2255
Approximate 32-bit | 18.898 | 1705

Table 2. Comparison of 32-bit exact and approximate multipliers in FIR filters.

Filter | Delay
Accurate | 119
Approximate | 27

References
[1] J. Han, M. Orshansky, Approximate computing: an emerging paradigm for energy-efficient design, 18th IEEE European Test Symposium (ETS), 2013, pp. 1-6. doi: 10.1109/ets.2013.6569370
[2] K. Roy, A. Raghunathan, Approximate computing: an energy-efficient computing technique for error resilient applications, IEEE Computer Society Annual Symposium on VLSI, 2015, pp. 473-475. doi: 10.1109/isvlsi.2015.130
[3] S. Surekha, M. Z. Ur Rahman, N. Gupta, A low complex spectrum sensing technique for medical telemetry system, Journal of Scientific and Industrial Research, vol. 80, no. 5, 2021, pp. 449-456. Online [accessed 27 June 2022]: http://op.niscpr.res.in/index.php/jsir/article/view/41462
[4] V. Sze, Y.-H. Chen, T.-J. Yang, J. Emer, Efficient processing of deep neural networks: a tutorial and survey, Proc. of the IEEE, vol. 105, no. 12, 2017. doi: 10.1109/jproc.2017.2761740
[5] J. Emer, V. Sze, Y.-H. Chen, Hardware architectures for neural networks, tutorial, International Symposium on Computer Architecture, 2017. Online [accessed 27 June 2022]: http://eyeriss.mit.edu/tutorial-previous.html
[6] N. Mantravadi, M. Z. Ur Rahman, S. Surekha, N. Guptha, Spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning, Acta IMEKO, vol. 11, no. 1, 2022, pp. 1-7. doi: 10.21014/acta_imeko.v11i1.1231
[7] I. Qiqieh, R. Shafik, G. Tarawneh, D. Sokolov, A. Yakovlev, Energy-efficient approximate multiplier design using bit significance driven logic compression, 21st Design, Automation & Test in Europe Conference & Exhibition (DATE), Lausanne, Switzerland, 27-31 March 2017. doi: 10.23919/date.2017.7926950
[8] T. Yang, T. Ukezono, T. Sato, A low-power high-speed accuracy controllable approximate multiplier design, 23rd Asia and South Pacific Design Automation Conference, 2018. doi: 10.1109/aspdac.2018.8297389
[9] D. Esposito, D. De Caro, E. Napoli, N. Petra, A. G. M. Strollo, On the use of approximate adders in carry-save multiplier-accumulators, International Symposium on Circuits and Systems, 2017. doi: 10.1109/iscas.2017.8050437
[10] M. Z. U. Rahman, S. Surekha, K. P. Satamraju, S. S. Mirza, A. Lay-Ekuakille, A collateral sensor data sharing framework for decentralized healthcare systems, IEEE Sensors Journal, vol. 21, no. 24, 2021, pp. 27848-27857. doi: 10.1109/jsen.2021.3125529
[11] R. Marimuthu, D. Bansal, S. Balamurugan, P. S. Mallick, Design of 8:4 and 9:4 compressor for high speed multiplication, American Journal of Applied Sciences, 10(8), 2013, p. 893. doi: 10.3844/ajassp.2013.893.900
[12] B. Silveira, G. Paim, B. Abreu, M. Grellert, C. M. Diniz, E. A. C. da Costa, S. Bampi, Power efficient sum of absolute differences architecture using adder compressors for integer motion estimation design, IEEE Transactions on Circuits and Systems I: Regular Papers, 64(12), 2017, pp. 326-337. doi: 10.1109/tcsi.2017.2728802
[13] T. Schiavon, G. Paim, M. Fonseca, E. Costa, S. Almeida, Exploiting adder's compressor for power-efficient 2-D approximate DCT, IEEE 7th Latin American Symposium on Circuits & Systems (LASCAS), 2016, pp. 3183-3186. doi: 10.1109/lascas.2016.7451090
[14] M. V. S. Ramprasad, P. V. Kodavanti, Design of high speed and area efficient 16-bit MAC architecture using hybrid adders for sustainable applications, JGE, vol. 10, issue 11, Nov. 2020.
[15] M. V. S. Ramprasad, B. Suribabu Naick, Z. Z. Aarif, Design and implementation of high speed 16-bit approximate multiplier, IJITEE, vol. 8, issue 4, Feb. 2019.
[16] A. Momeni, J. Han, P. Montuschi, F. Lombardi, Design and analysis of approximate compressors for multiplication, IEEE Trans. on Computers, vol. 64, no. 4, 2015, pp. 984-994. doi: 10.1109/tc.2014.2308214
[17] C. Liu, J. Han, F. Lombardi, A low-power, high-performance approximate multiplier with configurable partial error recovery, Proc. of IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE), 2014. doi: 10.7873/date.2014.108
[18] G. Zervakis, K. Tsoumanis, S. Xydis, D. Soudris, K. Pekmestzi, Design-efficient approximate multiplication circuits through partial product perforation, IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. 24, no. 10, 2016, pp. 3105-3117. doi: 10.1109/tvlsi.2016.2535398
[19] A. Tarannum, M. Z. Ur Rahman, T. Srinivasulu, An efficient multi-mode three phase biometric data security framework for cloud computing-based servers, International Journal of Engineering Trends and Technology, 68(9), 2020, pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[20] A. Cilardo, D. De Caro, N. Petra, F. Caserta, N. Mazzocca, A. G. M. Strollo, E. Napoli, High speed speculative multipliers based on speculative carry-save tree, IEEE Transactions on Circuits and Systems I, vol. 61, no. 12, 2014. doi: 10.1109/tcsi.2014.2337231
[21] J. Liang, J. Han, F. Lombardi, New metrics for the reliability of approximate and probabilistic adders, IEEE Trans. on Computers, vol. 62, no. 9, 2013, pp. 1760-1771. doi: 10.1109/tc.2012.146
[22] P. Kulkarni, P. Gupta, M. D. Ercegovac, Trading accuracy for power in a multiplier architecture, J. Low Power Electron., vol. 7, no. 4, 2011, pp. 490-501. doi: 10.1109/vlsid.2011.51
[23] C.-H. Lin, C. Lin, High accuracy approximate multiplier with error correction, Proc. IEEE 31st Int. Conf. Computer Design, Sep. 2013, pp. 33-38. doi: 10.1109/iccd.2013.6657022

How to stretch system reliability exploiting mission constraints: a practical roadmap for industries

ACTA IMEKO
ISSN: 2221-870X
December 2022, Volume 11, Number 4, 1-6

acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 1

How to stretch system reliability exploiting mission constraints: a practical roadmap for industries

Marco Mugnaini1, Ada Fort1

1 University of Siena, DIISM Department, via Roma 56, Siena, Italy

Section: RESEARCH PAPER

Keywords: reliability design; mission; reliability assessment; reliability enhancement

Citation: Marco Mugnaini, Ada Fort, How to
stretch system reliability exploiting mission constraints: a practical roadmap for industries, Acta IMEKO, vol. 11, no. 4, article 17, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-17

Section Editor: Francesco Lamonaca, University of Calabria, Italy

Received August 14, 2022; in final form November 19, 2022; published December 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Marco Mugnaini, e-mail: marco.mugnaini@unisi.it

Abstract: Reliability analyses can be committed to companies by customers willing to verify whether their products comply with the major international standards, or simply to verify a design prior to market deployment. Nevertheless, these analyses may be required at the very preliminary stages of design, or when the design is already in progress, due to low organizational capabilities or simple delays in the project implementation process. The results may then be far from the market or customer target, with a subsequent need to redesign the whole asset. Of course, not all cases fall in the worst scenario, and with some additional considerations on mission definition it may be possible to comply with the proposed reliability targets. In this paper the authors provide an overview of the approach that can be followed to achieve the reliability target even when the project is still ongoing, providing a practical case study.

1. Introduction

In the literature it is possible to find many examples of advanced research on reliability assessment and analysis. Most of these papers discuss advanced methods for reliability allocation and improvement. Nevertheless, such methods often lack practical implementation details, may turn out to be tricky to apply, or require a set of information not always available in practical cases. For example, in [1] the authors provide an approach based on Bayesian analysis for parameter estimation, presenting an interesting method for process-parameter evaluation that embeds a priori information to compensate for a lack of data. In [2] the authors present a paper addressing reliability-prediction modelling based on Kalman filtering applied to ion batteries, while in [3] AI approaches are used to solve reliability problems in oil & gas contexts. Other papers, such as [4]-[5], describe general methods for reliability assessment based on the most commonly used databases, such as MIL-HDBK-217F, OREDA or others, as a function of temperature and environment. Still other papers present fine-grained analyses of predefined structures, as in [6], where the advantages of different hardware solutions are compared with the aim of showing how small changes in the practical implementation may lead to different results.

Usually, such achievements are obtained by means of different architectures without considering the implications of mission changes [6]-[12]. On a practical basis, moreover, the systematic lack of confidence bounds when presenting results, and the impossibility for companies to provide an analytic description beyond synthetic figures such as the over-used mean time to first failure or mean time between failures (MTTF, MTBF), make the transmission of information to direct customers or other parties very difficult [13]-[20]. Correct reliability design can be successfully approached by means of theoretical analysis if the design is followed from the very beginning of product development [20]-[21]. Unfortunately, especially in small companies where resources are very limited, designers usually underestimate the reliability-allocation problem, deferring such analysis to a subsequent phase. In general, it is not easy to find in the literature a practical guide for companies that embeds both the theoretical and the application implications in a suitable way [22]. Some applications, on the contrary, address the trust of measurements without taking into consideration hardware and software reliability, even in industrial contexts [23]-[25]. In this paper the authors show in a practical way how the implications of mission definition can be exploited for reliability evaluations. In Section 1 there is an introduction describing the critical aspects of reliability assessment and design compared with the present literature. In Section 2 there is a description of the formal approach used to describe borderline conditions where predictions may be far from the desired results. In Section 3 a case study about a general electronic board design is treated, where it is possible to see how reliability targets that seem far from the original design may be matched just by introducing considerations on the mission profile. Finally, in Section 4 the conclusions are discussed.

2. Models and methods

Reliability design always follows well-established rules from an academic standpoint. Starting from a problem definition and a mission description, a design flow diagram and a subsequent reliability block diagram can be built. Nevertheless, the most complicated part of the reliability evaluation and function description is the choice of the proper failure rate or probability density function for the components describing the item to be designed. An easy and reasonable approach for companies is to rely on their a priori knowledge and build Bayesian-based models. As an alternative, companies may exploit internal or external databases, with the risk of selecting components with similar failure models but used in very different contexts, resulting in too conservative evaluations or completely wrong forecasts.
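The Bayesian route mentioned above can be made concrete with the standard gamma-Poisson conjugate pair for a constant failure rate: a Gamma(a, b) prior on the rate combines with an observed failure count over accumulated field hours into a closed-form posterior. The prior parameters below are hypothetical, purely to show how a priori company experience and new data combine.

```python
def gamma_poisson_update(a, b, failures, hours):
    """Conjugate Bayesian update for a constant failure rate:
    Gamma(a, b) prior on lambda, Poisson likelihood for the failure
    count observed over the given hours -> Gamma(a + k, b + T)."""
    return a + failures, b + hours

# Hypothetical prior encoding roughly '2 failures per 1e6 h' of
# company experience (a=2, b=1e6 h), then 1 failure observed in
# 5e5 field hours.
a_post, b_post = gamma_poisson_update(2.0, 1e6, 1, 5e5)
print(a_post / b_post)   # posterior mean rate: 2e-6 per hour
```

The posterior mean (a + k) / (b + T) shows how a vague prior is gradually dominated by field data, which is exactly the "a priori information to compensate for a lack of data" idea cited from [1].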
Another issue is the overall absence of confidence bounds in company forecasts, which can make the outcome useless. Mission-profile definition is one of the most critical of the previously cited issues that make reliability forecasts subject to interpretation. As can easily be understood, the same item used in a different environment, or with a different time apportionment or duty cycle, may yield a very different reliability prediction. On the other hand, mission definition is very often neglected as a powerful tool to stretch system, subsystem or item reliability by tailoring the application to a more realistic scenario. Figure 1 shows the flow diagram of an approach commonly used in industrial design concerning reliability aspects. This simplified version of the design flow is often applied in a generalized way, which may lead to underestimating specific important details. Two aspects should be underlined: field data are not always available for similar project mission profiles, and databases may be used in an improper manner, producing too conservative or too optimistic forecasts. Mission definition is another aspect that is often not investigated properly, with the limitation of providing only a general mission description instead of subdividing it into a set of several sub-missions covering the whole lifecycle of the item to be designed. Figure 2 represents a more detailed view of the approach that should be taken into consideration by companies in the general development phase whenever reliability (and, more generally, reliability, availability, safety and maintainability, RAMS) aspects are involved.

3. Case study

Let us consider a system designed to drive some signalling infrastructure in the railway context.
Such an example nicely fits the scope of this paper, since the safety and reliability requirements are so tight that not considering the mission profile could lead to several further system redesigns. Figure 3 shows the general architecture, composed of a power start-up system, a vital power source, a set of configuration memories, a pair of microprocessors implementing the 2oo2 architecture, an optional third CPU for managing external communication, a set of auxiliary electronics, a drive output system and a set of actuators. Sample equations for specific components are available in reliability standards for discrete devices and semiconductors, and they take the general form of equation (1):

λ_c = λ_b · ∏_{i=1}^{n} Π_i ,   (1)

where λ_c is the overall failure rate, λ_b is the basic failure rate without any correction factor except those due to temperature, stress and the inner model, and the Π_i are the corrective factors depending on the specific model characteristics, environment and quality. The generalized mission for this kind of 2oo2 architecture for a signalling system can be summarized by the following sentence: "being able to function safely for at least 40000 hours". Companies' general approach is to try to design an architecture complying with the Safety Integrity Level (SIL) 4 safety standard (which is a must in such contexts) and with the MTBF requirement described above, following a blind approach. Additional requirements may include the use of a specific

Figure 1. General flow diagram used in companies when designing a project to fulfil reliability requirements.
Figure 2. Project reliability definition embedding operative conditions and mission definition.

database for component failure rates, which can be the MIL-HDBK-217F with the part-stress approach.
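Equation (1) is straightforward to evaluate once the Π factors are known; the numbers below are invented placeholders, not taken from any handbook entry, and serve only to show the multiplicative structure of the part-stress model.

```python
from math import prod  # Python 3.8+

def corrected_failure_rate(lambda_b, pi_factors):
    """Equation (1): part failure rate = base rate times the product
    of the corrective Pi factors (environment, quality, stress...)."""
    return lambda_b * prod(pi_factors)

# Hypothetical part: base rate 0.05 F/1e6 h, with illustrative
# pi_E = 4.0 (environment) and pi_Q = 2.5 (quality).
lam = corrected_failure_rate(0.05, [4.0, 2.5])
print(lam)   # 0.5 failures per 1e6 h
```

Because the factors multiply, a single pessimistic Π (for instance a harsher environment class than the item really sees) inflates the whole part rate, which is why the database and environment choices discussed below matter so much.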
Such new information lowers our first estimation to the following MTBF (we are now neglecting the confidence bounds, just to convey how this figure can change as the scenario changes slightly). An additional consideration could be added by embedding in the analysis the environment and the enclosure temperature, which for such applications in middle Europe can be standardized as 40 °C and ground fixed (according to the selected database). If the designers now want to move away from the preliminary MTBF estimate, several options come into play. Components with improved quality could be considered, even though such information is in fact not reported in any component datasheet; at the very least, the military-handbook quality classification cannot easily be found, so this approach requires an effort that designers rarely undertake in practice. As an alternative, it is possible to evaluate, for standard discrete components (especially capacitors and resistors), the derating factors, which most of the time reduce to just a voltage ratio or a power ratio. Again, this approach may end up being applied in a generalized way, due to lack of resources and time, considering the number of such components present in a large electronic project like a signalling system for railway applications. A more effective option, often neglected by most designers, is the possibility of allocating to each subsystem a different mission profile or a different duty cycle. These two concepts are intrinsically embedded in the preliminary design phase as well as in the detailed one; nevertheless, the possibility of reaching a better design by exploiting such features is usually underestimated during the assessment phase. Considering Figure 4, it is possible to see that the three configuration memories and the start-up power unit certainly have a different mission with respect to the overall design, and may additionally have a different duty cycle.
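The effect of a per-subsystem duty cycle can be sketched with the subsystem rates of Table 1. The set of subsystems scaled below, and the assumption of a zero dormant failure rate, are illustrative choices (the paper's own Table 2 accounting differs slightly, zeroing those items outright), so the numbers only reproduce the order of the improvement.

```python
# Subsystem failure rates from Table 1 (failures per 1e6 h).
rates = {
    "configuration memories": 6.03, "cpu 2oo2": 2.204, "clocks": 0.574,
    "eth1": 0.203, "eth2": 0.203, "start power up": 0.506,
    "gen vit 2oo2": 0.715, "general electronics": 1.147, "cpu c": 2.237,
    "vital power": 1.884, "pwr start up": 0.841, "actuator": 2.164,
}
# Assumed set of rarely active subsystems (illustrative choice).
low_duty = {"configuration memories", "start power up",
            "cpu c", "vital power"}
duty = 0.01                      # active 1 % of a 24 h day

lam_full = sum(rates.values())   # no mission consideration
lam_mission = sum(r * (duty if n in low_duty else 1.0)
                  for n, r in rates.items())

print(1e6 / lam_full)            # ~53.5 kh, as in Table 1
print(1e6 / lam_mission)         # > 120 kh once the mission is modelled
```

Nothing in the hardware changes here: only the mission description does, yet the series-system MTBF more than doubles, which is the central argument of the paper.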
Once such considerations have been brought into the design, if the target MTBF is still not achieved, further improvement may pass through component reduction or alternative redundant configurations. Where feasible, the first approach is preferred, because these applications usually imply safety considerations as well. Figure 5 shows an additional improvement due to a resizing of the CPU capabilities: the additional CPU C has been merged into the others, removing the additional configuration memory and the corresponding circuitry. It is therefore possible to take such units out of the original schematics and to modify the reliability block diagram (RBD) accordingly, as shown in Figure 6. Simulation results can be obtained with commercial software; in this case the Relyence part calculator has been used to compare the outcomes of the different configurations. Table 1 shows the results for the system without any mission impact on the subsystems, using the MIL-HDBK-217F database. In Table 2, mission considerations as well as power and voltage derating have been included in the evaluation. The configuration memories, CPU C and some ancillary electronics have been used with a 1 % duty cycle,

Figure 3. Sample rough electro-mechanical project of a system for railway applications with reliability and safety requirements, to be interfaced with the signalling system.
Figure 4. Reviewed functional block diagram of the interfacing system for railway applications, where the mission contribution is included.
Figure 5. Reduced system design minimizing the impact of low-duty-cycle components on the original design, improving the system reliability without affecting the system architecture.
Figure 6. Comparison of the rough RBD of the original signalling interfacing system (a) and the reduced one (b) embedding considerations on the mission definition.
reflecting the actual use over a 24-h real timescale. These new considerations led to the results shown in Table 2, where a significant improvement of the overall MTBF is achieved. If the result is still not acceptable for the design, another approach could be to negotiate a new reference database. One of the most accepted is the ANSI VITA one, in which the components of MIL-HDBK-217F have been updated to account for the advancement of technology; for example, microcontrollers, which were not present in the previous version, can now be considered as a subset of microprocessors. The drawback is that this database is not an independent one, having been derived with the contribution of several companies. The results of this significant improvement are shown in Table 3. Finally, Table 4 compares the three kinds of results that can be obtained just by applying these different deviations from a standard approach. A final overview of the different implementations depending on the environmental variation is shown in Figure 7 and Figure 8. Three different environments, ground benign, ground fixed and ground mobile (GB, GF and GM), are compared in these figures, showing the different contributions in terms of failures per million hours (FPMH) depending on the environment selected and on the database used: in Figure 7 the MIL-HDBK-217F database has been exploited, while in Figure 8 the ANSI VITA one has been used. It is important to highlight how the differences in the results are kept even under

Table 1. Results of simulation not embedding mission impact on subsystems, exploiting the MIL-HDBK-217F database.

Name | Failure rate in F/(10^6 h) | MTBF in h | Failure rate in %
Main board | 18.708 | 53453.07 | 100.00
Configuration memories | 6.03 | 165837.5 | 32.23
CPU 2oo2 | 2.204 | 453720.5 | 11.78
Clocks | 0.574 | 1742160 | 3.07
ETH1 | 0.203 | 4926108 | 1.09
ETH2 | 0.203 | 4926108 | 1.09
Start power up | 0.506 | 1976285 | 2.70
Gen vit 2oo2 | 0.715 | 1398601 | 3.82
General electronics | 1.147 | 871839.6 | 6.13
CPU C | 2.237 | 447027.3 | 11.96
Vital power | 1.884 | 530785.6 | 10.07
PWR start up | 0.841 | 1189061 | 4.50
Actuator | 2.164 | 462107.2 | 11.57

Table 2. Results of simulation embedding mission profile and duty cycle on subsystems, exploiting the MIL-HDBK-217F database.

Name | Failure rate in F/(10^6 h) | MTBF in h | Failure rate in %
Main board | 8.052 | 124192.75 | 100.00
Configuration memories | 0 | 0 |
CPU 2oo2 | 0.575 | 1739130.4 | 7.15
Clocks | 2.204 | 453720.51 | 27.37
ETH1 | 0.203 | 4926108.4 | 2.52
ETH2 | 0.203 | 4926108.4 | 2.52
Start power up | 0 | 0 |
Gen vit 2oo2 | 0.715 | 1398601.4 | 8.88
General electronics | 1.147 | 871839.58 | 14.24
CPU C | 0 | 0 |
Vital power | 0 | 0 | 0
PWR start up | 0.841 | 1189060.6 | 10.44
Actuator | 2.164 | 462107.21 | 26.88

Table 3. Results of simulation embedding mission profile and duty cycle on subsystems, exploiting the ANSI VITA database.

Name | Failure rate in F/(10^6 h) | MTBF in h | Failure rate in %
Main board | 4.816 | 207641.196 | 100.00
Configuration memories | 0 | 0 | 0
CPU 2oo2 | 2.263 | 441891.295 | 46.99
Clocks | 2.127 | 470145.745 | 44.17
ETH1 | 0.024 | 41666666.7 | 0.50
ETH2 | 0.024 | 41666666.7 | 0.50
Start power up | 0 | 0 |
Gen vit 2oo2 | 0.67 | 1492537.31 | 13.91
General electronics | 0.995 | 1005025.13 | 20.66
CPU C | 0 | 0 | 0
Vital power | 0 | 0 | 0
PWR start up | 0.099 | 10101010.1 | 2.06
Actuator | 0.614 | 1628664.5 | 12.75

Table 4. Comparison of the MTBF and failure rates of the three improvements proposed in the analysis. Confidence bounds are 95 %.

Name | Failure rate in F/(10^6 h) | MTBF in h
Main board ANSI VITA | 4.815748 | 207652.05
Main board MHDBK 217 | 8.052190 | 124189.81
Main board MHDBK 217 NM | 18.708161 | 53452.61

Figure 7.
System behaviour under three different environments, ground benign (GB), ground fixed (GF) and ground mobile (GM), according to MIL-HDBK-217F.
Figure 8. System behaviour under three different environments, ground benign (GB), ground fixed (GF) and ground mobile (GM), according to ANSI VITA.

the changes of environment, making this latter database more attractive for non-fully conservative and more modern approaches. The process to be followed when assessing reliability requirements after the preliminary design is the one described in Figure 9. The first step consists in negotiating the database to be used with the final customer, since the results are strongly affected by this choice. Then, after the bill of materials (BOM) has been defined and the mission correctly described, it is crucial to identify which subsystems are subject to duty-cycle modification. Component quality level, even if important, is not crucial, especially with reference to MIL-HDBK-217F, because such information is difficult to gather from any commercial supplier. Temperature information, instead, is vital, as is the operative environment, because a great part of the final analysis outcome depends on them.

4. Conclusions

In this paper the authors have analysed a recurring problem during design phases. Usually, the producers of electro-mechanical assemblies are more focused on general performance than on reliability verification. This approach inevitably implies a huge effort in final redesign and long negotiations with final customers, due to the lack of procedural design. The authors tried to highlight that sometimes no redesign is needed, and that a more precise and detailed description of the system mission may allow a redefinition of the reliability evaluation criteria.
As a general concept, these aspects should fall within the optimized best practices of item design, but as a matter of fact, due to high volumes or project complexity, they may be neglected. It has been shown on an actual example how, by following some basic steps as suggested in this manuscript, it is possible to minimize the impact of redesign and achieve satisfying results. This paper tries to fill a gap not often described in the literature, because it deals with some peculiar aspects of a specific design, but its general rules can be applied to almost any engineering project. The analysis moreover shows the possibility of exploiting different databases, which are usually not selected in common projects due to the lack of information on their applicability, even when there are changes in the operating environment.

Acknowledgement

This study has been conducted exploiting a research licence of Part Analysis provided by Relyence Software.

References
[1] M. Mugnaini, M. Catelani, G. Ceschini, A. Masi, F. Nocentini, Pseudo time-variant parameters in centrifugal compressor availability studies by means of Markov models, Microelectronics Reliability, vol. 42, 2002, pp. 1373-1376. doi: 10.1016/s0026-2714(02)00152-x
[2] Van-Trinh Hoang, N. Julien, P. Berruet, Design under constraints of availability and energy for sensor node in wireless sensor network, Conference on Design and Architectures for Signal and Image Processing (DASIP), Karlsruhe, Germany, 23-25 October 2012, pp. 1-8.
[3] S. B. Guedria, J.-F. Frigon, B. Sanso, An intelligent high availability AMC design, IEEE Radio and Wireless Symposium, Santa Clara, CA, USA, 15-18 January 2012, pp. 159-162. doi: 10.1109/rws.2012.6175334
[4] Q. Weiwei, J. Jingshan, J. Yazhou, Research on the numerical control system reliability model using censored data, 16th Int. Conference on Industrial Engineering and Engineering Management (IE&EM '09), Beijing, China, 21-23 October 2009, pp. 1204-1207.
doi: 10.1109/icieem.2009.5344468 [5] k. zhao, d. steffey, analysis of field performance using interval-censored incident data, annual reliability and maintainability symposium (rams), fort worth, tx, usa, 26-29 january 2009, pp. 43-46. doi: 10.1109/rams.2009.4914647 [6] t. addabbo, a. fort, m. mugnaini, v. vignoli, e. simoni, m. mancini, availability and reliability modeling of multicore controlled ups for datacenter applications, reliability engineering and system safety, vol. 149, may 2016, pp. 56-62. doi: 10.1016/j.ress.2015.12.010 [7] s. j. briggs, m. bartos, r. arno, reliability and availability assessment of electrical and mechanical systems, thirtieth ias annual meeting, industry applications conference (ias '95), conference record of the 1995 ieee, vol. 3, pp. 2273-2281. doi: 10.1109/ias.1995.530592 [8] g. ceschini, m. mugnaini, a. masi, a reliability study for a submarine compression application, microelectronics reliability, vol. 42, september-november 2002, pp. 1377-1380. doi: 10.1016/s0026-2714(02)00153-1 [9] m. catelani, m. mugnaini, r. singuaroli, effects of test sequences on the degradation analysis in high speed connectors, microelectronics reliability, vol. 40, august-october 2000, pp. 1461-1465. doi: 10.1016/s0026-2714(00)00150-5 [10] chun su, jinyun shen, a novel multi-hidden semi-markov model for degradation state identification and remaining useful life estimation, quality and reliability engineering international journal, vol. 29, issue 8, 8 october 2012, pp. 1181-1192. doi: 10.1002/qre.1469 [11] d. zamalieva, a. yilmaz, t. aldemir, online scenario labeling using a hidden markov model for assessment of nuclear plant state, reliability engineering and system safety, vol. 110, february 2013, pp. 1-13. doi: 10.1016/j.ress.2012.09.002 [12] diego alejandro tobon-mejia, kamal medjaher, noureddine zerhouni, g. tripot, a data-driven failure prognostics method based on mixture of gaussians hidden markov models, ieee transactions on reliability, vol.
61, issue 2, pp. 491-503. doi: 10.1109/tr.2012.2194177 [13] l. e. baum, t. petrie, statistical inference for probabilistic functions of finite state markov chains, ann. math. stat., vol. 37, no. 6, december 1966, pp. 1554-1563. [14] l. e. baum, g. r. sell, growth functions for transformations on manifolds, pacific j. math., vol. 27, no. 2, 1968, pp. 211-227. [15] l. rabiner, a tutorial on hidden markov models and selected applications in speech recognition, proc. of the ieee, vol. 77, no. 2, february 1989. doi: 10.1109/5.18626 figure 9. general designers' roadmap to simplify reliability requirements achievement. acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 6 [16] m. bicego, c. acosta-munoz, m. orozco-alzate, classification of seismic volcanic signals using hidden-markov-model-based generative embeddings, ieee transactions on geoscience and remote sensing, vol. 51, issue 6, june 2013, pp. 3400-3409. doi: 10.1109/tgrs.2012.2220370 [17] a. ben salem, a. muller, p. weber, dynamic bayesian networks in system reliability analysis, ifac proceedings volumes, vol. 39, issue 13, 2006, pp. 444-449. doi: 10.3182/20060829-4-cn-2909.00073 [18] p. vrignat, m. avila, f. duculty, s. aupetit, m. slimane, f.
kratz, maintenance policy: degradation laws versus hidden markov model availability indicator, proc. of the institution of mechanical engineers, part o: journal of risk and reliability, vol. 226, issue 2, 2012, pp. 137-155. doi: 10.1177/1748006x11406335 [19] y. f. li, r. peng, availability modeling and optimization of dynamic multi-state series-parallel systems with random reconfiguration, reliability engineering and system safety, vol. 127, july 2014, pp. 47-57. doi: 10.1016/j.ress.2014.03.005 [20] s. s. rao, reliability engineering design, mcgraw hill. [21] a. birolini, reliability engineering: theory and practice, springer, 6th edition, 2010. [22] a. fort, f. bertocci, m. mugnaini, v. vignoli, v. gaggii, a. galasso, m. pieralli, availability modeling of a safe communication system for rolling stock applications, proc. of the ieee i2mtc 2013 conference, minneapolis, mn, usa, 06-09 may 2013, pp. 427-430. doi: 10.1109/i2mtc.2013.6555453 [23] t. addabbo, a. fort, r. biondi, s. cioncolini, m. mugnaini, s. rocchi, v. vignoli, measurement of angular vibrations in rotating shafts: effects of the measurement setup nonidealities, ieee transactions on instrumentation and measurement, vol. 62, no. 3, 2013, pp. 532-543. doi: 10.1109/tim.2012.2218691 [24] a. lay-ekuakille, s. ikezawa, m. mugnaini, r. morello, c. de capua, detection of specific macro and micropollutants in air monitoring: review of methods and techniques, measurement, vol. 98, february 2017, pp. 49-59. doi: 10.1016/j.measurement.2016.10.055 [25] t. addabbo, a. fort, m. mugnaini, l. parri, s. parrino, a. pozzebon, v. vignoli, an iot framework for the pervasive monitoring of chemical emissions in industrial plants, proc. of the workshop on metrology for industry 4.0 and iot, brescia, italy, 16-18 april 2018, pp. 269-273.
doi: 10.1109/metroi4.2018.8428325 uncertainty of factor z in the gravimetric volume measurement acta imeko issn: 2221-870x september 2021, volume 10, number 3, 198 – 201 acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 198 uncertainty of factor z in gravimetric volume measurement mar lar win1 1 national institute of metrology (myanmar) nimm, yangon, myanmar section: research paper keywords: density of air; density of water; value of factor z; uncertainty of factor z; gravimetric volume measurement citation: mar lar win, uncertainty of factor z in the gravimetric volume measurement, acta imeko, vol. 10, no. 3, article 27, september 2021, identifier: imeko-acta-10 (2021)-03-27 section editor: andy knott, national physical laboratory, united kingdom received: may 7, 2021; in final form: august 13, 2021; published: september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: mar lar win, e-mail: marlarwin99@gmail.com 1.
introduction the iso 4787 standard describes the general equation for the calculation of the volume at the reference temperature of 20 °c, V20, from the apparent mass of the water, be it contained or delivered, as follows from [1], b.1:

V20 = (Il − Ie) × [1 / (ρw − ρa)] × (1 − ρa/ρb) × [1 − γ(t − 20)] , (1)

where Il is the balance reading of the vessel containing water (g), Ie is the balance reading of the empty vessel (g), ρw is the density (g/ml) of the water at temperature t, ρa is the density of air (g/ml), ρb is the actual density of the reference weights (8 g/ml), γ is the coefficient of cubic thermal expansion of the material (°c−1), and t is the temperature of the water used in the testing (°c). since this equation is somewhat complicated to work with, the factor Z is provided to simplify the volume calculation:

V20 = m Z [1 − γ(t − 20)] , (2)

where m is the net weighed value Il − Ie and Z is the conversion factor from the mass of the water to volume at the measurement temperature. 2. determination of factor z the factor Z depends on the air density, water density, water temperature, and the density of the reference weights. it can be derived from (1) as follows:

Z = [1 / (ρw − ρa)] × (1 − ρa/ρb) . (3)

to determine the value of factor Z, the following parameters must be taken into account:
• density of air ρa in relation to atmospheric pressure, temperature and a relative humidity of 40 % – 90 %;
• density of water ρw in relation to temperature;
• density of the reference weights ρb of the balance used.
2.1. determining the air density the air density ρa in kg m−3 can be determined with sufficient uncertainty from the following cipm approximation formula for air density [2], e.3-1:

ρa = (0.348 48 P − 0.009 hr × e^(0.061×t)) / (273.15 + t) , (4)

where P is the atmospheric pressure (hpa), hr is the relative humidity (%), and t is the atmospheric temperature (°c).
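as a numerical cross-check, the cipm approximation (4) is easy to evaluate directly; the sketch below (the function name and the spot-check values are mine, not from the paper) returns the density in kg m−3, which divided by 1000 gives the g/ml used in the tables:

```python
import math

def air_density(p_hpa, rh_percent, t_c):
    """CIPM approximation formula (4): air density in kg/m^3.

    p_hpa: atmospheric pressure in hPa
    rh_percent: relative humidity in %
    t_c: air temperature in deg C
    """
    return (0.34848 * p_hpa
            - 0.009 * rh_percent * math.exp(0.061 * t_c)) / (273.15 + t_c)

rho_a = air_density(1013.25, 50.0, 20.0)
print(f"{rho_a:.4f} kg/m^3")  # ~1.1993 kg/m^3, i.e. ~0.0012 g/ml
```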
abstract in the gravimetric volume measurement method, the factor z is generally used to facilitate an easy conversion from the apparent mass obtained using a balance to the liquid volume. the uncertainty of the measurement used for the liquid volume can be divided into two specific contributions: one from the components related to the mass measurements and one from those related to the mass-to-volume conversion. however, some iso standards and calibration guides have suggested that the uncertainty due to the factor z is generally neglected in the uncertainty calculation pertaining to gravimetric volume measurement. this paper describes the combined effects of the density of the water, the density of the reference weights, and the air buoyancy on the uncertainty of factor z in terms of how they subsequently affect the uncertainty of the measurement results. table 1 presents the air density to temperature relationship at 1,013.25 hpa and a 50 % relative humidity. 2.2 determining the water density the density of air-free water ρw′ at a pressure of 1,013.25 hpa and a temperature t between 0 °c and 40 °c expressed in terms of the its-90 is given by tanaka et al. [3] as follows:

ρw′ = a5 [1 − (t + a1)²(t + a2) / (a3(t + a4))] , (5)

where a1 = (−3.983 035 ± 0.000 67) °c, a2 = 301.797 °c, a3 = 522 528.9 °c², a4 = 69.348 81 °c, a5 = (0.999 974 950 ± 0.000 000 84) g/ml, and t is the water temperature in °c. since the pure water that is used in the laboratory is generally air-saturated, the density must be corrected. to adjust the air-free water density as described in (5) to air-saturated water between 0 °c and 25 °c (the standard laboratory condition), we can use the following [3], 5.2:

Δρ = s0 + s1 × t , (6)

where s0 = −4.612 × 10−3 kg m−3 and s1 = 0.106 × 10−3 kg m−3 °c−1.
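a minimal sketch of (5) and (6) (the function names are mine; the pressure correction (7) is neglected here, i.e. the result holds at 1,013.25 hpa):

```python
def water_density_airfree(t):
    """Tanaka et al. formula (5): density of air-free water in g/ml at 1,013.25 hPa."""
    a1, a2, a3, a4, a5 = -3.983035, 301.797, 522528.9, 69.34881, 0.999974950
    return a5 * (1.0 - (t + a1) ** 2 * (t + a2) / (a3 * (t + a4)))

def water_density_airsat(t):
    """Formula (6): air-saturation correction, valid between 0 and 25 deg C."""
    delta_rho_kg_m3 = -4.612e-3 + 0.106e-3 * t        # correction in kg/m^3
    return water_density_airfree(t) + delta_rho_kg_m3 * 1e-3  # converted to g/ml

print(round(water_density_airsat(20.0), 8))  # ~0.998 204 25 g/ml, cf. table 2
```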
moreover, since water is slightly compressible, a small correction may be required under typical laboratory conditions. the compressibility factor Fc [3], 5.3, for the water density used at different pressures is as follows:

Fc = 1 + (k0 + k1 t + k2 t²) ΔP , (7)

where ΔP = P − 101 325 pa, k0 = 50.74 × 10−11 pa−1, k1 = −0.326 × 10−11 pa−1 °c−1, and k2 = 0.004 16 × 10−11 pa−1 °c−2. the density of air-saturated water is then

ρw = ρw′ × Fc + Δρ , (8)

where ρw is the density of air-saturated water and ρw′ is the density of air-free water. table 2 presents examples of water density as a function of water temperature for distilled water at 1,013.25 hpa after correction for any dissolved gasses in the water. meanwhile, table 3 presents examples of calculated air-saturated water density as a function of water temperature and pressure after applying corrections for the dissolved gasses in the water. 2.3 density of the reference weights the density of the reference weights/mass pieces ρb that are used for the balance calibration is normally presented in the calibration certificate of the weights. alternatively, the uncertainties corresponding to the used weight class according to oiml r 111-1 [2] can be used. if the density of the reference weights is not known, 8.0 g/ml is generally used. 2.4 determining the factor z values the values of the conversion factor Z (ml/g) as a function of temperature and pressure as given in the existing literature mainly pertain to distilled water. when liquids other than distilled water are used, the correction factors (factor Z) for the specific liquid must be determined. table 4 presents the factor z values in relation to temperature and pressure. the calculations were based on (3) and (8). 3. the uncertainty of factor z 3.1 sources of uncertainty in factor z having identified the input quantities of factor z using (3), it is possible to determine the sources of uncertainty pertaining to the different input quantities, which are: table 1.
air density values as a function of temperature at 1,013.25 hpa and a 50 % relative humidity.

temperature (°c)    density of air (g/ml)
15                  0.001 225
20                  0.001 204
25                  0.001 184

table 2. water density values after correction for air-saturated distilled water.

water temperature (°c)    air-free water density (g/ml)    correction for air saturation (g/ml)    air-saturated water density (g/ml)
15                        0.999 102 57                     0.000 003 02                            0.999 099 55
20                        0.998 206 75                     0.000 002 49                            0.998 204 25
25                        0.997 047 02                     0.000 001 96                            0.997 045 06

table 3. air-saturated water density as a function of water temperature and pressure.

water temperature (°c)    air-saturated water density (g/ml) at atmospheric pressure of
                          900 hpa         1,013.25 hpa    1,050 hpa
15                        0.999 094 26    0.999 099 55    0.999 101 27
20                        0.998 199 07    0.998 204 25    0.998 205 94
25                        0.997 039 96    0.997 045 06    0.997 046 72

table 4. factor z values as a function of water temperature and pressure.

water temperature (°c)    value of factor z (ml/g) at atmospheric pressure of
                          900 hpa      1,013.25 hpa    1,050 hpa
15                        1.001 845    1.001 958       1.001 995
20                        1.002 745    1.002 858       1.002 895
25                        1.003 912    1.004 026       1.004 062
27                        1.004 448    1.004 562       1.004 599

• water density • air density • density of the reference weights. the uncertainty of factor z can be determined using the gum law of propagation of uncertainty [4] as follows:

u(Z) = √[ (∂Z/∂ρa × u(ρa))² + (∂Z/∂ρw × u(ρw))² + (∂Z/∂ρb × u(ρb))² ] , (9)

where u(ρa) is the standard uncertainty of the air density (g/ml), u(ρw) is the standard uncertainty of the water density (g/ml), and u(ρb) is the standard uncertainty of the density of the reference weights (g/ml).
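the table 4 entries can be reproduced from (3) and the densities above; a minimal sketch (the function name is mine):

```python
def factor_z(rho_w, rho_a, rho_b=8.0):
    """Formula (3): conversion factor Z in ml/g; all densities in g/ml."""
    return (1.0 / (rho_w - rho_a)) * (1.0 - rho_a / rho_b)

# 20 deg C, 1,013.25 hPa: densities taken from tables 1 and 3
z = factor_z(rho_w=0.99820425, rho_a=0.001204)
print(round(z, 6))  # ~1.002858 ml/g, matching table 4
```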
3.2 uncertainty of air density at a relative humidity of hr = 0.5 (50 %), a temperature of 20 °c and a pressure of 1,013.25 hpa, the uncertainty of the air density can be estimated according to oiml r 111 [2], c.6.3-3, as follows:

u(ρa)/ρa = √[ (1 × 10−4)² + (−3.4 × 10−3 × u(t))² + (1 × 10−3 × u(P))² + (−10−2 × u(hr))² ] , (10)

where u(t) is the uncertainty of the air temperature t (°c), u(P) is the uncertainty of the air pressure P (hpa), and u(hr) is the uncertainty of the relative humidity hr (as a fraction). for the uncertainty values (k = 2) from the calibration of the pressure, temperature and humidity sensors, the uncertainty due to the temperature of the air is 0.1 °c, the uncertainty due to the barometric pressure is 0.1 hpa, and the uncertainty due to the relative humidity is 1 %. this yields the following:

u(ρa) = 0.0012 g/ml × √[ (1 × 10−4)² + (−3.4 × 10−3 × 0.1/2)² + (1 × 10−3 × 0.1/2)² + (−1 × 10−2 × 0.01/2)² ] = 2.52 × 10−7 g/ml .

3.3 uncertainty of water density the uncertainty of the water density can be evaluated from the formulation given by euramet cg 19 [5], 7.3.3, as follows:

u(ρw) = √[ u²(ρw,f) + u²(ρw,t) + u²(δρw) ] , (11)

where u(ρw,t) = u(tw) × β × ρw(tw) and

u(tw) = [ (U(ther)/k)² + u(res)² ]^0.5 = [ (0.01 °c / 2)² + (0.01 °c / (2√3))² ]^0.5 = 0.006 °c .

the expansion coefficient can be estimated as follows:

β = (−0.117 6 t² + 15.846 t − 62.677) × 10−6 °c−1 = (−0.117 6 × 20² + 15.846 × 20 − 62.677) × 10−6 °c−1 = 0.000 21 °c−1

ρw(tw) = 0.998 204 25 g/ml (from table 3)

u(ρw,t) = 0.006 × 0.000 21 × 0.998 204 25 = 1.26 × 10−6 g/ml ≈ 1 × 10−6 g/ml .

the standard uncertainty related to u(δρw) could range from a few ppm for highly pure water (measured by means of a high-resolution density meter) to 10 ppm for distilled or de-ionised water, provided that the conductivity is less than 5 µs/cm, meaning it is assumed to be 5 × 10−6 g/ml.
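the u(ρa) estimate of section 3.2 can be reproduced with the oiml relation (10); in this sketch (the function name is mine) the k = 2 calibration uncertainties are halved by the caller, and the humidity uncertainty is a fraction:

```python
import math

def u_air_density(rho_a, u_t, u_p, u_hr):
    """OIML R 111-1, C.6.3-3 (formula (10)): standard uncertainty of the air density.

    rho_a in g/ml; u_t in deg C, u_p in hPa, u_hr as a fraction (all at k = 1).
    """
    rel = math.sqrt((1e-4) ** 2 + (3.4e-3 * u_t) ** 2
                    + (1e-3 * u_p) ** 2 + (1e-2 * u_hr) ** 2)
    return rho_a * rel

u = u_air_density(0.0012, 0.1 / 2, 0.1 / 2, 0.01 / 2)
print(f"{u:.3g} g/ml")  # ~2.5e-07 g/ml, in line with the 2.52e-07 quoted above
```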
meanwhile, u(ρw,f) is the estimated standard uncertainty of the formulation (4.5 × 10−7 g/ml), u(ρw,t) is the uncertainty due to the temperature of the water (1 × 10−6 g/ml), and u(δρw) is the uncertainty due to the purity of the water (5 × 10−6 g/ml) [5], 8.2.3. this yields the following:

u(ρw) = √[ (4.5 × 10−7)² + (1 × 10−6)² + (5 × 10−6)² ] g/ml = 5.12 × 10−6 g/ml . (12)

3.4 uncertainty of the reference weights the uncertainty of the density of the weights/mass pieces is presented in the calibration certificate of the weights, or alternatively, according to oiml r 111-1, the uncertainty of stainless-steel weights of class e2 is 140 kg/m³ (k = 2). for example, the value presented in the calibration certificate of the set of weights used is 0.06 g/ml (k = 2):

u(ρb) = 0.06/2 g/ml = 0.03 g/ml . (13)

alternatively, the uncertainty of the density of the weights, as presented in oiml r 111-1, can also be used. 3.5 calculating the uncertainty of factor z table 5 presents example data for the calculation of the uncertainty of factor z for air-saturated distilled water at a temperature of 20 °c and a barometric pressure of 1,013.25 hpa.

sensitivity coefficient of air density: ∂Z/∂ρa = [1/(ρw − ρa)] × (−1/ρb) + (1 − ρa/ρb) × [1/(ρw − ρa)²] = 0.880 500 ml²/g²

sensitivity coefficient of water density: ∂Z/∂ρw = (1 − ρa/ρb) × [−1/(ρw − ρa)²] = −1.005 876 ml²/g²

sensitivity coefficient of density of reference weights: ∂Z/∂ρb = [1/(ρw − ρa)] × (ρa/ρb²) = 1.887 6 × 10−5 ml²/g²

table 5. example data.
parameter         value               uncertainty (k = 1)
air density       0.001 204 g/ml      2.52 × 10−7 g/ml
water density     0.998 204 g/ml      5.12 × 10−6 g/ml
mass density      8.0 g/ml            0.03 g/ml
factor z          1.002 858 ml/g      (refer to the conditions in table 4)

uncertainty of factor z (k = 1):

u(Z) = √[ (∂Z/∂ρa × u(ρa))² + (∂Z/∂ρw × u(ρw))² + (∂Z/∂ρb × u(ρb))² ] = √[ (0.880 500 × 2.52 × 10−7)² + (−1.005 876 × 5.12 × 10−6)² + (1.887 6 × 10−5 × 0.03)² ] ml/g = 5.2 × 10−6 ml/g .

4. discussion as shown in table 4, the factor z variation in distilled water due to water temperature is approximately 2.0 × 10−3 ml/g, or 2.0 × 10−4 ml/(g °c), between 15 °c and 25 °c. this implies that when the uncertainty of the temperature measurement of distilled water is 0.1 °c, this will present an uncertainty of approximately 0.002 % following the conversion to volume. meanwhile, the influence of barometric pressure on the factor z is approximately 1 × 10−4 ml/g, or 0.01 % per pressure change of 100 hpa, between 900 hpa and 1,050 hpa. the uncertainty of the pressure measurement in the laboratory is typically 5 hpa. this will contribute an uncertainty of less than 0.000 5 % to the conversion to volume. 5. conclusions the factor z depends not only on the density of the liquid compensated for water temperature and pressure but also on the air density and the density of the weights used for the calibration of the balance. however, the contribution to the evaluation of uncertainty resulting from the factor z for gravimetric volume measurement is extremely small compared to that resulting from the operator process (i.e., meniscus reading) [5], 8.2.7, and the mass of water determination, provided that the measuring instruments (balance, barometer, thermometer, etc.) are used in accordance with the specifications given in the standard operating procedure (sop).
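the combined u(Z) of section 3.5 can be checked without deriving the analytic sensitivity coefficients, by propagating (9) with central finite differences (a sketch; the step size and function names are mine):

```python
import math

def factor_z(rho_w, rho_a, rho_b):
    """Formula (3): conversion factor Z in ml/g."""
    return (1.0 / (rho_w - rho_a)) * (1.0 - rho_a / rho_b)

def u_factor_z(rho_w, rho_a, rho_b, u_w, u_a, u_b, h=1e-7):
    """GUM propagation (9); sensitivity coefficients by central differences."""
    args = (rho_w, rho_a, rho_b)

    def deriv(i):
        step = h * max(1.0, abs(args[i]))
        lo, hi = list(args), list(args)
        lo[i] -= step
        hi[i] += step
        return (factor_z(*hi) - factor_z(*lo)) / (2.0 * step)

    cw, ca, cb = deriv(0), deriv(1), deriv(2)
    return math.sqrt((cw * u_w) ** 2 + (ca * u_a) ** 2 + (cb * u_b) ** 2)

uz = u_factor_z(0.998204, 0.001204, 8.0, u_w=5.12e-6, u_a=2.52e-7, u_b=0.03)
print(f"u(Z) = {uz:.2g} ml/g")  # ~5.2e-06 ml/g, as in section 3.5
```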
thus, it is good practice to neglect the uncertainty of the factor z in the evaluation of uncertainty and to give only the components related to the mass measurement process [6]. acknowledgement this paper would not have been possible without the exceptional support of bunjob suktat, my former expert from physikalisch-technische bundesanstalt (ptb) on the 'ptb-nimm strengthening myanmar quality infrastructure project'. his enthusiasm, knowledge, and minute attention to detail were an inspiration and ensured my work remained on track. as my teacher and mentor, he has taught me more than i could ever give him credit for here. references [1] iso 4787:2010, laboratory glassware – volumetric instruments – methods for testing of capacity and for use. [2] oiml r 111-1, weights of classes e1, e2, f1, f2, m1, m2, m3 – metrological and technical requirements, 2004. [3] m. tanaka, g. girard, r. davis, a. peuto, n. bignell, recommended table for the density of water between 0 °c and 40 °c based on recent experimental reports, metrologia 38 (2001), pp. 301-309. doi: 10.1088/0026-1394/38/4/3 [4] jcgm 100:2008 (gum), evaluation of measurement data – guide to the expression of uncertainty in measurement. [5] euramet cg-19, version 3.0 (09/2018), guidelines on the determination of uncertainty in gravimetric volume calibration. [6] iso 8655-6:2002(e), international standard, piston-operated volumetric apparatus, part 6: gravimetric methods for the determination of measurement error. acta imeko september 2014, volume 3, number 3, 68 – 72 www.imeko.org comparison of milligram scale deadweights to electrostatic forces sheng-jui chen, sheau-shi pan, yi-ching lin center for measurement standards, industrial technology research institute, hsinchu, taiwan 300, r.o.c.
section: research paper keywords: deadweight force standard; electrostatic force actuation; capacitive position sensing; force balance citation: sheng-jui chen, sheau-shi pan, yi-ching lin, comparison of milligram scale deadweight forces to electrostatic forces, acta imeko, vol. 3, no. 3, article 14, september 2014, identifier: imeko-acta-03 (2014)-03-14 editor: paolo carbone, university of perugia received may 13th, 2013; in final form august 26th, 2014; published september 2014 copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: this work was supported by the bureau of standards, metrology and inspection (bsmi), taiwan, r.o.c. corresponding author: sheng-jui chen, e-mail: sj.chen@itri.org.tw 1. introduction micro- and nano-force measurement has been of great interest in recent years among several national measurement institutes (nmis) [1-6]. the center for measurement standards (cms) of the industrial technology research institute (itri) has established a force measurement system based on electrostatic sensing and actuation techniques. the system is capable of measuring vertical forces up to 200 µn based on a force balance method. the system mainly consists of a flexure stage, a three-electrode capacitor and a digital controller [7]. the schematic drawing of the system is shown in figure 1. the three-electrode capacitor is used simultaneously as a capacitive position sensor and an electrostatic force actuator. the position of the center electrode is measured by comparing the capacitances of the upper capacitor c1 and the lower capacitor c2 formed within the three electrodes (see figure 2). the differential capacitance was detected using an inductive-capacitive resonant bridge circuit.
the position detection is performed at a radio frequency (rf), say, 100 khz, a frequency depending on the capacitance values and the design of the sensing bridge circuit. for electrostatic force actuation, the top and bottom electrodes are driven with two high-voltage, audio-frequency sinusoidal signals to generate a compensation electrostatic force fe that balances the force under measurement fm. the balance condition fm = fe is maintained by the digital controller, which keeps the flexure stage at its zero-deflection position. some parts of the force measurement system were upgraded for performance improvements. figure 1. schematic drawing of the force measurement system. abstract this paper presents a comparison of milligram scale deadweights to electrostatic forces via an electrostatic sensing & actuating force measurement system. the electrostatic sensing & actuating force measurement system is designed for measuring forces below 200 µn with an uncertainty of a few nanonewtons. the force measurement system consists of three main components: a monolithic flexure stage, a three-electrode capacitor for position sensing and actuating, and a digital controller. the principle of force measurement used in this system is a static force balance, i.e. a force to be measured is balanced by a precisely controlled electrostatic force. four weights of 1 mg to 10 mg were tested in this comparison. the results of the comparison showed that extra stray electrostatic forces exist between the test weights and the force measurement system. this extra electrostatic force adds a bias force to the measurement result, and was different for each weight. in principle, this stray electrostatic force can be eliminated by installing a metal housing to isolate the test weight from the system. in the first section, we briefly introduce the electrostatic sensing and actuating force measurement system; then we describe the experimental setup for the comparison and the results. finally, we give a discussion and outlook. a new design of copper-beryllium flexure stage was installed in the system, which has a counter-weight balance mechanism and a lower stiffness of 13.08 n/m. figure 2 shows a picture of the new flexure stage, where the counter weight and the gold-plated cube flexure stage are visible. a new set of gold-plated, polished electrodes was assembled as a three-electrode capacitor and put into operation. the capacitance gradient of the new three-electrode capacitor was measured. 2. experimental setup in this experiment, the compensation electrostatic force is compared to the deadweight by weighing a weight with the electrostatic sensing and actuating force measurement system. 2.1. deadweight we used four wire weights with nominal mass values and shapes of 1 mg (triangle), 2 mg (square), 5 mg (pentagon) and 10 mg (triangle) to generate vertical downward forces. these weights meet the metrological requirements of oiml class e1 and were calibrated against standard weights using a mass comparator balance. the calibration results are compiled in table 1. the forces can be derived from the calibrated mass values and the local acceleration of gravity g = 9.78914 m/s² as fw = m (1 − ρa/ρw) g, where ρa and ρw are the densities of the air and the weight, respectively. these weights were loaded and unloaded by a dc-motor-actuated linear translation stage. 2.2.
electrostatic sensing & actuating force measurement system as shown in figure 3, the compensation electrostatic force fe generated by the force measurement system is determined by the following equation:

fe = ½ s1 v1² + ½ s2 v2² , (1)

where s1, s2 are the capacitance gradients of the top and the bottom capacitors c1, c2 and v1, v2 are the voltage potentials across the top and the bottom capacitors, respectively. using the parallel-plate capacitor as the model for capacitors c1 and c2,

c1(x) = ε0 a / (d − x) ≈ (ε0 a / d) [1 + x/d + (x/d)² + (x/d)³ + (x/d)⁴] , (2)
c2(x) = ε0 a / (d + x) ≈ (ε0 a / d) [1 − x/d + (x/d)² − (x/d)³ + (x/d)⁴] , (3)

where ε0 is the vacuum permittivity, a is the effective area of the electrode and d is the gap distance between electrodes when the center electrode is vertically centered. the capacitance gradients s1 and s2 can be expressed as

s1(x) = dc1/dx ≈ s0 [1 + 2(x/d) + 3(x/d)² + 4(x/d)³ + 5(x/d)⁴] , (4)
s2(x) = dc2/dx ≈ −s0 [1 − 2(x/d) + 3(x/d)² − 4(x/d)³ + 5(x/d)⁴] , (5)

where s0 = ε0 a / d² is the capacitance gradient at x = 0. to first order in x/d, the electrostatic force can be written as

fe(x) ≈ ½ s0 (v1² − v2²) + (x/d) s0 (v1² + v2²) . (6)

the voltages v1 and v2 contain the rf detection signal vd sin ωd t, the audio-frequency high-voltage actuation signals va1 sin ωa t, va2 sin ωa t and the electrodes' surface potentials vs1, vs2, namely

v1 = vd sin ωd t + va1 sin ωa t + vs1 , (7)
v2 = vd sin ωd t + va2 sin ωa t + vs2 . (8)

the high-voltage actuation signals are provided by a full-range 10 v, 16-bit resolution digital-to-analog converter within the digital controller and an ultra-low-noise high-voltage amplifier. to make the electrostatic force linearly proportional to a control voltage vc, we set

va1 = a1 (vb + vc) , (9)
va2 = a2 (vb − vc) , (10)

figure 2. picture of the new flexure stage. table 1. mass calibration result.
nominal mass (mg)    conventional mass (mg)    uncertainty, 95 % confidence (mg)
1                    1.00096                   0.0003
2                    2.00116                   0.0003
5                    5.00124                   0.00065
10                   10.0021                   0.00048

figure 3. three-electrode capacitor for electrostatic force actuation. where a1, a2 are amplification factors of the high-voltage amplifier. the term vb is a constant determined by the value of s0 and the upper limit of the force measurement range. taking into account the gain difference between the channels of the high-voltage amplifier and substituting equations (7)-(10) for v1 and v2 in equation (6), we obtain an equation for the electrostatic force fe:

fe = s0 a0² vb vc + (a + b) s0 a0² (vb² + vc²) + s0 vs² + b s0 vd² + (ac terms) , (11)

where a is the gain difference fraction, i.e. a = (a1 − a2)/(a1 + a2), a0 is the mean gain factor, b is the offset fraction x/d, vs² = (vs1² − vs2²)/2 and vc is the control voltage. the high-frequency ac terms at audio and rf frequencies can be omitted because they cause only negligible ac displacement modulations of the flexure stage. parameter a can be tuned very close to zero by adjusting the gain of the dac within a software program. after the tuning, parameter a was measured to be smaller than 5 × 10−5, contributing a negligible force uncertainty. instead of using an optical interferometer, the position of the center electrode is measured from the difference between c1 and c2 with a differential capacitance bridge circuit [7]. hence, any deviation of the center electrode from the vertical center position can be detected by the bridge circuit. with a commercially available optical interferometer, the offset adjustment could be quite difficult and ambiguous. the effect of parameter (a + b) can be tested by setting vc = 0, modulating vb with a square-wave profile and observing the displacement signal of the flexure stage. for vb = 2.0, we did not observe any displacement due to the modulated vb.
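the linearisation in (6) can be checked numerically against the exact parallel-plate gradients behind (2)-(5); the electrode area a below is a hypothetical value chosen so that s0 is close to the measured 2.876 × 10−8 f/m, and d = 0.5 mm is the nominal gap quoted later in the text:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def fe_exact(x, v1, v2, a=8.1e-4, d=0.5e-3):
    """fe = 1/2*s1*v1^2 + 1/2*s2*v2^2 with the exact gradients of (2)-(3)."""
    s1 = EPS0 * a / (d - x) ** 2
    s2 = -EPS0 * a / (d + x) ** 2
    return 0.5 * s1 * v1 ** 2 + 0.5 * s2 * v2 ** 2

def fe_linear(x, v1, v2, a=8.1e-4, d=0.5e-3):
    """First-order expansion (6): 1/2*s0*(v1^2 - v2^2) + (x/d)*s0*(v1^2 + v2^2)."""
    s0 = EPS0 * a / d ** 2
    return 0.5 * s0 * (v1 ** 2 - v2 ** 2) + (x / d) * s0 * (v1 ** 2 + v2 ** 2)

x = 0.5e-6  # hypothetical 0.5 um deflection, 0.1 % of the gap
print(fe_exact(x, 20.0, 18.0), fe_linear(x, 20.0, 18.0))  # agree to better than 0.1 %
```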
the remaining factors s0, vc and vs dominate the uncertainty of the electrostatic force fe. the capacitance gradient s0 was measured using a weight of 1 mg and an optical interferometer. the weight of 1 mg was cyclically loaded and unloaded to the system by a motorized linear stage to produce a deflection modulation. the deflection was measured by the optical interferometer and the corresponding capacitance variation was measured by a calibrated precision capacitance bridge. to reduce the effect of seismic noise and drift noise from the optical interferometer or the flexure stage itself, both the deflection Δx and the capacitance variation Δc are measured from the difference between the average values of mass-loaded data and two adjacent mass-unloaded data. the capacitance gradient s0 was obtained by calculating the ratio Δc/Δx, which is shown in figure 4. using (2) and (3), the capacitance gradient estimated by Δc/Δx can be expressed as

s = [c1(x) − c1(0)]/x = s0 [1 + x/d + (x/d)² + (x/d)³ + ...] ≈ s0 (1 + x/d) , (12)

from (12), s deviates from s0 by a small portion s0 x/d. using the nominal design value of d = 0.5 mm, the ratio x/d is 0.15 %. this ratio can be reduced by using a smaller Δx for measuring the capacitance gradient. the measured capacitance gradient s has a mean value of s = 2.876 × 10−8 f/m and a standard deviation of σs = 0.008 × 10−8 f/m. therefore, the standard uncertainty of the capacitance gradient is u(s) = σs/√n = 4 × 10−12 f/m, with n = 369 in this measurement. the uncertainty u(vc) of the control voltage vc is calculated from the dac resolution of 0.3 mv as u(vc) = 0.088 mv, which contributes 1 nn. for the surface potential noise vs, the current actuation scheme prevents the surface potential effect from being coupled to and amplified by the control voltage vc, as was the case in the previous electrostatic actuation scheme [7], where vs was amplified as s vc vs.
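two of the quoted numbers can be checked in one line each; the rectangular-distribution divisor 2√3 is my assumption for how the 0.3 mv resolution was converted (the quoted 0.088 mv agrees to rounding), and the surface-potential force uses s·vs² with the values given in the following sentences:

```python
import math

# standard uncertainty of the control voltage from the 0.3 mV DAC resolution,
# assuming a rectangular distribution (divisor 2*sqrt(3) is my assumption)
u_vc = 0.3e-3 / (2.0 * math.sqrt(3.0))   # volts
print(f"u(vc) = {u_vc * 1e3:.4f} mV")     # ~0.0866 mV, cf. the quoted 0.088 mV

# surface-potential-induced electrostatic force for vs = 0.18 V
f_vs = 2.876e-8 * 0.18 ** 2               # s * vs^2, in newtons
print(f"f_vs = {f_vs * 1e9:.2f} nN")      # ~0.93 nN, i.e. "about 0.9 nN"
```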
the surface potential is reported to range from 20 mv to 180 mv [8, 9]. taking vs = 0.18 v for example and s = 2.876 × 10⁻⁸ f/m, the surface-potential-induced electrostatic force is about 0.9 nn.

2.3. null deflection control

the force under measurement fm is balanced by fe through the null deflection control. figure 5 shows the block diagram of the null deflection control. the transfer functions of the main components, namely the flexure stage, capacitive position sensor, loop filter and electrostatic force actuator, are represented by g, h, d and a respectively. the term xn represents a deflection noise, which may be contributed by the seismic vibration noise and the thermal noise of the flexure stage itself. the relation between fe and fm appears to be

fe(s) = [hda/(1 + t(s))] xn(s) + [t(s)/(1 + t(s))] fm(s) , (13)

where t(s) = gdha is the open-loop transfer function of the control system, and fe(s), xn(s) and fm(s) are the laplace transforms of fe, xn and fm, respectively. within the control bandwidth, i.e. for t(s) >> 1, the relation between fe and fm can be approximated as

fe ≈ k·xn + fm , (14)

where k is the stiffness of the flexure stage. equation (14) shows that the null deflection control automatically generates a compensation force fe to balance the force under measurement fm.

figure 4. capacitance gradient calculated from Δc/Δx. the mean capacitance gradient is s0 = 2.876 × 10⁻⁸ f/m, the standard deviation σs = 0.008 × 10⁻⁸ f/m and the standard deviation of the mean σs/√n = 4 × 10⁻¹² f/m (n = 369 in this measurement).
to reduce the influence from the noise xn, fm is measured in a short period of time by comparing fe(t0) before fm is applied and fe(t1) after fm is applied:

Δfe = fe(t1) − fe(t0) = k [xn(t1) − xn(t0)] + fm , thus fm = Δfe − k·xnt . (15)

the term xnt represents the temporal variation of xn during the measurement time frame. from one deflection measurement data set taken over 8 h, using a window of 300 s to evaluate xnt, we obtained a standard deviation of 0.33 nm for xnt. with a measured value k of 13.0 n/m, the standard deviation of the xnt-equivalent force noise is 4.3 nn. table 2 lists the main sources of uncertainty of the measured fm.

2.4. weighing process

each weight was loaded for 100 seconds and unloaded for 100 seconds. the compensation electrostatic force was calculated from the control voltage vc. figure 6 shows the control voltage vc acquired during one weighing cycle. the voltage difference Δvc was determined from one weight-loaded segment and its two adjacent weight-unloaded segments as

Δvc = vcb − (vca1 + vca2)/2 . (16)

the weighing cycle was repeated for a long period of time in order to evaluate the stability and uncertainty of the system.

3. results

figure 7 shows the result of one weighing run for the weight of 1 mg. the measurement was done over three days. for this run, the measured electrostatic force was fe = (9,782.6 ± 6.7) nn, where the given uncertainty is one standard deviation. the forces produced by the weights are estimated as fw = mg (1 − ρair/ρmass), where the air buoyancy was taken into consideration. the comparison results are compiled in table 3. in general, the electrostatic force has a smaller value than the deadweight. for the comparisons of the weights 1 mg and 10 mg, the force differences, defined as fe − fw, are similar, with magnitudes close to 10 nn; both these weights are triangle-shaped with similar dimensions. for the comparisons of the weights 2 mg and 5 mg, the force differences are rather larger; these weights are in the shapes of a square and a pentagon, respectively.
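as a quick check of the deflection-noise figure quoted above (an illustrative sketch, not the authors' code):

```python
# force-noise equivalent of the temporal deflection noise x_nt, from eq. (15):
# f_m = Δf_e − k·x_nt, so a deflection noise of 0.33 nm maps to a force noise k·x_nt
k = 13.0          # measured stiffness of the flexure stage, N/m
x_nt = 0.33e-9    # standard deviation of x_nt over a 300 s window, m
f_noise = k * x_nt    # ≈ 4.3e-9 N, i.e. the 4.3 nN quoted in the text
```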
the weight of 5 mg has the largest force difference of about 200 nn (20 µg), and it is the biggest weight in terms of wire length and shape area dimensions. a possible explanation for this force difference is that there might be some extra electrostatic or magnetic force between the weight and its surroundings. due to its size, the weight of 5 mg has the shortest distances to, and possibly experiences the strongest electrostatic/magnetic interactions with, its surroundings.

figure 5. block diagram of the null deflection control (flexure stage g in m/n, capacitive position sensor h in v/m, loop filter d in v/v, electrostatic force driver a in n/v). some noise sources are omitted for simplicity.

figure 6. capacitive displacement and control voltage vc during one weighing cycle.

table 2. uncertainty budget for measured fm
source of uncertainty        standard uncertainty (n)
capacitance gradient s0      1.4 × 10⁻⁴ · fe
16-bit dac resolution        1 × 10⁻⁹
surface potential vs         1.8 × 10⁻⁹
displacement noise xnt       4.3 × 10⁻⁹
combined standard uncertainty: u(fm) = √[(4.8 × 10⁻⁹)² + (1.4 × 10⁻⁴ · fe)²] n

figure 7. a data run for 1 mg weighing.

4. discussion and outlook

a force measurement system based on the electrostatic sensing and actuation techniques has been built and upgraded. the system is enclosed in a vacuum chamber which resides on a passive low-frequency vibration isolation platform. the voltage actuation scheme has been modified to allow the decoupling between the surface potential vs and the actuation voltage, leading to a reduction in the drift and bias of the compensation electrostatic force. the system is stable over a long period of time.
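the combination of the table 2 components can be verified in a few lines (a sketch; the numbers are those given in the budget, with fe taken from the 1 mg run as an example):

```python
import math

# constant uncertainty terms from table 2, combined in quadrature
u_dac = 1.0e-9    # 16-bit dac resolution, N
u_vs = 1.8e-9     # surface potential, N
u_xnt = 4.3e-9    # displacement noise, N
u_const = math.sqrt(u_dac**2 + u_vs**2 + u_xnt**2)   # ≈ 4.8e-9 N

# full combined uncertainty including the fe-proportional gradient term
rel_s0 = 1.4e-4            # relative contribution of the capacitance gradient
fe = 9782.6e-9             # measured force for the 1 mg weight, N
u_fm = math.sqrt(u_const**2 + (rel_s0 * fe)**2)      # ≈ 5e-9 N at this load
```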
however, the cause of the extra electrostatic/magnetic force observed in the weighing test is still unclear, and an investigation into it is underway. a new design of the apparatus's housing is being fabricated; it is designed to isolate most of the apparatus from its surroundings and expose only the force loading area. in addition, other parameters such as alignment factors, the capacitance gradient and its frequency dependence will also be re-verified and studied further to find the cause of the force difference.

acknowledgement

this work was supported by the bureau of standards, metrology and inspection (bsmi), taiwan, r.o.c.

references

[1] newell d b, kramar j a, pratt j r, smith d t and williams e r, "the nist microforce realization and measurement project", ieee trans. instrum. meas. 52 (2003) 508.
[2] kim m-s, choi j-h, park y-k and kim j-h, "atomic force microscope cantilever calibration device for quantified force metrology at micro- or nano-scale regime: the nano force calibrator (nfc)", metrologia 43 (2006) 389-395.
[3] leach r, chetwynd d, blunt l, haycocks j, harris p, jackson k, oldfield s and reilly s, "recent advances in traceable nanoscale dimension and force metrology in the uk", meas. sci. technol. 17 (2006) 467-476.
[4] choi j-h, kim m-s, park y-k and choi m-s, "quantum-based mechanical force realization in piconewton range", appl. phys. lett., 90 (2007) 073117.
[5] nesterov v, "facility and methods for the measurement of micro and nano forces in the range below 10⁻⁵ n with a resolution of 10⁻¹² n (development concept)", meas. sci. technol., 18 (2007) 360-366.
[6] m-s kim, j.r. pratt, u. brand and c.w.
jones, "report on the first international comparison of small force facilities: a pilot study at the micronewton level", metrologia, 49 (2012), 70.
[7] s-j chen and s-s pan, "a force measurement system based on an electrostatic sensing and actuating technique for calibrating force in a micronewton range with a resolution of nanonewton scale", meas. sci. technol., 22 (2011), 045104.
[8] j.r. pratt and j.a. kramar, "si realization of small forces using an electrostatic force balance", proc. 18th imeko world congress, (17-22 september 2006, rio de janeiro, brazil).
[9] s.e. pollack, s. schlamminger and j.h. gundlach, "temporal extent of surface potentials between closely spaced metals", phys. rev. lett. 101 (2008), 071101.

table 3. comparison results, unit in nn.
          1 mg           2 mg            5 mg            10 mg
fw        9797.1 ± 2.9   19586.7 ± 2.9   48950.5 ± 6.4   97897.3 ± 4.7
fe        9782.6 ± 6.7   19527.0 ± 4.1   48751.4 ± 8.2   97886.4 ± 16.5
fe − fw   -14.5          -59.7           -199.1          -10.9

linear regression analysis and the gum: example of temperature influence on force transfer transducers

acta imeko | www.imeko.org december 2020 | volume 9 | number 5 | 407
acta imeko issn: 2221-870x december 2020, volume 9, number 5, 407-413

d. röske1
1 physikalisch-technische bundesanstalt (ptb), braunschweig, germany, dirk.roeske@ptb.de

abstract: the deflection of strain-gauge force and torque transducers (the zero-reduced output signal for a given mechanical load) is dependent on the ambient temperature. this is also true of high-precision force transfer transducers used to compare force standard machines. to estimate the extent to which temperature deviations during measurements on different machines affect comparison results and – if necessary – to correct for such deviations, it is important to know the influence of the temperature on the deflection.
this effect is usually investigated in special temperature chambers in which the transducer is exposed to various temperatures within a given temperature range while the machine is operated under unchanged laboratory conditions. the regression analysis of the results allows the temperature coefficient to be determined, including an uncertainty analysis. this was done for five force transfer transducers used for a bilateral comparison between npl (uk) and ptb (germany).

keywords: comparison; temperature influence; regression; uncertainty

1. introduction

the manufacturers of strain-gauge force and torque transducers devote significant effort to minimising the influence of the ambient temperature on the measurement results of their devices. the best transducers from this group are very well compensated for temperature influences. however, no measurements of the transducer's temperature behaviour are taken, and only the upper limit of the absolute value of the sensitivity's temperature coefficient (the relative change of the deflection with temperature) is given in the transducer's specifications. for example, a range between −0.02 %/(10 k) and +0.02 %/(10 k) is given for the hbm z30a transducer [1], while a range between −0.01 %/(10 k) and +0.01 %/(10 k) is given for the gtm ktn transducer [2]. this means that the absolute value of the temperature coefficient should be below 1 … 2 × 10⁻⁵ k⁻¹. for a 1 k temperature change, this is more than the standard uncertainty of the best force standard machines.

2. measurement results

for a comparison measurement between ptb's new 200 kn force standard machine [3] and the relevant standard machines of the npl, five high-precision compression force transfer transducers (one z30a, four ktns) were selected. these transducers were investigated with respect to their temperature behaviour in a temperature chamber inside the 200 kn force standard machine.
the temperature range was defined as 20 °c … 25 °c with additional measurement points between these threshold values at 21 °c, 22 °c and 23 °c. the resulting deflections di in mv/v for the 50 kn ktn transducer are given in table 1 for the five temperatures Ti and for two load steps (20 kn and 50 kn).

table 1: deflections di in mv/v for the 50 kn transducer at different temperatures Ti and for two load steps
Ti in °c   di at 20 kn in mv/v   di at 50 kn in mv/v
20.03      0.800 996             2.003 036
21.00      0.800 992             2.003 028
21.99      0.800 988             2.003 011
23.06      0.800 984             2.003 002
24.90      0.800 978             2.002 981

3. preliminary evaluation

in the evaluation, a linear dependency between the temperature T and the deflection d is assumed to exist within a sufficiently small temperature range (and possibly beyond this range). in the first approach, the data can be evaluated using the linear fit functions of programming languages such as python and r or standard software programs such as excel, all of which are based on the least-squares method and require a very small number of code lines, see figure 1. excel even contains a function wizard that can find the right function and define the necessary arguments. the results agree well at a relative level of 10⁻¹² and below. the limitation is caused by the floating-point accuracy of the processor, operating system and software used; however, this is not a problem for the uncertainty in question in this investigation.

figure 1: parameters (for the 20 kn force in table 1) of a linear fit in python (top), r (middle) and excel (bottom, german user interface)

most linear-fit packages and functions contain additional information about the result, such as the residual sum of squares (rss) of the least-squares fit in python and excel and the average variation of points around the fitted regression line, the residual standard error (rse), in r.
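as a minimal sketch of this first approach (numpy's polyfit standing in for the fit functions shown in figure 1; the data are those of table 1 at the 20 kn load step):

```python
import numpy as np

# deflection data for the 50 kN transducer at the 20 kN load step (table 1)
T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])                  # °C
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])   # mV/V

q, r = np.polyfit(T, d, 1)        # least-squares slope and intercept

# model-quality check discussed below: (SSD − RSS) / SSD, close to 1 for a good fit
rss = np.sum((d - (q * T + r))**2)   # residual sum of squares
ssd = np.sum((d - d.mean())**2)      # sum of squared deviations from the mean
quality = (ssd - rss) / ssd
# q ≈ -3.699e-6 (mV/V)/K, r ≈ 0.801070 mV/V, quality ≈ 0.997
```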
these values can be used to check how well the model fits the data; this step is necessary to ensure that the right model is used for the data. the residual sum of squares of the least-squares fit in excel yields a value of 6.07 × 10⁻¹³ (mv/v)² for the data in table 1 (for 20 kn). to improve comparability, this value is linked to another value – namely, the sum of squared deviations (ssd) from the mean. the result is 1.95 × 10⁻¹⁰ (mv/v)², and the relative deviation (ssd − rss)/ssd = 0.997 is a measure of the model quality: the closer this value is to 1, the better the model is. nevertheless, this calculation does not consider the uncertainty of the input data. it is expected that the resulting parameters will also have an associated uncertainty and that this value will depend on the uncertainty of the input data. to obtain the uncertainty of the results of the fitting procedure, the following regression analysis was carried out using a linear regression model and a least-squares approach in combination with the gum, the guide to the expression of uncertainty in measurement [4].

4. regression analysis

we consider five pairs of input (Ti) and output (di) values with their associated uncertainties (u)

(Ti; u(Ti)), (di; u(di)), i = 1, 2, … 5 . (1)

here, the aim is to find a linear approximation function d̃(T) that best describes the data

d̃(T) = q·T + r (2)

with the coefficients q (slope) and r (intercept). the values of these coefficients should be determined in such a way that a minimum condition is met. in the context of the model considered here, we require the sum of the squared deviations between the measured outputs and the corresponding outputs calculated in accordance with (2) to be a minimum ("least squares")

∑i=1..n (di − d̃(Ti))² → min . (3)

with (2), equation (3) can also be written as

∑i=1..n (di − q·Ti − r)² → min .
(4)

the condition necessary for the minimum is that the partial derivatives of the given term with respect to the unknown values q and r be zero. this yields

2 ∑i (di − q·Ti − r)(−Ti) = 0 , −2 ∑i (di − q·Ti − r) = 0 , (5)

a system of two equations via which the values of the coefficients can be determined. a more compact representation of (5) is

q·mean(T²) + r·mean(T) = mean(T·d) , q·mean(T) + r = mean(d) (6)

with

mean(Tᵏ) = (1/n) ∑i Tiᵏ , mean(d) = (1/n) ∑i di (7)

and

mean(T·d) = (1/n) ∑i Ti·di . (8)

the condition sufficient for the minimum value is that the second derivative be positive. equation (4) yields

∂²/∂q² ∑i (di − q·Ti − r)² = 2 ∑i Ti² > 0 , (9)

∂²/∂r² ∑i (di − q·Ti − r)² = 2n > 0 ,

showing that the condition sufficient for the minimum value is fulfilled independently of the parameter values. from (6), the coefficients q and r can be calculated. they are functions of the mean(Tᵏ) (k ∈ {1, 2}) and mean(T·d) and, due to (7) and (8), functions of the Ti and di. this means that their uncertainties can, in principle, be calculated by applying the standard methods of the gum. the solution of (6) can be written as

q = (mean(T·d) − mean(T)·mean(d)) / (mean(T²) − mean(T)²) , r = mean(d) − q·mean(T) . (10)

it must be noted that the slope of the linear function is the same over the whole temperature range, whereas the intercept depends on the value taken for zero. if, instead of the °c scale used in table 1, the kelvin scale is used, the value of the intercept calculated will change, see figure 2.

figure 2: parameters (for the 20 kn force in table 1) of a linear fit in excel with kelvin temperatures

force transfer standards are usually stored and operated (and, if possible, transported) in a narrow temperature range from 18 °c to 28 °c (for key comparison measurements, a much smaller interval such as 20 °c ± 0.5 k may be required).
this means that the behaviour of the force transducer near 0 °c is not of interest; it is therefore not important that it be known for temperatures close to 0 k. it should be sufficient to describe the transducer's behaviour in the narrow temperature interval where it was investigated. usually, the reference temperature Tref for comparison measurements is agreed in advance when the technical protocol is compiled. then, the calculation can be carried out with the new temperatures T′i defined as T′i = Ti − Tref. then, (10) will be written as

q = (mean(T′·d) − mean(d)·mean(T′)) / (mean(T′²) − mean(T′)²) , r = mean(d) − q·mean(T′) . (11)

in a special case in which the reference temperature Tref equals the mean temperature mean(T), the mean of the new temperatures becomes zero:

mean(T′) = (1/n) ∑i Ti − Tref = mean(T) − mean(T) = 0 . (12)

in this case, (11) is simplified to

q = mean(T′·d) / mean(T′²) , r = mean(d) . (13)

it must be noted that the single temperature points may change in different measurement campaigns; for example, a new value of 24 °c may be defined instead of or in addition to 23 °c. for better comparability of results, it could be beneficial to define the number of temperature points for all such investigations in advance. the single points may then be chosen in such a way that their mean equals the reference temperature agreed. although it may be difficult to reproduce all single temperatures very accurately, the remaining deviations of 0.1 … 0.2 k should be small enough to be neglected. with (7) and (8), equations (13) can now be rewritten as

q = ∑i T′i·di / ∑i T′i² , r = (1/n) ∑i di . (14)

depending on the reference temperature chosen, (11), respectively (14), are the model functions to which the gum should be applied to find the uncertainties u(q) and u(r). here, the aim is to find the regression function d̃(T) and the uncertainties of the fitted values, preferably also given by a function u(d̃).
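the closed-form coefficients of (11) and (14) can be sketched directly (python, illustrative only; data from table 1 at 20 kn). the slope is independent of the reference temperature, and for Tref equal to the mean temperature the intercept reduces to the mean deflection:

```python
import numpy as np

T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])                  # °C
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])   # mV/V

def fit(T, d, T_ref):
    """least-squares slope and intercept for temperatures shifted by T_ref, eq. (11)."""
    Tp = T - T_ref     # shifted temperatures T'_i
    q = (np.mean(Tp * d) - d.mean() * Tp.mean()) / (np.mean(Tp**2) - Tp.mean()**2)
    r = d.mean() - q * Tp.mean()
    return q, r

q1, r1 = fit(T, d, 0.0)        # °C scale: r ≈ 0.801070 mV/V
q2, r2 = fit(T, d, T.mean())   # T_ref = mean temperature: eq. (14), r = mean(d)
```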
in this work, the more common approach (11) was used because the reference temperature Tref did not match the mean of the temperatures Ti.

5. results

the calculations were carried out under the assumption that no correlations existed between the input values T′i, di, which were treated as uncorrelated quantities. the equations (11) for the determination of q and r are quite simple, whereas those for the determination of u(q) and u(r) are rather complex. following the gum procedure, partial derivatives with respect to each of the ten input variables must be calculated. maxima, a computer algebra system [5], was used to carry out this calculation analytically, yielding terms with several hundred parts as the result for the uncertainties. although this software could have been used to calculate the complete result q, u(q), r, u(r) for all the different transducers, this would have been time-consuming. a better way was to obtain the analytical result in maxima and to use this formula in excel. the known formula was then applied as a spreadsheet function in all subsequent calculations with the same input data scheme (five pairs of values as in table 1). the results show that, if a temperature outside the interval of interest is taken as zero, the uncertainty of the intercept r will be larger than it would be if a value from the given interval were taken as the reference temperature Tref, see table 2. moreover, the uncertainty of the intercept is minimal if the reference temperature chosen is the arithmetic mean of the temperature values Ti.
table 2: values and associated standard uncertainties of the slope and intercept of a linear fit calculated for different reference temperatures
Tref in °c   q, u(q) in (mv/v)/k          r, u(r) in mv/v
-273.15      -3.699 × 10⁻⁶, 0.798 × 10⁻⁶   0.802 080, 0.000 236
0.00         -3.699 × 10⁻⁶, 0.798 × 10⁻⁶   0.801 070, 0.000 017 8
20.50        -3.699 × 10⁻⁶, 0.798 × 10⁻⁶   0.800 994, 0.000 001 9
22.20        -3.699 × 10⁻⁶, 0.798 × 10⁻⁶   0.800 988, 0.000 001 4
24.00        -3.699 × 10⁻⁶, 0.798 × 10⁻⁶   0.800 981, 0.000 002 0

the uncertainty of the slope q is not affected by the reference temperature selected; due to the linear function (2), its contribution to the uncertainty of the fitted value d̃(T′i) is calculated with the temperature T′i as a sensitivity coefficient. this means that the uncertainty contribution of the slope will be lower the closer the reference temperature Tref and the temperatures Ti are to each other. by means of the known coefficients, the regression function d̃(T) can be calculated. the known standard uncertainties of the coefficients also allow other functions to be determined – namely, functions defining a 1-σ band along the approximation function. expanded uncertainties with k = 2 yield a 2-σ band. figure 3 and figure 4 show these results for the given 50 kn transducer at the 20 kn load step with the measured values (blue symbols) and with standard uncertainty bars for T and d, the fit function (red line) and the 2-σ bands (dashed red lines). the standard uncertainty of the temperature was calculated from a rectangular distribution with a half-width of 0.1 k; the standard uncertainty of the deflection was 0.000 003 mv/v. the reference temperature was 20.5 °c. figure 5 shows how the results change if another reference temperature is chosen. if the reference temperature Tref is the arithmetic mean of the temperatures Ti at which the investigation is carried out, the uncertainties become lower and their minimum value is at the reference temperature.
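the analytical gum propagation described above can also be approximated numerically. the following sketch (an illustration, not the maxima/excel implementation used in the paper) replaces the analytical partial derivatives with central finite differences, assuming uncorrelated inputs, and reproduces the table 2 values at Tref = 20.5 °c:

```python
import numpy as np

# first-order GUM propagation of u(q) and u(r) with uncorrelated inputs,
# sensitivity coefficients approximated by central finite differences
T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])                  # °C
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])   # mV/V
u_T = 0.1 / np.sqrt(3)    # rectangular distribution, half-width 0.1 K
u_d = 0.000003            # standard uncertainty of the deflection, mV/V
T_ref = 20.5

def fit(v):
    """slope and intercept per eq. (11) for the 10-component input vector v."""
    Tp, dd = v[:5] - T_ref, v[5:]
    q = (np.mean(Tp * dd) - dd.mean() * Tp.mean()) / (np.mean(Tp**2) - Tp.mean()**2)
    return np.array([q, dd.mean() - q * Tp.mean()])

v0 = np.concatenate([T, d])
u_in = np.array([u_T] * 5 + [u_d] * 5)
h = 1e-7
u2 = np.zeros(2)
for i in range(10):
    vp, vm = v0.copy(), v0.copy()
    vp[i] += h
    vm[i] -= h
    c = (fit(vp) - fit(vm)) / (2 * h)   # sensitivity coefficients for q and r
    u2 += c**2 * u_in[i]**2
u_q, u_r = np.sqrt(u2)
# u_q ≈ 0.798e-6 (mV/V)/K and u_r ≈ 1.9e-6 mV/V, in line with table 2 at 20.5 °C
```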
figure 3: result of the regression analysis for the 20 kn force step of the 50 kn transducer (for details see the text above)

figure 4: result of figure 3 when the uncertainty of the deflection is increased to 0.000 005 mv/v (top) and when the half-width of the temperature distribution is increased to 0.5 k (bottom)

figure 5: result of figure 3 when the arithmetic mean of the temperature values Ti (22.196 °c) is taken as the reference temperature Tref

the results and figures shown above are an example of a transducer with a very linear deflection dependency on the temperature; the relative deviation between rss and ssd is 0.997. in addition, the slope value of -3.7 × 10⁻⁶ (mv/v)/k at a 0.8 mv/v signal is very low and a good value for a transfer transducer. the results for the other transducers are shown as examples in figure 6 to figure 8.

figure 6: result of the 20 kn transducer at a force of 20 kn and a reference temperature of 20.5 °c (further details as in figure 3)

figure 7: result of the 100 kn transducer at a force of 50 kn and a reference temperature of 20.5 °c (further details as in figure 3)

figure 8: result of the 200 kn transducer at a force of 200 kn and a reference temperature of 20.5 °c (further details as in figure 3)

the 20 kn transducer (figure 6) has a slope of -3 × 10⁻⁵ (mv/v)/k at 2 mv/v (20 kn force). the absolute value is larger than that of the 50 kn transducer. the linearity of the measured values is not as perfect as that of the 50 kn transducer in figure 3; the relative deviation between rss and ssd is 0.970. a special feature of the 100 kn transducer (figure 7) is the positive slope of the deflection/temperature function, whereas the 20 kn and 50 kn transducers have a negative slope.
the absolute value of the slope is slightly larger than that of the 20 kn transducer and amounts to 2.5 × 10⁻⁵ (mv/v)/k at 1 mv/v (50 kn force) and 3.9 × 10⁻⁵ (mv/v)/k at 2 mv/v (100 kn force). apart from the 23 °c result, the values are very linear. although the reason for the deviation of the 23 °c result has not yet been found, this single value had no significant effect on the fit function, as can be seen in figure 7. on the other hand, this result indicates that temperature coefficients should not be determined from measurements at only two different temperatures. finally, the deflection of the 200 kn transducer (figure 8) did not show a clear temperature dependency. the variations of the deflection appear to be random. on the other hand, the overall relative span of the deflection values in the temperature range measured is 21 … 24 ppm; this is comparable with the corresponding value of the 50 kn transducer, which showed the lowest temperature sensitivity. nevertheless, the behaviour of the 200 kn transducer should be investigated further. the results obtained to date would not be sufficient to calculate the corrections. the last step in these calculations is the determination of the function u(d̃), which describes the standard uncertainties associated with the fit function d̃. this can be realised by applying higher-order regression methods, such as cubic regression, to the uncertainty values calculated, see figure 9.

6. application

the temperature influence measured on a single transducer in different measurement campaigns, or even on different transducers, can be compared by determining (as the final step) the regression function d̃(T) and the associated uncertainty u(d̃) = u(d̃(T)) ≡ ũ(T). for the example in figure 3 with Tref = 20.5 °c, we obtained the following functions (T in °c)

d̃(T) in mv/v = −3.7 × 10⁻⁶ · T + 0.801070 ,
ũ(T) in mv/v = −1.45 × 10⁻⁸ · T³ + 1.061 × 10⁻⁶ · T² − 2.524 × 10⁻⁵ · T + 0.0001981 .
(15)

figure 9: standard uncertainty (1-σ) part of the result from figure 3; third-order regression function u(d̃) (dashed line) based on the calculated values u(d̃i) = u(d̃(Ti)) (blue dots) for reference temperatures of 20.5 °c (top) and 22.196 °c (bottom)

however, in most cases, the results will be applied in order to correct comparison measurement results d, u(d) obtained at deviating temperatures T. usually, the corrected values d̂ are calculated in accordance with

d̂(Tref) = d(T) + q·ΔT , ΔT = Tref − T , (16)

where the intercept r does not appear. when a measurement result is corrected, the regression function is defined in such a way that it runs through the uncorrected result, meaning that the intercept is no longer a free parameter. nevertheless, the regression function d̃(T) has an associated uncertainty based on the uncertainties of the slope u(q) and the intercept u(r). the formal application of the gum to (16) would yield a result that contains no contribution of u(r)

u²(d̂) = u²(d) + q²·u²(T) + (Tref − T)²·u²(q) . (17)

therefore, another model is proposed instead of (16) – namely,

d̂(Tref) = d(T) + Δd̃(Tref − T) , u²(d̂) = u²(d) + u²(Δd̃) , (18)

where

Δd̃(Tref − T) = d̃(Tref) − d̃(T) , u²(Δd̃) = u²(d̃(Tref)) + u²(d̃(T)) . (19)

this means that, instead of forcing the regression function to run through the uncorrected result, a correction value Δd̃ is added to this result. this value can be calculated from the increase (or decrease for a negative slope) of the regression function related to the temperature deviation Tref − T. this increase (or decrease) of the function is uncertain and has a value u(Δd̃). the uncertainties calculated according to (18) and (19) are larger than those yielded by (17), an effect caused mainly by the uncertainty of the intercept.
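the two correction models can be compared numerically (a sketch using the 20 kn example values: u(d) = 3 × 10⁻⁶ mv/v, ΔT = 0.5 k, u(q) and u(r) from table 2; here both u(d̃(Tref)) and u(d̃(T)) are approximated by the table 2 intercept uncertainty near the reference temperature, so the second result is close to, but not exactly, the 4.08 × 10⁻⁶ mv/v quoted in the text):

```python
import math

u_d = 3e-6                  # standard uncertainty of the deflection, mV/V
u_T = 0.1 / math.sqrt(3)    # temperature uncertainty, rectangular, half-width 0.1 K
q, u_q = -3.699e-6, 0.798e-6
dT = 0.5                    # temperature deviation ΔT, K

# model (17): formal propagation of (16), no intercept contribution
u17 = math.sqrt(u_d**2 + (q * u_T)**2 + (dT * u_q)**2)   # ≈ 3.03e-6 mV/V

# model (18)/(19): both regression-function uncertainties approximated
# by u(r) ≈ 1.9e-6 mV/V from table 2 (an approximation for this sketch)
u_reg = 1.9e-6
u19 = math.sqrt(u_d**2 + 2 * u_reg**2)                   # ≈ 4e-6 mV/V, the larger value
```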
for the example in figure 3, the standard uncertainty of the corrected deflection (ΔT = 0.5 k) in accordance with (17) is 3.03 × 10⁻⁶ mv/v, whereas the same value calculated in accordance with (18) and (19) amounts to 4.08 × 10⁻⁶ mv/v.

7. summary

the application of standard uncertainty calculation methods to the linear regression of measurement results using the least-squares approach was investigated. the method proposed can be used to calculate the uncertainty of a linear regression function based on the uncertainty of its slope and its intercept. the method can be applied to the correction of measurement results obtained at deviating temperatures but is not limited to force transducers: it can also be applied, for example, to torque or pressure transducers.

8. acknowledgements

this work is part of the project 18sib08, funded by the empir programme. i would like to thank my colleague norbert tetzlaff for his accurate measurements and calibrations in the 200 kn force standard machine at ptb.

9. references

[1] hbm: data sheet of the z30a force transducers. online [accessed 20200108]: https://www.hbm.com/fileadmin/mediapool/hbmdoc/technical/b02075.pdf (registration necessary)
[2] gtm: data sheet of the ktn-d force transducers. online [accessed 20200108]: https://www.gtm-gmbh.com/fileadmin/media/dokumente/produkte/datenblaetter/de/datenblatt_serie_ktn-d_20170419.pdf
[3] r. kumme, h. kahmann, f. tegtmeier, n. tetzlaff, d. röske, ptb's new 200 kn deadweight force standard machine, proc. of the imeko 23rd tc3, 13th tc5 and 4th tc22 international conference, 30 may to 1 june 2017, helsinki, finland.
online [accessed 20200108]: https://www.imeko.org/publications/tc3-2017/imeko-tc3-2017-032.pdf
[4] iso/iec guide 98-3:2008, uncertainty of measurement – part 3: guide to the expression of uncertainty in measurement (gum:1995). online [accessed 20200108]: https://www.iso.org/standard/50461.html; pdf version: https://www.bipm.org/utils/common/documents/jcgm/jcgm_100_2008_e.pdf
[5] maxima, a computer algebra system.
online [accessed 20200614]: http://maxima.sourceforge.net

validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 140-146
acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 140

maria alicandro1, giulio d'emilia2, donatella dominici1, antonella gaspari3, stefano marsella4, marcello marzoli4, emanuela natale2, sara zollini1
1 diceaa, università dell'aquila, via g. gronchi 18, 67100 l'aquila, italy
2 diiie, università dell'aquila, via g. gronchi 18, 67100 l'aquila, italy
3 dmmm, politecnico di bari, via orabona 4, 70125 bari, italy
4 dipartimento dei vigili del fuoco del soccorso pubblico e della difesa civile, ministero dell'interno, piazzale viminale 1, 00184 roma, italy

section: research paper
keywords: validation; measurement uncertainty; calibration; total station; building monitoring; technical rescue
citation: maria alicandro, giulio d'emilia, donatella dominici, antonella gaspari, stefano marsella, marcello marzoli, emanuela natale, sara zollini, validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations, acta imeko, vol. 10, no.
4, article 23, december 2021, identifier: imeko-acta-10 (2021)-04-23

section editors: roberto montanini, università di messina, and alfredo cigada, politecnico di milano, italy

received july 26, 2021; in final form december 6, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: emanuela natale, e-mail: emanuela.natale@univaq.it

1. introduction

seismic-damage prevention is one of the main goals of research on the management of historical buildings, and several authors have dealt with the appraisal and inventory of the building heritage [1]-[7]. a particular application concerns the evaluations required during urgent technical rescue operations. besides the primary target of searching for and rescuing survivors, soon after an earthquake most resources are spent on evaluating the level of damage suffered by buildings and infrastructures: in the immediate aftermath of the event, to support the logistics of the rescue itself and to assess the safety of the rescue operations and of the road network; and in the following phases, to implement provisional measures able to secure buildings and, in particular, the cultural heritage [8]-[12]. such an assessment is quite challenging and critical, from both the technical and the logistic point of view, due to the high number of buildings involved (up to thousands), a good part of which may be listed as cultural heritage. moreover, recent earthquakes were followed, for one to three months, by aftershocks of similar intensity, which nullified much of the effort already spent and imposed repeating the assessments anew.
up to now, in italy as well as worldwide, this task has been carried out thanks to the expertise of fire fighters or other technicians, who have had to assess the residual safety of buildings solely on the basis of a visual inspection. such an assessment is inevitably subjective, even when carried out by applying strict operational procedures, which require the damage to be analysed against well-defined schemas.

abstract

this work aims to provide a preliminary contribution to the draft of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer reliable and effective support to their assessment activities. in particular, some considerations are made regarding the effect of the number and positioning of monitoring points on the tilt determination of a building façade, in order to set up simplified procedures that are quick and easy to implement in emergency situations while guaranteeing the reliability of the results. two types of building, with different characteristics in terms of height, distance and angle with respect to the total station, are taken into account as test cases. finally, the aspects to be explored in future work are discussed, concerning the calibration of the method as a whole and the definition of all the steps of a procedure for the evaluation of the safety of a building.

to improve the efficiency and reduce the subjectivity of such assessments, the authors propose to employ a dedicated survey system designed to support rescue operations and the implementation of provisional measures. such a system is now possible thanks to the recent availability of dedicated, user-friendly human-machine interfaces, able to hide the complexity of the employed tools while maintaining the scientific value of the retrieved data.
in fact, such user-friendly interfaces make these systems accessible to untrained first responders, who are deployed in the field in the first phases of the rescue operations [8]-[12]. such systems and tools make the assessment of residual building safety less subjective and, in particular, could decisively improve the following:
• fast execution of accurate surveys of damaged buildings, so as to reduce the exposure risk of rescuers in the first phase of the emergency;
• detailed design of provisional measures;
• quantitative monitoring of building damage evolution over time;
• quantitative monitoring of the evolution of provisional measures over time, to assess their residual efficacy, in particular following aftershocks.
available technologies offer several instruments able to reach this aim: the most common methods to monitor and assess damage to cultural heritage buildings are based on satellite systems, photogrammetry, laser scanning and infrared thermography [13]-[17]. 3d reconstruction techniques of buildings based on unmanned aerial vehicle (uav) tilt photography have the advantage of providing multi-angle, three-dimensional views, but they are very time-consuming, and high levels of accuracy are not easy to achieve [18]-[21]. approaches have been proposed for identifying intact and collapsed buildings via multi-scale morphological profiles with multi-structuring elements from post-earthquake satellite imagery, or using synthetic aperture radar (sar) techniques; however, these methodologies do not examine in detail the structural characteristics of individual buildings [22]-[24]. total stations are remote sensing tools that can be easily used both indoors and outdoors, thanks to their easy installation, real-time output, resistance to rain and wind, wide range of operating temperatures and insensitivity to light conditions.
when total stations are used without reflective targets, point distances are measured remotely, at a safe distance from the target building, so that surveys are completely non-invasive and safe for the operators. as such, they reach a good balance between precision, data quality, ease of use, cost and real-time processing. studies in the literature also address the indoor calibration of total stations [25]. to obtain reliable outcomes, it is crucial to standardise procedures that take into proper account the constraints imposed by deployment in the field in emergency situations. such an environment implies numerous operational issues, due to fast-evolving scenarios with unpredictable intrusions by rescuers and vehicles. in particular:
• the impossibility of measuring some target points, especially the lowest ones, due to the interposition of obstacles (rescuers and rescue vehicles) between instrument and target;
• the need to deploy as fast as possible and to survey the minimal number of target points needed to obtain outcomes with sufficient accuracy;
• the possibility of accidental or deliberate movements of the instruments (e.g., when caterpillar vehicles have to move in between), which makes it necessary to define stable reference points, in order to correctly combine series of data acquired from different positions.
in the process of simplifying and speeding up procedures for using instrumentation in specific applications, validation techniques are needed to ensure the reliability of the results [26]-[29]. in this paper the authors, as a preliminary contribution, report their considerations regarding the impact of the number and positioning of reference and monitoring points on the tilt determination of a building façade.
this work contributes to the draft of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer reliable and effective support to their assessment activities. section 2 describes the instrumentation used for the tests and the layout of the site; the position of the measuring points is also discussed, together with the measuring set-up, which allows the monitoring points used in the data processing to be reduced step by step. in section 3, the results are presented and analysed with reference to the purpose of simplifying the procedure. conclusions and future work end the paper.

2. materials and methods

the test area is located in fossa, a small village next to l'aquila, in central italy (figure 1), hit by the 2009 earthquake [15]. the area is a square surrounded by three buildings and a tower. reference targets have been materialised on two buildings, while monitoring points have been located on the other buildings, the "house" and the "tower" in figure 2. these buildings have been chosen as test cases for the analyses because they differ in height, which implies different distances and inclination angles of the highest points with respect to the position of the measurement system, located roughly halfway between the two buildings.

figure 1. test area location [image credit: google earth].

the safer system has been used to perform the measurements. it has been designed to estimate and monitor any critical movements of structures and civil works, and its operating modes and functions have been developed to make the system as suitable as possible for the emergency activities of the fire fighters. it is composed of (figure 3):
• a leica geosystems total station (tps), model ts16 imaging, a tool used extensively in monitoring activities, which measures azimuthal and zenithal angles and distances from the instrumental centre to the measured point with high precision; these measures (polar coordinates) are transformed by the system into cartesian coordinates (x, y, z);
• the safer software, which manages the total station;
• a tablet pc, with the software installed on it, able to communicate with the leica total station (tps) both wirelessly and through dedicated cabling.
by measuring the 3d coordinates, it is possible to carry out monitoring and control activities of slow or relatively fast movements over time, both outdoors and indoors, during day and night. all measurements are carried out automatically, in order to exclude any operator errors. the measurement results are immediately processed in the field and displayed in graphical and tabular form, helping the operator to quickly interpret the phenomenon in progress and make immediate decisions. the safer system is designed to provide spatial information (3d coordinates) of discrete points, and the operational scene is schematised in figure 4. the points on the building considered to be stable are called reference points, while the ones on the building to be monitored are called monitoring points. the safer system is positioned at the centre in order to have maximum visibility. all the preliminary operations required by the standard procedures for this kind of equipment (fixing the tripod to the ground, levelling) have been correctly carried out. the measurement by the total station can be carried out in two different ways, by using:
• the infrared ray: surveying by infrared ray needs a reflective target, the prism. using the prism allows more accurate measurements and guarantees the achievement of greater distances;
• the laser beam: the survey by laser beam is adopted to detect points that are difficult to access for the operator, who would have to physically place the prisms; in this case, no external reflecting system is necessary.
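the polar-to-cartesian conversion performed by the system follows from elementary trigonometry; a minimal sketch, in which the function name and the axis convention (zenith measured from the vertical axis) are illustrative assumptions, not the safer software's api:

```python
import math

def polar_to_cartesian(azimuth_rad, zenith_rad, distance):
    """convert total-station polar measurements (azimuthal angle, zenithal
    angle, slope distance) into local cartesian coordinates; the zenith
    is measured from the vertical (z) axis, the azimuth in the
    horizontal plane (axis convention assumed for illustration)."""
    horizontal = distance * math.sin(zenith_rad)  # projection on the horizontal plane
    x = horizontal * math.sin(azimuth_rad)
    y = horizontal * math.cos(azimuth_rad)
    z = distance * math.cos(zenith_rad)           # height above the instrumental centre
    return x, y, z

# a point 100 m away, sighted 10 degrees above the horizon, at azimuth 0
x, y, z = polar_to_cartesian(0.0, math.radians(80.0), 100.0)
```

for a near-horizontal sighting the zenith angle is close to 90°, so the z component is small compared with the horizontal projection.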
reference points have to be defined to provide a stable reference, and they must be fixed according to an optimal geometry (not too close together or aligned) and on fixed points not susceptible to movement: these aspects will be taken into account in the procedure definition. seven reference points have been placed in positions independent of the monitored buildings. at the reference points, prisms are used; at the monitoring points, white/black targets (figure 5) are positioned using epoxy resins, to provide easily identifiable reference positions on which, in perspective, known displacements can be imposed for the evaluation of the metrological characteristics of the instrument and for obtaining useful information for the optimisation of the procedure. the latter targets have reflectivity characteristics similar to those of a common building wall. a coordinate system has been defined with reference to the first building, the house, in such a way that the x-axis direction is obtained by projecting on the horizontal plane the points taken on the façade, the z-axis is vertical with its origin on the horizontal plane at the height of the instrument, and the y-axis enters the wall of the house. figure 6 shows the point clouds acquired on the façades of the two buildings and the defined reference systems. for each monitoring point, 20 repeated measurements have been carried out. the monitoring points have been named as indicated in figure 7.

figure 2. monitored buildings: a) house; b) tower.
figure 3. safer system used for the analysis.
figure 4. safer system operational scene [30].
figure 5. targets used for monitoring and reference points, respectively: a) black/white target; b) prism.

on the basis of the acquired measurements, after elimination of outliers, the following analyses are carried out using the matlab software: 1.
standard deviations of the measured values of the coordinates of the monitoring points are calculated, on the basis of the 20 repeated measurements.
2. the selected points (ml 1-12 for the house, ml 14-19 for the tower) are processed by means of a least squares regression (first-degree polynomial model), and the inclination angles of the façades are determined with respect to the horizontal plane. these values are considered as a reference against which to evaluate other configurations of the monitoring points.
3. the angle of inclination is calculated for the configurations described in figure 8 and figure 9, that is:
• excluding points along vertical lines from the analysis (configurations b, c, d and e for the house; configurations b' and c' for the tower);
• excluding the points along the lowest line, only in the case of the house (configuration f);
• considering only the extreme points (configurations g and h for the house; configuration d' for the tower).

3. results

figure 10 and figure 11 show the calculated standard deviations for the monitoring points on the house and the tower: these values do not exceed 5 mm. the selected points (ml 1-12 for the house, ml 14-19 for the tower) are processed by means of a least squares regression, according to a first-degree polynomial model, and the inclination angles of the façades are determined with respect to the horizontal plane. the obtained results, in terms of inclination angles and signs of the direction cosines (cos(ry) and cos(rz)) of the normal to the plane, are summarised in table 1. the sign of the direction cosines indicates whether the façade is inclined towards the outside or the inside of the building: taking into account figure 6, in the case of the house, if the signs of cos(rz) and cos(ry) are concordant, the façade is inclined toward the outside, if discordant toward the inside; in the case of the tower, on the contrary, if the signs are concordant, the façade is inclined toward the inside.
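the processing steps listed above (outlier elimination, repeatability, least-squares plane fit) can be sketched as follows. this is an illustrative reconstruction, not the authors' matlab code: the outlier criterion (a median-absolute-deviation test) and the synthetic data are assumptions, since the paper does not state which outlier test was applied to the 20 repeated measurements.

```python
import math
import statistics

def remove_outliers(samples, k=3.0):
    """discard samples farther than k robust standard deviations
    (1.4826 * median absolute deviation) from the median; this
    criterion is an assumption, the paper does not specify the test."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples)
    if mad == 0.0:
        return list(samples)
    return [s for s in samples if abs(s - med) <= k * 1.4826 * mad]

def facade_inclination(points):
    """least-squares fit of the plane y = a + b*x + c*z (first-degree
    polynomial model, as in the paper) and inclination of the facade
    with respect to the horizontal plane, in degrees; axes follow the
    paper's convention (z vertical, y entering the wall)."""
    # build the 3x3 normal equations for the coefficients (a, b, c)
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in points:
        row = (1.0, x, z)
        for i in range(3):
            v[i] += row[i] * y
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # solve M * (a, b, c) = v by gaussian elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            v[r] -= f * v[col]
    coeff = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeff[r] = (v[r] - sum(M[r][j] * coeff[j] for j in range(r + 1, 3))) / M[r][r]
    a, b, c = coeff
    # the plane b*x - y + c*z + a = 0 has normal (b, -1, c); the tilt of
    # the facade equals the angle between this normal and the vertical
    nz = abs(c) / math.sqrt(b * b + 1.0 + c * c)
    return math.degrees(math.acos(nz))

# repeatability of one coordinate over repeated sightings (step 1)
readings = [10.001, 10.002, 9.999, 10.000, 10.050, 10.001]  # one gross error
clean = remove_outliers(readings)
spread = statistics.stdev(clean)

# synthetic near-vertical wall, leaning 0.001 m in y per metre of height (step 2)
pts = [(x, 0.001 * z, z) for x in range(4) for z in range(6)]
angle = facade_inclination(pts)  # close to 89.94 degrees
```

the pure-python normal-equation solver is used only to keep the sketch self-contained; in practice any least-squares routine gives the same plane.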
the least squares regression has also been performed for the configurations of figure 8 and figure 9, and the inclination differences with respect to the reference case have been calculated (figure 12 and figure 13). furthermore, the displacement in the horizontal direction of the highest point of both buildings has been evaluated (figure 14 and figure 15). the following observations can be made:
• for the house, changing the number of points chosen for processing may produce variations in the angle of inclination of up to 0.18° compared to the reference case; the displacements evaluated in the horizontal direction, at the height of the highest point, can reach about 20 mm.
• for the tower, changing the number of points chosen for processing may produce variations in the angle of inclination of up to 0.062° compared to the reference case; the displacements evaluated in the horizontal direction, at the height of the highest point, can reach about 25 mm.
• the results show, for both buildings, that acquiring only the extreme monitoring points yields results comparable to those of the reference case. this appears promising from the point of view of possible simplification, also considering that the standard deviation of the measurements is lower than the calculated displacements.
• it must be noticed that the standard deviation of the measurements is greater in some areas of the façade of the house, and this does not seem to be directly related to parameters such as the distance and positioning angle of the total station with respect to the building. this aspect will need to be explored in future work.

figure 6. point clouds acquired on the façades of the buildings: in red the points on the house; in blue the points on the tower.
figure 7. identification codes of the monitoring points for: a) house; b) tower.

table 1. results of the least squares regression.
                                 house        tower
inclination angle                89.99 °      89.93 °
signs of cos(ry) and cos(rz)     concordant   concordant

figure 8. configurations of measuring points for the house: a) "ref" (reference); b) "config 2"; c) "config 3"; d) "config 4"; e) "config 5"; f) "config 6"; g) "config 7"; h) "config 8".
figure 9. configurations of measuring points for the tower: a') "ref" (reference); b') "config 2'"; c') "config 3'"; d') "config 4'".
figure 10. standard deviation of the coordinates of the monitoring points for the house.
figure 11. standard deviation of the coordinates of the monitoring points for the tower.
figure 12. inclination difference with respect to the reference for the house.
figure 13. inclination difference with respect to the reference for the tower.

4. conclusions

in this paper some considerations have been made regarding the effect of the number and positioning of the monitoring points on the tilt determination of a building façade. the study has been conducted considering two buildings with different characteristics as test cases. the results show, for both buildings, that acquiring only part of the monitoring points can yield results comparable to those of the reference case. these results appear promising from the point of view of possible simplification, also considering that the standard deviation of the measurements does not exceed 5 mm and is lower than the calculated displacements (20 mm - 25 mm). in future work, the monitoring targets will be subjected to known displacements in order to calibrate the method as a whole. furthermore, the causes of the increased standard deviation of the measurements in specific areas will be investigated.
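the displacements quoted in the conclusions (about 20 mm and 25 mm at the highest points) are consistent with a rigid rotation of the façade about its base, for which the top displacement is h·tan(Δθ); a minimal check, in which the building heights are illustrative assumptions, not values reported in the paper:

```python
import math

def top_displacement_mm(height_m, tilt_change_deg):
    """horizontal displacement of the highest point for a facade that
    rotates rigidly about its base by the given tilt change (a modelling
    assumption; the paper does not state the exact formula it used)."""
    return 1000.0 * height_m * math.tan(math.radians(tilt_change_deg))

# illustrative heights (assumed, not reported in the paper)
house_mm = top_displacement_mm(6.5, 0.18)    # roughly 20 mm
tower_mm = top_displacement_mm(23.0, 0.062)  # roughly 25 mm
```

under this reading, the taller tower reaches a larger top displacement from a much smaller tilt change, which matches the trend of the reported figures.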
acknowledgments

this work was carried out with the co-financing of the storm research and development project, funded by the european commission under the horizon 2020 programme. thanks are due to angelo celano and luca macerola, of leica geosystems s.p.a., for the technical support.

references
[1] r. s. olivito, s. porzio, c. scuro, d. l. carnì, f. lamonaca, inventory and monitoring of historical cultural heritage buildings on a territorial scale: a preliminary study of structural health monitoring based on the cartis approach, acta imeko, 10 (2021) 1, pp. 57-69. doi: 10.21014/acta_imeko.v10i1.820
[2] v. sangiorgio, martiradonna, f. fatiguso, g. uva, historical masonry churches diagnosis supported by an analytic-hierarchy-process-based decision support system, acta imeko, 10 (2021) 1, pp. 6-14. doi: 10.21014/acta_imeko.v10i1.793
[3] r. spallone, g. bertola, f. ronco, digital strategies for the valorisation of archival heritage, acta imeko, 10 (2021) 1, pp. 224-233. doi: 10.21014/acta_imeko.v10i1.883
[4] v. croce, g. caroti, a. piemonte, m. g. bevilacqua, from survey to semantic representation for cultural heritage: the 3d modeling of recurring architectural elements, acta imeko, 10 (2021) 1, pp. 98-108. doi: 10.21014/acta_imeko.v10i1.842
[5] i. roselli, a. tatì, v. fioriti, i. bellagamba, m. mongelli, r. romano, g. de canio, m. barbera, m. m. cianetti, integrated approach to structural diagnosis by non-destructive techniques: the case of the temple of minerva medica, acta imeko, 7 (2018) 3, pp. 13-19. doi: 10.21014/acta_imeko.v7i3.558
[6] a. malagnino, g. mangialardi, g. zavarise, a. corallo, process modeling for historical buildings restoration: an innovation in the management of cultural heritage, acta imeko, 7 (2018) 3, pp. 95-103. doi: 10.21014/acta_imeko.v7i3.602
[7] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko, 8 (2019) 2, pp. 45-52.
doi: 10.21014/acta_imeko.v8i2.640
[8] y. hashem, j. ambani, a story of change, international centre for the study of the preservation and restoration of cultural property (iccrom), rome, italy, 2021.
[9] s. marsella, m. marzoli, l'uso di tecnologie innovative nella valutazione speditiva del rischio [the use of innovative technologies in the rapid assessment of risk], convegno internazionale di studi "monitoraggio e manutenzione nelle aree archeologiche. cambiamenti climatici, dissesto idrogeologico, degrado chimico-ambientale", l'erma di bretschneider, bibliotheca archaeologica, 2020, pp. 165-170, isbn: 9788891322029.
[10] a. utkin, l. argyriou, v. bexiga, f. boldrighini, g. cantoro, p. chaves, f. cakır, advanced sensing and information technologies for timely artefact diagnosis, pisa university press, 2019, isbn 978-88-3339-240-0.
[11] methodologies for quick assessment of building stability in the eu storm project, proceedings of the xxi international nkf conference, reykjavík, iceland, september 2018.
[12] s. marsella, m. marzoli, l. palestini, results of h2020 storm project in the assessment of damage to cultural heritage buildings following seismic events, 2020.
[13] s. zollini, m. alicandro, d. dominici, r. quaresima, m. giallonardo, uav photogrammetry for concrete bridge inspection using object-based image analysis (obia), remote sens., 12 (2020) 3180. doi: 10.13140/rg.2.2.35754.85441
[14] d. dominici, e. rosciano, m. alicandro, m. elaiopoulos, s. trigliozzi, v. massimi, cultural heritage documentation using geomatic techniques: case study: san basilio's monastery, l'aquila, 2013 digital heritage international congress (digitalheritage) 1, (2013), pp. 211-214. doi: 10.1109/digitalheritage.2013.6743735
[15] d. dominici, d. galeota, a. gregori, e. rosciano, m. alicandro, m. elaiopoulos, integrating geomatics and structural investigation in post-earthquake monitoring of ancient monumental buildings, j. appl. geod., 8 (2014), pp. 141-154. doi: 10.1515/jag-2012-0008
[16] f. yang, x. wen, x.
wang, x. li, z. li, a model study of building seismic damage information extraction and analysis on ground-based lidar data, adv. civ. eng., 2021 (2021), 5542012. doi: 10.1155/2021/5542012

figure 14. horizontal displacement of the highest point for the house.
figure 15. horizontal displacement of the highest point for the tower.

[17] i. m. e. zaragoza, g. caroti, a. piemonte, the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis, acta imeko, 10 (2021) 1, pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[18] x. chen, j. lin, x. zhao, s. xiao, research on uav oblique photography route planning in the investigation of building damage after the earthquake, iop conference series: earth and environmental science, 783 (2021), 012081. doi: 10.1088/1755-1315/783/1/012081
[19] r. zhan, h. li, k. duan, s. you, k. liu, f. wang, y. hu, automatic detection of earthquake-damaged buildings by integrating uav oblique photography and infrared thermal imaging, remote sens., 12 (2020), 2621. doi: 10.3390/rs12162621
[20] m. g. d'urso, v. manzari, s. lucidi, f. cuzzocrea, rescue management and assessment of structural damage by uav in post-seismic, isprs ann. photogramm. remote sens. spat. inf. sci., 5 (2020), pp. 61-70. doi: 10.5194/isprs-annals-v-5-2020-61-2020
[21] c. c. chuang, j. y. rau, m. k. lai, c. l.
shi, combining unmanned aerial vehicles and internet protocol cameras to reconstruct 3d disaster scenes during rescue operations, prehosp. emerg. care, 23 (2019), pp. 479-484. doi: 10.1080/10903127.2018.1528323
[22] r. zhang, k. duan, s. you, f. wang, s. tan, a novel remote sensing detection method for buildings damaged by earthquake based on multiscale adaptive multiple feature fusion, geomat. nat. hazards risk, 11 (2020), pp. 1912-1938. doi: 10.1080/19475705.2020.1818637
[23] b. wang, x. tan, d. song, l. zhang, rapid identification of post-earthquake collapsed buildings via multi-scale morphological profiles with multi-structuring elements, ieee access, 8 (2020), pp. 122036-122056. doi: 10.1109/access.2020.3007255
[24] x. xiao, w. zhai, z. liu, building damage information extraction from fully polarimetric sar images based on variogram texture features, acrs 2020 - 41st asian conference on remote sensing, 9-11 november 2020. doi: 10.5194/isprs-archives-xliii-b1-2020-587-2020
[25] l. siaudinyte, modelling of linear test bench for short distance measurements, acta imeko, 4 (2015) 2, pp. 68-71. doi: 10.21014/acta_imeko.v4i2.229
[26] g. d'emilia, s. lucci, e. natale, f. pizzicannella, validation of a method for composition measurement of a non-standard liquid fuel for emission factor evaluation, measurement, 44 (2011), pp. 18-23. doi: 10.1016/j.measurement.2010.08.016
[27] g. d'emilia, a. gaspari, e. natale, how simplifying a condition monitoring procedure affects its performances, 2021 ieee international instrumentation and measurement technology conference (i2mtc), 2021, pp. 1-5. doi: 10.1109/i2mtc50364.2021.9459924
[28] g. d'emilia, d. di gasbarro, a. gaspari, e. natale, managing the uncertainty of conformity assessment in environmental testing by machine learning, measurement, 124 (2018), pp. 560-567. doi: 10.1016/j.measurement.2017.12.034
[29] g. d'emilia, a. gaspari, e.
natale, dynamic calibration uncertainty of three-axis low frequency accelerometers, acta imeko, 4 (2015) 4, pp. 75-81. doi: 10.21014/acta_imeko.v4i4.239
[30] leica geosystems ag, part of hexagon. online [accessed 14 december 2021]: https://leica-geosystems.com/

classification of brain tumours using artificial neural networks

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, pp. 1-7

b. v. subba rao1, raja kondaveti2, r. v. v. s. v. prasad2, v. shanmukha rao3, k. b. s. sastry4, bh. dasaradharam5

1 department of information technology, pvp siddhartha institute of technology, vijayawada 520007, india
2 department of it, swarnandra college of engineering and technology, narasapuram, india
3 department of information technology, andhra loyola college of engineering and technology, vijayawada 520008, india
4 department of computer science, andhra loyola college of engineering and technology, vijayawada 520008, india
5 department of cse, nri institute of technology, agiripalli 521212, andhra pradesh, india

section: research paper

keywords: artificial neural networks; brain tumour; classification; magnetic resonance brain image; wavelet transform

citation: b. v.
subba rao, raja kondaveti, r. v. v. s. v. prasad, v. shanmukha rao, k. b. s. sastry, bh. dasaradharam, classification of brain tumours using artificial neural networks, acta imeko, vol. 11, no. 1, article 35, march 2022, identifier: imeko-acta-11 (2022)-01-35

section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india

received december 28, 2021; in final form february 19, 2022; published march 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: b. v. subba rao, e-mail: bvsubbarao90@gmail.com

1. introduction

if a person is suspected of having a brain tumour, the doctor may recommend a number of tests and procedures to identify the tumour present in the brain and to determine whether it has spread to other parts of the body. if a tumour is found in the brain, the doctor takes a biopsy, collecting a sample of tissue for examination. in certain situations, the patient may suffer paralysis; in such cases, before the biopsy, magnetic resonance (mr) [1] brain images are taken to study whether the tumour is benign or malignant. two different types of tumour are mainly found in mr brain images: benign tumours and malignant tumours [2], [3]. the stages of the study in the proposed work are magnetic resonance imaging (mri), feature extraction and classification. the following subsections describe benign and malignant tumours.

abstract

the magnetic resonance (mr) brain image is very important for medical analysis and diagnosis. such images are generally acquired in the radiology department to image the anatomy as well as the general physiological processes of the human body.
in this process, magnetic resonance imaging uses a strong magnetic field, its gradients and radio waves to produce pictures of human organs. mr brain images are also used to identify blood clots or damaged blood vessels in the brain. an artificial neural network is a nonlinear information processing model that has been used effectively for solving supervised pattern recognition tasks, thanks to its ability to generalise to real-world problems. here, an artificial neural network (ann) is used to classify a given mr brain image as containing a benign or a malignant tumour. benign tumours are generally not cancerous: they are unable to grow or spread in the human body or, in rare cases, grow very slowly, and once removed they do not return. malignant tumours, on the other hand, are cancerous: their cells grow and easily spread to other parts of the human body. in summary: benign tumours are harmless, either unable to spread or develop or doing so slowly, and they generally do not return after removal; in premalignant growths the cells are not yet cancerous but may become malignant; malignant growths are cancerous, and their cells can grow and spread to other parts of the body. in the proposed framework, a wavelet transform is first applied to extract the features from the image, including tumour shape and intensity characteristics as well as texture features. finally, the ann classifies the input feature set as a benign or malignant tumour. the main objective is to identify whether a tumour is benign or malignant.

1.1.
Benign tumour
A tumour is an abnormal growth of cells that serves no purpose. A benign tumour is not a cancerous tumour: it does not invade nearby tissue or spread to other parts of the body the way cancer can, and in general the outlook with benign tumours is excellent. However, benign tumours can be serious if they press on vital structures such as blood vessels or nerves, so sometimes they require treatment and other times they do not. The exact cause of a benign tumour is often unknown. It develops when cells in the body divide and grow at an excessive rate. Normally, the body balances cell growth and division: when old or damaged cells die, they are automatically replaced with new, healthy cells. In the case of tumours, dead cells remain and form a growth known as a tumour. Cancer cells grow in a similar way; however, unlike the cells in benign tumours, malignant cells can invade nearby tissue and spread to other parts of the body.
1.2. Malignant tumour
Malignant tumours [4] are cancerous. They develop when cells grow uncontrollably; if the cells continue to grow and spread, the disease can become life-threatening. Malignant tumours can grow quickly and spread to other parts of the body in a process called metastasis. The cancer cells that move to other parts of the body are the same as the original ones, but they can invade other organs.
If lung cancer spreads to the liver, for instance, the cancer cells in the liver are still lung cancer cells. Different types of malignant tumour start in different types of cell. The term malignant indicates that there is a moderate to high likelihood that the growth will spread beyond the site where it initially develops; these cells can spread by moving through the bloodstream or through lymph vessels. A malignant brain tumour is a cancerous growth in the brain. It differs from a benign brain tumour, which is not cancerous and tends to grow more slowly. Malignant brain tumours contain cancer cells and often do not have clear borders; they are considered dangerous because they grow quickly and invade surrounding brain tissue. In the existing mechanism, an MR brain image is taken and a biopsy known as a follicular dendritic cell tumour (FDCT) pathology test is conducted. The FDCT test is performed, noise is removed, and features are then extracted from the MR brain image. After feature extraction, a support vector machine (SVM) classification algorithm is applied to classify the features and characteristics. With the SVM, however, both accuracy and speed are poor, and the results may not be clear and accurate. Recognising this problem, we propose a new classification approach based on artificial neural networks (ANN), used to improve the accuracy of the classifier and to increase classification speed.
2. Our proposed methodology and its discussion
The proposed ANN extracts the features from the brain image and classifies the brain tumour.
There are three stages in our proposed methodology for determining whether a tumour is benign or malignant:
a) pre-processing
b) feature extraction
c) classification
2.1. Pre-processing
In this step the proposed framework uses the median filter, which removes noise from the MR brain image. Noise in this context means unwanted data present in the image. The median filter is robust and edge-preserving: it reduces salt-and-pepper noise in the MR brain image and also reduces blurring by applying a smoothing technique. The key idea of the median filter is that it replaces each pixel in the image with the median brightness value of its neighbouring pixels. The median filter also eliminates impulse noise, which makes it a suitable pre-processing method for our approach.
2.2. Feature extraction
After pre-processing, a noise-free MR brain image is obtained by applying the median filter, and features are then extracted from that image. Feature extraction is the process of identifying a set of features in an image; features can be obtained from colour, shape and texture. Good features are informative and distinctive, and offer accuracy, locality, reliability, robustness and efficiency, all of which matter in the classification process. Nevertheless, feature extraction remains a challenging task. Many feature extraction techniques are available; in our proposed method we use the db4 (Daubechies 4) wavelet transform to extract features such as the standard deviation and the minimum and maximum values of the wavelet coefficients.
2.3. Classification
After extracting the features from the image by applying the db4 wavelet transform technique.
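Before moving on, a minimal pure-Python sketch of the median filtering step from Section 2.1 may be helpful. This is illustrative only, not the authors' code; the 3×3 window and the edge policy (using only pixels that fall inside the image) are our assumptions.

```python
def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood.

    img: 2-D list of grey levels. Border pixels use only the neighbours
    that fall inside the image (an assumed edge policy)."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A flat region corrupted by one salt-and-pepper spike:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # prints 10: the impulse is removed
```

The impulse value 255 never enters the output because it is an extreme of every window that contains it, which is exactly why the median filter suppresses salt-and-pepper noise without the blurring a mean filter would introduce.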
The input feature values are used for classification. Many classification algorithms and techniques are available; the existing system used the SVM [5] classification technique, but its accuracy is not up to the mark, and its processing is slow and time-consuming. To overcome these problems, we use an artificial neural network classifier for image classification, specifically a back-propagation neural network, with classification performed by a multilayer perceptron algorithm. After applying these techniques, the output labels the MR brain image as containing a benign or a malignant tumour.
3. db4 wavelet transform
The Daubechies wavelet transforms are defined in the same way as the Haar wavelet transform, by computing averages and differences via scalar products with scaling signals and wavelets; the only difference between them lies in how these scaling signals and wavelets are defined. For the Daubechies wavelet transforms [6], the scaling signals and wavelets have slightly longer supports, i.e., they produce averages and differences using just a few more values from the signal. The Daubechies D4 transform has four wavelet and scaling function coefficients. The scaling function coefficients are

h0 = (1 + √3) / (4√2), h1 = (3 + √3) / (4√2), h2 = (3 − √3) / (4√2), h3 = (1 − √3) / (4√2). (1)

Each step of the wavelet transform applies the scaling function to the data input. If the original data set has n values, the scaling function is applied in each wavelet transform step to calculate n/2 smoothed values; in the ordered wavelet transform, the smoothed values are stored in the lower half of the n-element input vector. This can be represented as follows.
g0 = h3; g1 = −h2; g2 = h1; g3 = −h0. (2)

The wavelet transform applies the wavelet function to the data input. If the original data set has n values, the wavelet function is applied to calculate n/2 differences. The scaling and wavelet values are determined by taking the inner product of the coefficients with four data values, as shown in the following equations.

Daubechies D4 scaling function:

a[i] = h0 s[2i] + h1 s[2i + 1] + h2 s[2i + 2] + h3 s[2i + 3]. (3)

Daubechies D4 wavelet function:

c[i] = g0 s[2i] + g1 s[2i + 1] + g2 s[2i + 2] + g3 s[2i + 3]. (4)

Each iteration of the wavelet step calculates a scaling value and a wavelet function value.
4. Artificial neural network
A neural network consists of formal neurons connected so that each neuron's output serves in turn as the input of other neurons, much as the axon terminals of a biological neuron are connected via synapses to the dendrites of other neurons. The number of neurons and how they are interconnected determine the architecture of the neural network. Artificial neural networks are one of the main tools used in artificial intelligence. As the "neural" part of their name suggests, they are brain-inspired systems intended to replicate the way humans learn. Neural networks consist of input and output layers, as well as hidden layers of units that transform the input into something the output layer can use. They are excellent tools for finding patterns that are far too complex or numerous for a human programmer to extract and teach the machine to recognise. The multi-layer neural network is the most widely applied neural network and has been used in much research to date.
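Equations (1)-(4) can be checked with a short pure-Python sketch of one D4 analysis step. The periodic wrap-around at the end of the signal is our assumption, since the text does not specify a boundary treatment, and the example signal is invented for illustration.

```python
from math import sqrt

# D4 scaling coefficients from equation (1); the wavelet coefficients
# follow from equation (2).
h = [(1 + sqrt(3)) / (4 * sqrt(2)),
     (3 + sqrt(3)) / (4 * sqrt(2)),
     (3 - sqrt(3)) / (4 * sqrt(2)),
     (1 - sqrt(3)) / (4 * sqrt(2))]
g = [h[3], -h[2], h[1], -h[0]]

def d4_step(s):
    """One analysis step, equations (3) and (4): n input values give
    n/2 smoothed values and n/2 detail (difference) values.
    Periodic extension at the signal boundary is assumed."""
    n = len(s)
    smooth = [sum(h[k] * s[(2 * i + k) % n] for k in range(4))
              for i in range(n // 2)]
    detail = [sum(g[k] * s[(2 * i + k) % n] for k in range(4))
              for i in range(n // 2)]
    return smooth, detail

signal = [31.0, 28.0, 40.0, 38.0, 33.0, 27.0, 35.0, 30.0]  # e.g. one image row
smooth, detail = d4_step(signal)

# The kind of features used in Section 2.2: minimum, maximum and
# standard deviation of the detail coefficients.
mean = sum(detail) / len(detail)
features = (min(detail), max(detail),
            (sum((d - mean) ** 2 for d in detail) / len(detail)) ** 0.5)
```

Two properties follow from the coefficients: the scaling coefficients sum to √2 (so a constant signal gives a constant smoothed output), and the wavelet coefficients sum to zero (so a constant signal gives zero detail), which is why the detail channel isolates local variation.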
A back-propagation algorithm can be used to train these multilayer feed-forward networks with differentiable transfer functions. It performs function approximation, pattern association and pattern classification. The term back propagation refers to the process by which derivatives of the network error, with respect to the network weights and biases, are computed. Training ANNs by back propagation involves three phases: (i) the feed-forward of the input training pattern, (ii) the calculation and back propagation of the associated error, and (iii) the adjustment of the weights. This process can be used with a number of different optimisation procedures. Figure 1 shows the artificial neural network [7]-[10] procedure: it contains the input data and the hidden-layer processing, and then produces the output. A neural network has at least three interconnected layers. The first layer consists of input neurons; those neurons send data on to the deeper layers, which in turn send the final output data to the last output layer. All the inner layers are hidden and are formed by units that adaptively transform the information received from layer to layer through a series of transformations. Each layer acts as both an input and an output layer, which allows the ANN to recognise more complex objects [11]-[15]; collectively, these inner layers are known as the neural layers. Figure 1 depicts an artificial neural network.
5. Results
Based on our proposed method and the discussion above, we took a series of benign and malignant tumour images as input MR images. A malignant tumour is a fast-growing cancer that spreads to other areas of the brain and spine. In general, brain tumours are graded from one to four according to their behaviour, for example how fast they grow and how likely they are to grow back after treatment.
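The three back-propagation phases described in Section 4 can be sketched in a minimal pure-Python network. The architecture, learning rate and toy feature data below are our illustrative assumptions, not the authors' setup.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """2-input, one-hidden-layer, 1-output perceptron trained by the three
    back-propagation phases: (i) feed-forward, (ii) error back-propagation,
    (iii) weight adjustment."""

    def __init__(self, hidden=4):
        self.w1 = [[random.uniform(-1, 1) for _ in range(3)]          # 2 inputs + bias
                   for _ in range(hidden)]
        self.w2 = [random.uniform(-1, 1) for _ in range(hidden + 1)]  # hidden + bias

    def forward(self, x):                                  # phase (i): feed-forward
        self.h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w1]
        z = sum(wj * hj for wj, hj in zip(self.w2, self.h)) + self.w2[-1]
        self.y = sigmoid(z)
        return self.y

    def train_step(self, x, t, lr=0.5):
        y = self.forward(x)
        delta_y = (y - t) * y * (1 - y)                    # phase (ii): error terms
        delta_h = [delta_y * self.w2[j] * hj * (1 - hj)
                   for j, hj in enumerate(self.h)]
        for j, hj in enumerate(self.h):                    # phase (iii): weight update
            self.w2[j] -= lr * delta_y * hj
        self.w2[-1] -= lr * delta_y
        for j, w in enumerate(self.w1):
            w[0] -= lr * delta_h[j] * x[0]
            w[1] -= lr * delta_h[j] * x[1]
            w[2] -= lr * delta_h[j]
        return (y - t) ** 2

# Invented, normalised feature pairs (think: wavelet std / max value);
# 0.0 stands for benign, 1.0 for malignant.
data = [((0.1, 0.2), 0.0), ((0.2, 0.1), 0.0),
        ((0.8, 0.9), 1.0), ((0.9, 0.8), 1.0)]

net = TinyMLP()
for _ in range(2000):
    for x, t in data:
        net.train_step(x, t)
```

After training, `net.forward` maps a feature pair to a value in (0, 1) that can be thresholded at 0.5 to yield the benign/malignant decision; the squared error returned by each `train_step` shrinks as the three phases repeat.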
A malignant tumour is either grade three or four, whereas grade one or two tumours are generally classed as benign or non-cancerous. Most malignant tumours are secondary cancers, which means they started in another part of the body and spread to the brain; primary brain tumours are those that started in the brain. Figure 2 shows the series of benign tumour images and Figure 3 the series of malignant tumour images that are used as input images to our method.
Figure 1. Artificial neural network.
Figure 2. Series of benign tumours in brain MR images (input images).
Figure 3. Series of malignant tumours in brain MR images (input images).
Figure 4. After applying the db4 wavelet transform.
After taking the input MR image, we applied our proposed pre-processing, feature extraction and classification techniques to determine whether the resulting image contains a benign or a malignant tumour. Initially, the original grey-scale image is taken in the pre-processing step and converted into a binary image; the binary image is then cleaned by applying a smoothing technique, and noise is reduced by this pre-processing. After pre-processing, we use the db4 wavelet transform method to identify and extract the features in the given MR image; salt-and-pepper noise is also reduced by this technique. After the features have been identified, the proposed ANN classification technique is applied, and the parameters standard deviation, maximum value and minimum value are observed to classify the image as benign or malignant. Figure 4 shows the pre-processing working procedure and the feature extraction using the db4 wavelet transform.
It shows how the original grey-scale input image is converted into a binary image and then into the cleaned binary image, after which the db4 wavelet transform is applied. The histogram representation is shown in Figure 5: it shows the highest and lowest pixel values of the binary image, along with the parameter values of the standard deviation, maximum value and minimum value. Table 1 presents the stratified cross-validation summary, and Table 2 the detailed accuracy by class. The confusion matrix is shown in Table 3, which summarises the classification of the given input images as benign or malignant by the ANN [16], [17]. The incidence of brain tumours in India is steadily rising, with more and more cases reported every year among people of various age groups. Brain tumours are ranked as the tenth most common type of tumour among Indians; more than 32,000 cases of brain tumour are reported in India every year, and more than 28,000 people reportedly die of brain tumours annually. A brain tumour is a serious condition and can be fatal if not detected early and treated. In the results we have disclosed the category of tumour, benign or malignant, using our methodology, so that the type of cancer can be predicted in advance. Table 4 shows the distinctive features extracted by the db4 wavelet transform, namely the standard deviation and the minimum and maximum values, for a series of MR images; based on these values the system readily classifies a tumour as benign or malignant.
6. Conclusion
Our proposed method identifies a tumour in the given MR brain images and classifies it as benign (normal) or malignant (cancerous) at an early stage. This plays an important role in detecting brain cancer very early, which reduces the death rate.
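As a consistency check (ours, not part of the original method), the accuracy and kappa statistic reported in Table 1 follow directly from the confusion matrix in Table 3:

```python
# Confusion matrix from Table 3: rows = actual class, cols = predicted (M, B).
cm = [[8, 2],
      [3, 7]]

total = sum(sum(row) for row in cm)
correct = cm[0][0] + cm[1][1]
accuracy = correct / total                 # 15/20 = 0.75, as in Table 1

# Chance agreement p_e for Cohen's kappa: product of row and column
# marginals, summed over the classes.
pe = sum((sum(cm[i]) / total) * (sum(row[i] for row in cm) / total)
         for i in range(2))
kappa = (accuracy - pe) / (1 - pe)         # 0.5, matching the kappa in Table 1
```

The per-class precisions in Table 2 come from the same matrix: 8/11 ≈ 0.727 for class M and 7/9 ≈ 0.778 for class B, whose weighted average is the reported 0.753.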
The scope for future improvement of this work is to connect a database so that a huge number of images can be used in detecting malignant growths. Accuracy could be improved by using other algorithms, such as convolutional neural networks, support vector machines and others. The computer-aided classification system of our method takes any MR brain scan image, analyses it, and gives the output "cancer-infected brain" if the scan contains a malignant tumour, or "cancer-free brain" if it contains a benign tumour. Overall, we have succeeded in identifying the tumour in the given input MR images. We have successfully deployed our proposed methodology and are able to classify a tumour as benign or malignant using an artificial neural network. This methodology supports both patients and doctors in identifying a tumour somewhat earlier, which may save lives.
Figure 5. Histogram representation.

Table 1. Stratified cross-validation.
S. No  Summary                           Validation   % of validation
1      Correctly classified instances    15           75
2      Incorrectly classified instances  5            25
3      Kappa statistic                   0.5
4      Mean absolute error               0.3299
5      Root mean squared error           0.5034
6      Relative absolute error           65.9703 %
7      Root relative squared error       100.6852 %
8      Total number of instances         20

Table 2. Detailed accuracy by class.
TP rate  FP rate  Precision  Recall  F-measure  MCC    ROC area  PRC area  Class
0.800    0.300    0.727      0.800   0.762      0.503  0.700     0.735     M
0.700    0.200    0.778      0.700   0.737      0.503  0.700     0.724     B
0.750    0.250    0.753      0.750   0.749      0.503  0.700     0.729     Weighted avg.

Table 3. Confusion matrix.
a  b  ← classified as
8  2  a = M
3  7  b = B

References
[1] Evangelia I. Zacharaki, Sumei Wang, Sanjeev Chawla, Dong Soo Yoo, Ronald Wolf, Elias R.
Melhem, Christos Davatzikos, Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme, Magn. Reson. Med., 62(6) (2009), pp. 32-39. doi: 10.1002/mrm.22147
[2] Federica Vurchio, Giorgia Fiori, Andrea Scorza, Salvatore Andrea Sciuto, Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study, Acta IMEKO, 10(2) (2021), pp. 119-125. doi: 10.21014/acta_imeko.v10i2.1047
[3] G. Çınarer, B. G. Emiroğlu, Classification of brain tumors by machine learning algorithms, 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), IEEE, Ankara, Turkey, 11-13 October 2019, pp. 1-4. doi: 10.1109/ISMSIT.2019.8932878
[4] Heba Mohsen, El-Sayed A. El-Dahshan, El-Sayed M. El-Horbaty, Abdel-Badeeh M. Salem, Classification using deep learning neural networks for brain tumors, Future Computing and Informatics Journal, 3(1) (2018), pp. 68-71. doi: 10.1016/j.fcij.2017.12.001
[5] Isabel Martinez Espejo Zaragoza, Gabriella Caroti, Andrea Piemonte, The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis, Acta IMEKO, 10(1) (2021), pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[6] V. Gavini, G. R. Jothi Lakshmi, M. Z. U. Rahman, An efficient machine learning methodology for liver computerized tomography image analysis, International Journal of Engineering Trends and Technology, 69(7) (2021), pp. 80-85. doi: 10.14445/22315381/IJETT-V69I7P212
[7] N. J. Krishna Kumar, R. Balakrishna, EEG feature extraction using Daubechies wavelet and classification using neural network, International Journal of Pure and Applied Mathematics, 118(18) (2018), pp. 3209-3223. doi: 10.26438/ijcse/v7i2.792799
[8] O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, A. M. Umar, O. U. Linus, H. Arshad, A. A. Kazaure, U. Gana, M. U.
Kiru, Comprehensive review of artificial neural network applications to pattern recognition, IEEE Access, 7 (2019), pp. 158820-158846. doi: 10.1109/ACCESS.2019.2945545
[9] A. Lay-Ekuakille, C. Chiffi, A. Celesti, M. Z. U. Rahman, S. P. Singh, Infrared monitoring of oxygenation process generated by robotic verticalization in bedridden people, IEEE Sensors Journal, 21(13) (2021), pp. 14426-14433. doi: 10.1109/JSEN.2021.3068670
[10] M. Z. U. Rahman, S. Surekha, K. P. Satamraju, S. S. Mirza, A. Lay-Ekuakille, A collateral sensor data sharing framework for decentralized healthcare systems, IEEE Sensors Journal, 21(24) (2021), pp. 27848-27857. doi: 10.1109/JSEN.2021.3125529
[11] A. Lay-Ekuakille, M. A. Ugwiri, C. Liguori, S. P. Singh, M. Z. U. Rahman, D. Veneziano, Medical image measurement and characterization: extracting mechanical and thermal stresses for surgery, Metrology and Measurement Systems, 28(1) (2021), pp. 3-21. doi: 10.24425/mms.2021.135998
[12] A. Tarannum, M. Z. U. Rahman, L. K. Rao, T. Srinivasulu, A. Lay-Ekuakille, An efficient multi-modal biometric sensing and authentication framework for distributed applications, IEEE Sensors Journal, 20(24) (2020), pp. 15014-15025. doi: 10.1109/JSEN.2020.3012536
[13] A. Tarannum, M. Z. U. Rahman, T. Srinivasulu, An efficient multi-mode three phase biometric data security framework for cloud computing-based servers, International Journal of Engineering Trends and Technology, 68(9) (2020), pp. 10-17. doi: 10.14445/22315381/IJETT-V68I9P203
[14] A. Tarannum, M. Z. U. Rahman, T. Srinivasulu, A real time multimedia cloud security method using three phase multi user modal for palm feature extraction, Journal of Advanced Research in Dynamical and Control Systems, 12(7) (2020), pp. 707-713. doi: 10.5373/JARDCS/V12I7/20202053
[15] M. Egmont-Petersen, D. de Ridder, H. Handels, Image processing with neural networks – a review, Pattern Recognition, 35(10) (2002), pp. 2279-2301. doi: 10.1016/S0031-3203(01)00178-9
[16] H. T. Siegelmann, Eduardo D. Sontag, Analog computation via neural networks, Theoretical Computer Science, 131(2) (1994), pp. 331-360. doi: 10.1016/0304-3975(94)90178-3
[17] Cancer types, on Cancer.Net.

Table 4. Distinctive features of a set of images.
Sl. No   1       2       3       4       5       6       7       8       9
Max 1H   210.2   77.36   131.1   206.8   137.1   110.6   93.98   120.9   129.4
Min 1H   -55.6   -96.15  -151.6  -161.9  -141.78 -118.3  -120.2  -141.7  -105.9
SD 1H    11.5    9.657   14.8    16.03   14.11   9.67    9.657   14.21   11.69
Max 1V   169.9   76.33   115.4   218.5   131.9   79.18   94.74   132.6   114.5
Min 1V   -239.4  -57.54  -115.6  -242.8  -126.9  -99.18  -99.68  -178    -110.7
SD 1V    11.21   9.009   10.81   19.18   13.05   8.814   11.09   21.6    14.59
Max 1D   112.8   24.22   54.88   110.9   52.09   53.31   44.47   68.54   36.22
Min 1D   -0.597  -25.92  -48.83  -93.87  -56.78  -73.87  -42.41  -79.63  -47.09
SD 1D    4.803   2.709   4.8148  7.208   5.539   4.262   4.145   6.334   4.186
Max 2H   369.1   315.3   218.6   324.5   356.6   250.3   257.3   417.5   302.6
Min 2H   -354.8  -240.6  -216.1  -465.8  -375.3  -446.5  -195.4  -284.2  -251.3
SD 2H    38.02   37.87   39.66   47.23   51.1    30.99   29.67   43.83   41
Max 2V   342.6   306.4   218.4   367.2   344     184.3   243.4   384.1   296.3
Min 2V   -275.2  -244.6  -299.8  -390    -315.6  -187.9  -238.3  -313    -214.6
SD 2V    46.6    35.39   35.95   53.32   47.48   30.15   33.09   63.09   54.28
Max 2D   100.9   107.3   90.2    173.6   188.7   157.2   135.2   177.7   242.2
Min 2D   -122.4  -126.5  -144.3  -190.6  -199.6  -117.8  -126.6  -167.7  -162.9
SD 2D    16.55   14.9    19.01   25.31   23.67   17.62   17.75   27.46   33.58
Max 3    1066    1132    1107    1118    1114    1117    1128    1014    1082
Min 3    -28.66  -122.7  -91.2   8.373   -121.3  -105.9  -114.6  -148.6  -112.3
SDB      255.3   245.2   282.6   217.7   203.4   171.3   212.2   248.3   254
E        97.83   97.86   96.73   95.92   93.16   96.23   98.63   94.18   96.33
Image?   M       M       M       M       M       B       B       B       B
Online [Accessed 25 March 2022] https://www.cancer.net/cancer-types

Standards and affordances of 21st-century digital learning: using the Experience Application Programming Interface and the Augmented Reality Learning Experience Model to track engagement in extended reality
ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-6
Acta IMEKO | www.imeko.org September 2022 | Volume 11 | Number 3 | 1
Jennifer Wolf Rogers1, Karen Alexander2
1 People Accelerator, The Woodlands, Texas, USA
2 XRconnected, Pittsburgh, Pennsylvania, USA
Section: Technical Note
Keywords: xAPI; ARLEM; augmented reality; virtual reality; data transformation; capability development; training; learning; education
Citation: Jennifer Wolf Rogers, Karen Alexander, Standards and affordances of 21st-century digital learning: using the experience application programming interface and the augmented reality learning experience model to track engagement in extended reality, Acta IMEKO, vol. 11, no. 3, article 7, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-07
Section Editor: Zafar Taqvi, USA
Received March 1, 2022; in final form September 16, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Karen Alexander, PhD, karen@xrconnected.com
1.
Introduction
As the delivery of learning has been increasingly digitized over the past few decades, standards such as SCORM, the Sharable Content Object Reference Model, have helped to ensure interoperability across learning management systems (LMS). But the 21st century has brought new, powerful tools for learning that enable educational and training experiences far richer than those available via a desktop or laptop computer. Virtual, augmented, and mixed reality (VR, AR, MR) bring affordances to learning that far surpass what was previously available. Immersive, embodied learning in 3D environments, with interactive 3D objects and collaborative engagements with teachers or other learners, will revolutionize education and training as we know it. For these reasons, new standards that can aid in the capture of data about a learner's experience have arisen, notably the Augmented Reality Learning Experience Model (ARLEM) and the Experience Application Programming Interface (xAPI). With xAPI and ARLEM, specific learner behaviour can be directly tracked and measured as it is shaped and/or changes in a specific interaction, thus permitting predictions of transfer from knowledge to demonstrable skill. Adoption of these standards is key to avoiding silos of information and data around associated learner development and behaviour change encoded in different systems and formats that make communication across them difficult. In Section 2 we discuss how new technologies for learning demand new means of assessment. Section 3 introduces xAPI, its structure, and design principles for interoperable data structures. In Section 4, we describe the Augmented Reality Learning Experience Model, or ARLEM. A case study illustrates xAPI in virtual reality in Section 5.
Abstract: The development of new extended reality (XR) technologies for learning enables the capture of a richer set of data than has previously been possible. To derive a benefit from this wealth of data requires new structures appropriate to the new learning activities these immersive learning tools afford. The Experience Application Programming Interface (xAPI) and the Augmented Reality Learning Experience Model (ARLEM) standards have been developed in response, and their adoption will help ensure interoperability as XR learning is more widely deployed. This paper briefly describes what is different about XR for learning and provides an account of xAPI and its structures as well as design principles for its use. The relationship between environmental context and ARLEM is explained, and a case study of a VR experience using xAPI is examined. The paper ends with an account of some of the promises for collecting data from early childhood learning experiences and an unsuccessful attempt at a study using augmented reality with young children.

In Section 6, the challenges and opportunities in using AR, VR and xAPI in early childhood education are outlined. In conclusion, we point toward possibilities for future developments in the fields of extended reality (XR) for training and interoperable standards for data collection.
2. New ways of learning, new ways of measuring
Among the affordances of virtual reality that make it such an effective tool for learning is the way that it engages the senses of proprioception and presence while permitting movement and gesture as the learner interacts with the virtual world and content within it.
Research has shown that large gestures and bodily movements enhance the acquisition and retention of knowledge, and VR is able to deliver experiences that allow such movement, unlike the small, hands-only interactions we have when using a computer keyboard interface for digital learning [1]. AR and MR deliver digital content within a real-world environment, and such content appears to the learner as present in space alongside them. The learner can modify the size of the virtual objects with which they are interacting or rotate them for a different view, walk around them, and experience the content in a fully embodied, present manner. XR technologies engage the body as it is inhabited in space and tap into embodied cognition. As researcher Mina Johnson-Glenberg notes, "human cognition is deeply rooted in the body's interactions with the world and our systems of perception." But Johnson-Glenberg also acknowledges that "for several decades, the primary interface in educational technology has been the mouse and keyboard; however, these are not highly embodied interface tools" [1]. XR permits the collection of data from bodily movements and interactions. One of the benefits of this can be seen in recent research showing that the combination of physical movement and a gaming element in learning experiences for children not only enhances their ability to learn, but in fact also enhances cognitive function [2], [3]. With such new tools that can engage the body, "we need better learning analytics for real-time and real-world interaction" [4].
3. The Experience API
3.1. Experience API (xAPI) data
The Experience API (xAPI) is a modern data standard that is uniquely positioned to measure and track human behaviour in learning scenarios in a manner that more closely approximates behaviour in everyday situations and experiences.
The xAPI differs from traditional, compliance-based, linear forced-choice methods of measuring learning (such as scoring responses to a finite series of multiple-choice questions and distractors). Instead, it takes a more dynamic and analytical approach, acknowledging the often less-than-predictable nature of human response, and allowing learners to immerse themselves in a variety of contexts and respond organically (and, in some cases, automatically, without an intentional initiation of cognitive processing) to a variety of stressors and/or motivators in a manner that more closely approximates their true everyday behaviour in the real world. Additionally, xAPI provides a standard JavaScript Object Notation (JSON) syntax that facilitates the collection and subsequent analysis of these learner behaviours across learning scenarios and experiences, providing assurances around the degree of predictability of a learner response over time, as well as the opportunity to correlate behaviour in practice/simulated environments with behaviour from the same individual/group of individuals, and potential real-world outcomes arising from that behaviour, in the physical world.
3.2. Key considerations for structuring data collection using xAPI
Though xAPI is a broad standard and syntax aimed at increasing the efficacy of all modalities associated with learning experience, immersive environments are particularly well-suited to this type of measurement. In XR experiences, there is a unique opportunity for learners to embody specific roles and interact with specific environments and objects, as well as demonstrate behaviours that are highly analogous to the physical world.
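As an illustration of the JSON statement syntax just mentioned, the following builds an xAPI-style statement (actor, verb, object, context, result) with Python's standard json module. The IRIs, the actor's mailbox, and the extension keys are placeholders invented for this sketch; real deployments use registered xAPI vocabularies and send the serialized statement to a Learning Record Store.

```python
import json

# Hypothetical identifiers -- real xAPI profiles publish registered IRIs.
statement = {
    "actor": {"name": "Jane", "mbox": "mailto:jane@example.com"},
    "verb": {"id": "https://example.com/verbs/released",
             "display": {"en-US": "released"}},
    "object": {"id": "https://example.com/activities/pressure-valve",
               "definition": {"name": {"en-US": "the pressure valve"}}},
    "context": {"extensions": {
        "https://example.com/ext/plant-state":
            "operating parameters trending out of limits"}},
    "result": {"extensions": {
        "https://example.com/ext/outcome": "pressure stabilization"}},
}

record = json.dumps(statement)      # the payload that would be sent to an LRS
parsed = json.loads(record)
print(parsed["verb"]["display"]["en-US"])  # prints: released
```

Because every statement follows the same actor-verb-object shape, records from different simulations can be queried uniformly, which is the interoperability property the standard is designed to provide.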
when structuring xapi data collection in these immersive scenarios, it is important to consider the ways in which people, objects, and actions/behaviours might interact with one another, the outcomes and/or consequences that may arise from these interactions, and how these interactions and interdependencies might be captured. ideally, data collection will be structured using the base syntax to answer the following types of questions:
● what is the environmental context?
● who/what is present with the learner in this environment?
● how does the environment (and/or people or objects within it) trigger a learner behaviour?
● how may learner behaviour be described in such a way that it may be generalized across other immersive environments? is this behaviour tracked in the physical world in the form of metrics and/or key performance indicators (kpis)? how is it described/reported?
● does the environmental context change as a result of the learner’s behaviour(s)? is there a natural resolution to a specific problem/tension? if so, how is this described?
3.3. key design principles for human metrics in xr
beyond the decisions driving specific ontologies and/or taxonomies to describe json statement syntax within specific interactions/scenarios, there are also key design principles to consider when designing an overall interoperable data structure meant to support and analyse this syntax:
1. human metrics design should be scalable. when constructing data schemas and frameworks to support xapi, it is imperative that human-centred design is
table 1. xapi json syntax fields, with adult and child learner examples.
json syntax field | field description | adult learner example | child learner example
subject/actor | identifiers and/or descriptors for individual learners/groups | jane | jill
verb | action that the learner takes in the scenario | released | donned
object | object and/or person that the learner interacts with | the pressure valve | a coat
context | optional extension information to describe the context of the behaviour (e.g. location coordinates, physical barriers, emotional factors, etc.) | …when operating parameters [began to trend out of limits] | …when snow fell from the sky
results | optional extension information to describe results and/or resolution of the initial context, based upon learner action | …which resulted in pressure stabilization | …which resulted in health stats increasing
acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 3
employed to ensure that the resulting metrics include verbs, results, and contexts that may be shared across a wide variety of xr scenarios/experiences for the same user(s) over time. in this case, the human/individual is the constant as they “travel” through a variety of xr simulations/scenarios, and their behaviour in these different contexts must be described in a manner that lends itself to trend analysis and/or the detection of situational anomalies. additionally, as humans are not truly “compartmentalized” within a particular function/domain of their life, frameworks should be constructed in such a way as to provide longevity around the description of behaviour across the broad ecosystems an individual may traverse, in both the physical and virtual world (e.g. familial networks, social networks, academic networks, professional networks, etc.).
2. actions/verb selection should minimize bias and/or interpretation, wherever possible.
as the range of human behaviour is vast, and “appropriate” responses may vary across simulations/scenarios, every effort should be taken to avoid introducing verbs into an ontology/taxonomy and/or individual interaction that imply some sense of judgement and/or right and wrong responses. instead, the verb portion of the syntax should merely describe the user behaviour, utilizing the results and context extensions where appropriate to contextualize the appropriateness of the response in light of the conditions at a particular point in time in an interaction.
3. human metrics data should be interoperable and machine-readable. deviation from the json format and xapi-specific context should be avoided, as both are open-source standards that ensure that data recorded during immersive experiences/scenarios may be created, managed, and analysed broadly across existing and future platforms and technology, particularly those that pertain to measurement of human development, capabilities, and performance (e.g. student/workforce management systems, capability management and workforce planning systems, learning management systems, assessment management systems, performance management systems, etc.). additionally, metrics should be constructed in a manner that allows them to be analysed with machine learning techniques such as natural language processing and, in some cases, actioned further by artificial intelligence in subsequent interactions, as described below.
4. individual actors should be differentiated. appropriate and differentiated identifiers should be utilized in actor fields that, though anonymized where necessary, may be correlated back to a specific learner. this standardized practice ensures that interaction data is cumulative across a variety of immersive scenarios/simulations and that resulting data may be analysed for specific behavioural trends in a variety of contexts.
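one possible way to satisfy this differentiation principle, illustrative rather than prescribed by xapi, is to derive a stable pseudonymous token from each learner id, e.g. with a keyed hash; the key, home page url and token length below are assumptions for the sketch:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-securely"  # placeholder key, kept outside the data set

def pseudonymous_actor(learner_id):
    """derive a stable, anonymized actor for the xapi 'actor' field.
    the same learner always maps to the same token, so statements stay
    cumulative across scenarios, but the raw id never reaches the lrs.
    this hmac scheme is one possible approach, not mandated by xapi."""
    token = hmac.new(SECRET_KEY, learner_id.encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {
        "objectType": "Agent",
        "account": {"homePage": "https://example.org/learners", "name": token},
    }

a1 = pseudonymous_actor("jane.doe@example.org")
a2 = pseudonymous_actor("jane.doe@example.org")  # identical token, so trends stay traceable
```

whoever holds the key can re-identify a learner when legitimately required; without it, the lrs data remains pseudonymous.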
additionally, this convention provides an affordance whereby the user experience and/or nuances of a specific scenario’s context in future trials may be personalized to target specific growth/performance targets of the individual.
5. overall data sets, and the ontologies and/or taxonomies inherent within them, should be human-readable. to enable the proper analysis of human behaviour within specific contexts and the subsequent growth and performance of people over time, particularly as it pertains to informed practice for educational and human resources professionals responsible for guiding academic and career growth, it is imperative that virtual interaction descriptors are chosen that most closely approximate observable human behaviours in the physical world.
4. augmented reality learning experience model
one recent iteration of the xapi standard as it pertains to xr specifically lies in the augmented reality learning experience model, otherwise known as arlem [5]. in this formulation, as described by secretan, wild, & guest, performance = action + competence. “an action is the smallest possible step that can be taken by a user in a learning activity,” they say, and “competence becomes visible when learners perform” [4]. accordingly, arlem aims to solve many of the challenges faced by colleagues in industrial and/or operational environments, where a worker's ability to take the right action at the right time is paramount. traditional measurement protocols were previously oriented toward providing flat, two-dimensional “performance support” to these colleagues, in the form of viewing a schematic or standard operating procedure, watching a video of a colleague performing a process step, etc., and were centred around the passive “completion” of learning, which implied only that an individual had accessed content and, in some cases, progressed through to the end.
in some cases, additional measurement in the form of more formal assessment required learners to answer knowledge-based questions around the operational process. these assessments, often based on rote memorization, are incapable of measuring what actions a worker will take when faced with a particular situation. with the arlem and xapi standards, actions performed may now be prioritized over memorization of facts. 4.1. environmental triggers a hallmark of the arlem standard is its unique application of environmental “triggers” prompting human behaviour or experience; in this case, in the workplace specifically. by monitoring the environmental context and its operational parameters, arlem provides an association between situational context and appropriate supports provided to the end user to assist them in taking the appropriate action. 4.2. operational excellence key performance indicators utilizing the differentiators between environmental triggers and human activity, and further applying the results extension in the xapi syntax, allows for a unique opportunity to assess the effectiveness of augmented reality in the flow of work. in some instances, iot data associated with operational processes and/or equipment that is trending out of limits/acceptable parameters may “trigger” an augmented reality support layer, composed of appropriate schematics, videos, and even remote assistance from a geographically distant subject matter expert, overlaid on the real world and deployed to the correct individual in proximity, to prompt human intervention to normalize the operational process. the specific iot parameters may be recorded in the “context” extensions in xapi syntax so that the resulting human action can be understood in relation to operational kpis. furthermore, resolution of the iot data associated with the operational process, where available, may be recorded. 5.
xapi in virtual reality: an operational case study
in light of the success with xapi in the arlem standard, this paper’s authors hypothesized that utilization of the xapi base syntax, its associated extensions, and the concept of operational triggers would also lend itself well to the measurement of learning and other immersive experiences, such as virtual reality. to test this theory, an existing 360° video-based virtual reality interaction, centred on operational safety and risk in a heavy-manufacturing environment, and for which existing learning measurement data was available, was selected. in the selected vr scenario, an operator at a plant was given a job task to complete, and, in so doing, a set of dangers/risks in the environment were layered throughout the scenes (see figure 1). the scenario itself had been created to simulate the work environment and to increase plant operators’ ability to recognize and mitigate risks present in their working environment without overt prompts to do so. existing dashboards associated with the scenario were then examined to ascertain the likelihood of correlation with actual plant operator behaviour in the workplace. analysis of this information demonstrated that, though the immersive environment lent itself quite well to application of the xapi syntax, more traditional learning measurement methods, such as scoring of responses as right/wrong, had been employed, along with a scoring mechanism that subtracted points for each incorrect response (or lack thereof) within the simulation. elapsed time in the interaction was also reported (see figure 2). because the original measurement design did not approximate the type of behaviours observed, collected, and reported in the workplace, the effectiveness of the immersive learning experience, as compared to other traditional learning modalities, was difficult to determine across various learners and/or similar scenarios.
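a hedged sketch of what re-expressing such legacy right/wrong scoring records in xapi syntax could look like: the hazard and the risk condition move into context and result extensions instead of a point total. all field names, values, and extension iris here are invented for illustration; they are not the actual schema used in the case study.

```python
def transform_legacy(record):
    """re-express one legacy scoring row as an xapi-shaped statement.
    the legacy field names (user, action, hazard, risk_level, resolved)
    and the extension iris are assumptions for this sketch."""
    return {
        "actor": {"objectType": "Agent", "name": record["user"]},
        "verb": {
            "id": "https://example.org/verbs/" + record["action"],
            "display": {"en-US": record["action"]},
        },
        "object": {"id": "https://example.org/hazards/" + record["hazard"]},
        "context": {"extensions": {
            "https://example.org/ext/risk-level": record["risk_level"]}},
        "result": {"extensions": {
            "https://example.org/ext/hazard-resolved": record["resolved"]}},
    }

stmt = transform_legacy({"user": "op-017", "action": "mitigated",
                         "hazard": "unguarded-conveyor",
                         "risk_level": "medium", "resolved": True})
```

the resulting statements describe what the operator did under which condition, rather than how many points they scored.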
as expected, the nature of the existing data did not give a great deal of information that would allow inference as to the immersive scenario’s ability to shape and/or change human behaviour in a manner that one could reasonably assume would make workers safer and/or more productive in their actual jobs in the real world. 5.1. enhanced data set: vr safety/risk in order to enhance the measurement of human interaction within the simulation in a manner that could be more closely correlated to human behaviour in the workplace, we obtained a sample of the historical data and, following a careful analysis of the operational risks present in the design of the vr scenario, completed a data transform that allowed the xapi syntax to be expressed in a manner that identified the environmental context, the subsequent user action/behaviour, and the results of this behaviour in relation to the original “triggering” context itself (see figure 3). xapi-formatted json statements were then created, using existing available data fields, as well as additional data that was available but not previously recorded (see figure 4). these statements were then loaded into a learning record store (lrs), and dashboards were created to demonstrate the relationships between environment and corresponding user behaviour, as well as relationships with existing competency frameworks related to health and safety designed by the u.s. department of labor [6] (see figure 5). as compared to the original, more traditional measurement design, these dashboards contextualized human behaviour in conditions of varying risk in a manner similar to the way health and safety behaviours are measured in real operational environments, and gave a clearer indication of the human behaviour that might be demonstrated in these operational environments, based upon actions measured and tracked in the immersive simulation. 5.2.
key findings the results of this case study, as well as reference implementations of the arlem standard, as previously described by wild et al., demonstrate that it is possible to map additional information to existing xr datasets to infer the “shaping of human behaviour”, as well as to potentially correlate actions in simulated and/or augmented environments with those observed and recorded in the real world. furthermore, the original “triggering condition/context” metaphor suggested in the arlem standard was found to extend more broadly to all experiential-based learning experiences and to work in harmony with the xapi data syntax associated with learning experiences. to enable effective measurement across scenarios and users, we created the following taxonomies for use in the xapi syntax, designed to be utilized broadly across xr scenarios measuring hazard identification and mitigation of operational risk:
figure 1. selected vr scenario, designed to measure the learner’s ability to identify and mitigate operational risk in a simulated environment.
figure 2. measurement of non-xapi-compliant learner behaviour in a vr scenario.
1. stimulus (context) taxonomy to describe starting condition(s) within a scenario.
2. trigger (context) taxonomy to describe expected levels of perceived risk, competing human priorities (dilemmas), and/or intentional environmental distractor(s).
3. response (results) taxonomy to describe user roles and levels within the scenario, ending condition(s), correlation to open competency frameworks, haptics, eye gaze, and biometrics.
while these taxonomies were created to facilitate historical data transformation from an existing vr scenario into an xapi-compliant dataset, it is acknowledged here that actual ontologies are more desirable.
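encoding such a taxonomy as a closed vocabulary keeps statements comparable across scenarios and users. a minimal python sketch of the trigger (context) taxonomy might look like the following; the member names and the extension iri are invented placeholders, since the paper does not publish the actual value lists:

```python
from enum import Enum

class Trigger(Enum):
    """sketch of the trigger (context) taxonomy: perceived-risk levels,
    competing priorities (dilemmas) and environmental distractors.
    names and values here are illustrative, not the authors' vocabulary."""
    RISK_LOW = "risk-low"
    RISK_MEDIUM = "risk-medium"
    RISK_HIGH = "risk-high"
    DILEMMA = "competing-priority"
    DISTRACTOR = "environmental-distractor"

def context_extension(trigger):
    """wrap a taxonomy value in an xapi context extension
    (the extension iri is a hypothetical placeholder)."""
    return {"https://example.org/ext/trigger": trigger.value}

ext = context_extension(Trigger.RISK_MEDIUM)
```

because the vocabulary is closed, a malformed value fails fast at statement-construction time instead of polluting the lrs.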
in order to promote interoperability and the broad reach of xapi as a measurement standard in xr learning scenarios, it is recommended that representative ontologies and/or taxonomies be developed that can inform widespread historical data transformation in the xr space, as well as serve as guides for the measurement designs of additional immersive scenarios designed in the future. these advancements, we believe, will provide a powerful measurement tool able to effectively measure, describe, and perhaps even predict human behaviour in context, in a manner that speaks not only to the knowledge that individuals possess, but also to their performed skill/competency level across a variety of contexts. this has major implications for the ways in which we measure and assess specific and generalized learning in individuals over time. 6. designing augmented reality learning experiences for early childhood education: opportunities and challenges in a literature review of vr, ar, and mr in k-12 education, maas and hughes [7] noted that studies found improved attitudes, motivation, engagement, performance/learning outcomes, and 21st-century skills and competencies with the use of these xr technologies. but they also emphasized that it was challenging to find studies on xr in k-12 education in comparison to such research in higher education. the reasons they theorized such research was difficult to find were the rapid development of the technology, which means that it has only recently become available; the lack of xr content for this age group; and a lack of resources, making xr unevenly available within countries and around the globe. with early childhood education, these challenges are even more daunting. when the current paper was first proposed, the intention was to discuss research on an early childhood ar application built on a platform compatible with xapi.
that research was to be informed by work indicating that bodily engagement enhances learning outcomes and improves executive function. research on children’s learning by eng et al., using a modified vr game combined with a cognitive task, suggests that the combination of “exergames” with a learning task not only improves performance on that task but can actually improve cognitive ability and executive function [2], [3]. while eng et al. studied children using vr, along with measurements in fmri as well as teacher assessments, in the case of the proposed new research, ar was to be used. augmented reality is much more accessible than virtual reality for early childhood and k-12 education, which increases the opportunities to study it. for example, tablets were already in use at the lab school where the study is expected to take place, and more ar platforms and experiences are becoming compatible with xapi. thus, we still believe there is great promise for the study in development and for other such studies. in addition, ar-compatible smartphones and tablets are also quite common in households in 2022, which means that many children would be able to participate in ar learning experiences from home. ironically, such ar learning experiences could greatly benefit the many children who have not been able to attend school in person in the time of covid and who have missed precious learning time at a key moment in their young lives. planning the study nevertheless proved challenging on many fronts. the challenges included ensuring that the design principle around differentiation of individual actors be followed. in practical terms, this would involve supplying each child with a device assigned to and associated with that particular child, avoiding reliance on young children logging into the device with the appropriate credentials and alleviating pressure on teachers in the classroom context.
for young children who are just beginning to learn to read, audio prompts may need to be incorporated into the experience, or each student may need to be guided by the teacher, making the process resource-intensive. the continuing effects of the pandemic also had a significant impact on the project, because the early childhood education lab where the research was to be conducted experienced repeated closures and disruptions. as a result, the study has not yet formally begun.
figure 3. measurement of learner behaviour in the same vr scenario, utilizing the xapi standard, in conditions of medium risk (context).
figure 4. measurement of predicted results based upon learner behaviour in the same vr scenario, mirroring operational consequences in the workplace.
figure 5. number of user actions in the vr scenario showing evidence of competency in the u.s. department of labor standard: “assessing material, equipment, and fixtures for hazards”.
7. conclusions and next steps
some of the early childhood learning outcomes that we would like to measure, and that are appropriate for structuring with xapi, include self-concept skills such as proprioception and awareness of others in space, and geometry and spatial-sense skills such as recognition of 3d shapes and their persistent form when rotated. learning appropriate behaviours for self-care and social responses to others provides situational triggers that are also targets for studies of the use of xapi with young children. in general, future steps include continuing to explore specific elements of xapi profiles appropriate for xr with regard to experiential learning, considering the relationship between knowledge-based concepts and experiential skills, and examining xapi profiles for xr across the talent pipeline (from pre-k to adult) in order to hypothesize about implications for the future of education.
acknowledgement
special thanks to colleagues at warp vr, namely guido helmerhorst and menno van der sman, for contributing scenarios, historical data subsets, and transformation support. this effort would not have been possible without your spirit of innovation and commitment to open-source initiatives and practices.
references
[1] m. johnson-glenberg, immersive vr and education: embodied design principles that include gesture and hand controls, frontiers in robotics and ai, july 2018. doi: 10.3389/frobt.2018.00081
[2] c. m. eng, m. pocsai, f. a. fishburn, d. m. calkosz, e. d. thiessen, a. v. fisher, adaptations of executive function and prefrontal cortex connectivity following exergame play in 4- to 5-year-old children.
[3] c. m. eng, d. m. calkosz, s. y. yang, n. c. williams, e. d. thiessen, a. v. fisher, doctoral colloquium: enhancing brain plasticity and cognition utilizing immersive technology and virtual reality contexts for gameplay, 6th international conference of the immersive learning research network (ilrn), san luis obispo, ca, usa, 21-25 june 2020, pp. 395-398. doi: 10.23919/ilrn47897.2020.9155120
[4] j. secretan, f. wild, w. guest, learning analytics in augmented reality: blueprint for an ar/xapi framework, tale 2019, yogyakarta, indonesia, 10-13 december 2019, pp. 1-6. doi: 10.1109/tale48000.2019.9225931
[5] ieee standard for augmented reality learning experience model, ieee std 1589-2020, april 2020, pp. 1-48. doi: 10.1109/ieeestd.2020.9069498
[6] credential finder. online [accessed 15 september 2022] https://credentialfinder.org/competencyframework/1928/engineering_competency_model_u_s__department_of_labor_(dol)
[7] m. j. maas, j. m. hughes, virtual, augmented, and mixed reality in k-12 education: a review of the literature, technology, pedagogy and education 29 (6) 2020.
doi: 10.1080/1475939x.2020.1737210
a strategy to control industrial plants in the spirit of industry 4.0 tested on a fluidic system
acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-7
acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1
laura fabbiano1, paolo oresta1, rosario morello2, gaetano vacca1
1 dmmm, politecnico di bari university, bari, italy
2 diies, university mediterranea of reggio calabria, italy
section: research paper
keywords: industry 4.0; predictive maintenance; prognostic approach; plant operation simulator; fluidic thrust plant
citation: laura fabbiano, paolo oresta, rosario morello, gaetano vacca, a strategy to control industrial plants in the spirit of industry 4.0 tested on a fluidic system, acta imeko, vol. 11, no. 2, article 31, june 2022, identifier: imeko-acta-11 (2022)-02-31
section editor: francesco lamonaca, university of calabria, italy
received november 9, 2021; in final form february 21, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: laura fabbiano, e-mail: laura.fabbiano@poliba.it
1. introduction
maintenance costs are a considerable share of the total operating costs in industrial production.
moreover, several extra costs come from the profit loss due to undesired failures of the plant. both can be reduced by failure prediction that preserves the devices through just-in-time maintenance. these two classes of problems can be addressed by the implementation of preventive maintenance in the perspective of the smart industry, [1]-[3], in which all the elements of a factory work in a completely collaborative and integrated way and deal in real time with timely changes of the workflow. concerning cost reduction through just-in-time failure detection, the goal of the smart industry is to implement all the fundamental measurement procedures for continuous real-time monitoring (real-time condition monitoring) and coordinated monitoring of all plant elements, [4]-[6]. the keywords of the fourth industrial revolution are, therefore, "preventive maintenance", "intelligent and real-time orchestration" and "synchronization of physical and digital processes". the literature offers crucial new insights into the continuous screening of devices that makes possible the detection of incipient failures, [7]-[9]. among others, the prognostic approach is mandatory to obtain an accurate coding of incipient machine failures, [10]-[12]. the detection of degradation and damage of a plant component is the goal of the predictive approach. the condition-based maintenance (cbm) specifications [13], [14] inspired by the prognostic approach have been applied by the authors to the case of a simple fluidic thrust system by using a mathematical approach. it consists of a pump-motor block with an inverter and a control valve, which represents the load acting on the p-m block and is attributable to the network supplied by it.
the approach uses a numerical simulator to manage and analyse in real time all the characteristic parameters (sensed or mathematically predicted) of each system component, in order to detect and give advice concerning likely incipient anomalies of the monitored components. abstract the goal of the paper is to propose a strategy for automating the control of a wide spectrum of industrial process plants in the spirit of industry 4.0. the strategy is based on the creation of a virtual simulator of the operation of the plants involved in the process. through the digitization of the operational data sheets of the various components, the simulator can provide the reference values of the process control parameters to be compared with their actual values, in order to decide on the direct inspection and/or operational intervention on critical components before a possible failure. as an example, a simple fluidic thrust plant has been considered, for which a mathematical model (simulator) of its optimal operating conditions has been formulated by using the digitalized real operational data sheets of its components. the simple thrust system considered consists of a centrifugal pump driven by a three-phase electric motor, an inverter to regulate the rotation of the motor, and a proportional valve that simulates the external load acting on the pump. as a result, the operational data sheets and principal characteristics of the pump have been reproduced by means of the simulator developed here, showing very good agreement. the virtual simulator is based on the digital reconstruction of the data sheets of all the more sensitive components of the system, to allow the digital prediction of the optimal working point of the system. the data provided by the model are compared with the real-time instantaneous acquisitions of the corresponding main operating parameters.
any discrepancy between the virtual and actual values of a monitored parameter identifies which of the components of the system may be in critical condition, thus reducing the start-and-stop time responsible for maintenance and service-interruption costs. the proposed procedure, based on the digitization of the technical data sheets of the plant devices, can easily be extended to most of the systems operating in industrial processes, thus allowing the entire system to be controlled in real time and dangerous failure scenarios to be detected. 2. plant specifications the system simulated here is a simple fluidic thrust plant and consists of a centrifugal pump (calpeda nm4 25/12a/a, [15]) driven by an electric asynchronous motor whose rotation speed is regulated by an inverter, and a control valve simulating the supplied network. the operational point of such a system is determined by the coupling of the pump with the valve. the technical and geometric characteristics of each component are known. those of the pump are reported here:
𝑛 = 1450 rpm, reference rotational speed
𝑃 = 0.25 kw, absorbed power at 1450 rpm
𝑄 = 1 ÷ 6 m³/h, flow rate working range
𝐻𝑢 = 6.1 ÷ 3.3 m, head range
𝐷2 = 131.5 mm, external diameter of the impeller
𝛽2 = 157.5°, blade exit angle
𝑙2 = 4 mm, blade height
𝑧 = 7, number of blades
𝜁 = 1 − 𝑧 𝑠 / (π 𝐷2) = 0.95, blade bulk coefficient
a user interface has been set up to manage the procedure of analysis and acquisition of data. specifically, that interface has been created on the labview platform and is divided into several sections; each section manages and analyses a different part of the plant, figure 1. the control section contains the commands to change the opening degree of the valve and the inverter frequency. the target section enforces the rotation speed of the motor.
once the frequency is set from the control panel, it automatically varies until the system reaches a stable operating point where the rotation speed of the pump coincides with the value imposed in the target panel. the transient panel shows the indicators for instant monitoring of the plant. in particular, the quantities acquired by the transducers (rotation speed, mechanical torque, flow rate, head) and some derived quantities, such as the instantaneous absorbed power, are represented. the results panel collects in graphic form (three different plots) the trends of the pump head (characteristic curve), efficiency, and absorbed power as functions of the flow rate. 3. virtual simulator the mathematical model of the plant consists of: 1. a valve model; 2. a pump-motor block model. most of the geometric and operating data have been inferred from the technical specifications of the components. 3.1. valve model the control valve (here a proportional solenoid valve) allows the modulation of the circuit external characteristic (valve characteristic), [16]. that is, by acting on the closing/opening control device, the relationship between the pressure drop and the flow rate through the valve changes, so determining a new operating point as the intersection with the internal characteristic (pump operating curve). from bernoulli's equation, the relationship at the basis of the valve operation comes out as:
𝑄 = [𝑐𝑣max ℎ + 𝑐𝑣min (1 − ℎ)] √(𝐻𝑣 / 𝐻𝑣*) , (1)
where:
• 0 < ℎ < 1 is the degree of opening of the valve (or shutter stroke);
• 𝑐𝑣max = 0.005 m³/h and 𝑐𝑣min = 10⁻⁶ m³/h are the maximum and minimum flow coefficients (or efflux coefficients), estimated from the technical data sheet of the valve;
• 𝐻𝑣 is the static pressure drop through the valve (m);
• 𝐻𝑣* is the 1 psi static pressure drop through the valve (m); it represents the linear operational limit of the valve itself.
from the previous relationship we get:
𝐻𝑣 = 𝑄² 𝐻𝑣* / [𝑐𝑣max ℎ + 𝑐𝑣min (1 − ℎ)]² . (2)
if a linear valve is considered, its characteristic coefficient 𝐾𝑣(ℎ) reads as:
𝐾𝑣(ℎ) = 𝐻𝑣* / [𝑐𝑣max ℎ + 𝑐𝑣min (1 − ℎ)]² , (3)
making it possible to rewrite the previous relation as:
𝐻𝑣 = 𝑄² 𝐾𝑣(ℎ) . (4)
the efflux coefficient of the valve is a monotonically increasing function of the shutter stroke, 𝑐𝑣(ℎ), and it can be expressed in dimensionless form if the following quantities are considered:
• the relative efflux coefficient: 𝜑 = 𝑐𝑣 / 𝑐𝑣max
• the intrinsic rangeability, the ratio between the maximum and minimum values of the efflux coefficient: 𝑟 = 𝑐𝑣max / 𝑐𝑣min
figure 1. user interface on the labview platform.
examples of valve characteristics expressed in terms of ℎ are shown in figure 2 for linear, equal-percentage, fast-opening and quadratic valves. for the linear characteristic, the following relation holds:
𝜑(ℎ) = ℎ + (1/𝑟)(1 − ℎ) . (5)
in the plant operating model (simulator), the characteristic coefficient of the valve can be rewritten, from (4), as:
𝐾𝑣(ℎ) = 𝐻𝑣* / 𝑐𝑣(ℎ)² = 𝐻𝑣* / (𝑐𝑣max² 𝜑(ℎ)²) . (6)
this relation for 𝐾𝑣(ℎ) keeps holding even for other kinds of valves, if the right formula for 𝜑(ℎ) is introduced into it. 3.2. motor-pump block the 'pump-motor' block, figure 3, represents the virtual simulator of a centrifugal pump and its electric control motor. this block has 2 inputs and 4 outputs. the 2 inputs are, respectively:
1. the coefficient of the valve, whose operating characteristic is enclosed in the 'valve' model and is provided by it; its regulation will result in the variation of the rotation speed of the pump, 𝑛;
2. the magnetic field frequency, a parameter recalled by the control unit, which allows acting on it in feedback, in order to restore the rotation speed to a constant target value during the process of digital reconstruction of the characteristic curves of the pump.
The four outputs represent the virtual sensors of the P-M block, detecting:
1. the flow rate $Q$ (m³/h);
2. the head $H_u$ provided by the pump (m);
3. the number of engine revolutions $n$ (rpm);
4. the drive torque $C_m$ (N m).

They are recalled by the supervisor function (see Figure 1) and made visible in the transient panel, where the instantaneous monitoring indicators of the system are reported; in particular, the quantities acquired by the transducers (number of revolutions, mechanical torque, flow rate, head) and some derived quantities such as the instantaneous absorbed power are shown. Some geometric parameters of the model are directly available from the technical characteristics of the pump; the other necessary parameters are inferred from the characteristic curves and from the power absorbed by the pump at a fixed revolution speed. The internal characteristic of the pump at $n_0 = 1450$ rpm can be expressed in polynomial form as

$$H_u = -0.0893\, Q^2 + 0.0706\, Q + 6.104 , \quad (7)$$

obtained by least-squares polynomial regression of the pump data reported in Table 1 and shown in Figure 4. It is possible to generalize this relation and make it valid for any number of revolutions:

$$H_u = a\, Q^2 + b\, \frac{n}{n_0}\, Q + c \left(\frac{n}{n_0}\right)^2 , \quad (8)$$

where $a$, $b$ and $c$ are the coefficients of the specific previous relation. Substituting the expression of the valve characteristic (4) into (8), in the steady-state operating condition of the system we obtain

$$\left[a - K_v(h)\right] Q^2 + b\, \frac{n}{n_0}\, Q + c \left(\frac{n}{n_0}\right)^2 = 0 . \quad (9)$$

By then expressing $Q$ as a function of the geometrical and operating parameters of the pump as

$$Q = K_Q\, \varphi\, n , \quad (10)$$

where $K_Q = \eta_v\, \zeta\, \pi^2 l_2 D_2^2$, and substituting it into the equation that regulates the operation of the plant (9), we get the new relation

$$K_{\phi 0}\, \phi^2 + K_{\phi 1}\, \phi + K_{\phi 2} = 0 , \quad (11)$$

where $K_{\phi 0} = K_Q^2\, n_0^2\, [a - K_v(h)]$, $K_{\phi 1} = b\, K_Q\, n_0$, $K_{\phi 2} = c$, and $\phi$ is the flow parameter.
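The valve relations (4)-(6) above lend themselves to a direct numerical sketch. The following minimal Python fragment assumes the datasheet efflux coefficients quoted in Section 3.1 and takes $H_v^* \approx 0.703$ m (the water-column equivalent of 1 psi — an assumed conversion, not stated in the paper):

```python
# Sketch of the valve model, Eqs. (4)-(6); constants from Section 3.1,
# HV_STAR is an assumed 1 psi water-column equivalent.
CV_MAX = 0.005    # m^3/h, maximum efflux coefficient
CV_MIN = 1e-6     # m^3/h, minimum efflux coefficient
HV_STAR = 0.703   # m, assumed static drop corresponding to 1 psi

def phi_linear(h, r=CV_MAX / CV_MIN):
    """Relative efflux coefficient of a linear valve, Eq. (5)."""
    return h + (1.0 - h) / r

def k_v(h):
    """Characteristic coefficient K_v(h) of the valve, Eq. (6)."""
    return HV_STAR / (CV_MAX * phi_linear(h)) ** 2

def head_drop(q, h):
    """Static pressure drop H_v = Q^2 K_v(h), Eq. (4); q in m^3/h."""
    return q * q * k_v(h)
```

Other valve designs (equal-percentage, fast-opening, quadratic) would only swap `phi_linear` for the corresponding $\varphi(h)$, exactly as noted after Eq. (6).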
The equality of the head provided by the pump with the pressure drop introduced by the control valve is not sufficient to describe the operation of the experimental system. We also need the power conservation law, for which the power supplied by the motor equals the power absorbed by the pump. The latter is expressed by

$$P_{ap} = \frac{\gamma\, Q\, H_u}{\eta_y\, \eta_v\, \eta_m} = \frac{\rho\, Q\, L_i}{\eta_v\, \eta_m} = \frac{\rho\, K_Q\, \phi\, \psi\, \pi^2 D_2^2\, n^3}{2\, \eta_v\, \eta_m} , \quad (12)$$

in which $\psi = 2(1 - \phi \cot\beta_2) - (2\pi \sin\beta_2)/z$, and $\eta_v$ and $\eta_m$ are the volumetric and mechanical efficiencies, to be discussed in the results section, where the methods to evaluate their values during the actual operation of the plant for its monitoring are described; both can serve as indices of possible anomalies. With reference to the expression of the power supplied by the motor, $P_m$, the mechanical torque $C_m$ of the electric motor can be expressed as a function of the slip $s$ according to the Kloss relation

$$C_m = \frac{2\, C_{m\max}\, s_{\max}\, s}{s_{\max}^2 + s^2} , \quad (13)$$

in which $C_{m\max}$ is the maximum torque, available at the slip value $s = s_{\max}$; from the relative data sheet, $C_{m\max} = 50$ N m and $s_{\max} = 0.2$.

Figure 2. Valve characteristic curves in terms of the flow-rate coefficient $\varphi$ as a function of the opening position $h$; the curves show the linear, parabolic, equal-percentage and fast-opening designs.
Figure 3. Inputs and outputs of the pump-motor block.
Figure 4. Pump performance curve at $n = 1450$ rpm: comparison between simulation and actual operational points.

Table 1. Pump data.
Q (m³/h): 1,  1.2,  1.5,  1.8,  2.4,  3,    3.6,  4.2,  4.8,  5.4,  6
H (m):    6,  6.05, 6,    5.9,  5.8,  5.5,  5.2,  4.8,  4.4,  3.9,  3.3
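As a cross-check of Eq. (7), the Table 1 data can be refit directly. A minimal sketch using NumPy's least-squares `polyfit` (NumPy is an assumption; the paper does not state its tooling), with the reference speed $n_0 = 1450$ rpm taken from Figure 4:

```python
import numpy as np

# Pump data digitized from Table 1 (head H vs. flow rate Q at speed n0).
Q = np.array([1, 1.2, 1.5, 1.8, 2.4, 3, 3.6, 4.2, 4.8, 5.4, 6.0])   # m^3/h
H = np.array([6, 6.05, 6, 5.9, 5.8, 5.5, 5.2, 4.8, 4.4, 3.9, 3.3])  # m

# Least-squares parabola H_u = a Q^2 + b Q + c, cf. Eq. (7).
a, b, c = np.polyfit(Q, H, 2)

def head(q, n, n0=1450.0):
    """Similarity-scaled internal characteristic, Eq. (8)."""
    return a * q**2 + b * (n / n0) * q + c * (n / n0) ** 2
```

The fitted coefficients land close to the triplet quoted in Eq. (7), and scaling by $n/n_0$ reproduces the family of curves used later by the simulator.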
The slip represents the relative difference between the speed of rotation of the rotor (i.e. of the shaft) and the speed of rotation of the magnetic field, $n_s$:

$$s = \frac{n_s - n}{n_s} , \quad (14)$$

so that

$$n = \frac{60\, f}{p}\,(1 - s) , \quad (15)$$

with $p = 2$ motor pole pairs. The mechanical characteristic of the motor, $C_m$, is represented as a function of the shaft speed in Figure 5 for three different frequency values. The characteristic shows that the torque reaches its maximum value at $s = s_{\max}$ and then falls steeply as the slip approaches zero, i.e. as the motor speed approaches the synchronous speed $n_s$. The efficiency $\eta$ of the three-phase asynchronous motor can be calculated with the well-known formula

$$\eta = \frac{P_r}{P_a} , \quad (16)$$

where $P_r$ is the mechanical power supplied to the rotor and $P_a$ is the electrical power provided to the stator. Equating the power absorbed by the pump to the power supplied by the motor, we obtain

$$\frac{\rho\, K_Q\, \phi\, \psi\, \pi^2 D_2^2\, n^3}{2\, \eta_v\, \eta_m} = \frac{2\, C_{m\max}\, s_{\max}\, s}{s_{\max}^2 + s^2}\; 2\pi\, n \quad (17)$$

and, plugging into it the expression (14) of the slip with $n_s = 60 f / p$, we get

$$\rho K_Q \phi \psi \pi^2 D_2^2 \left(\frac{p}{60 f}\right)^2 n^4 - 2\,\rho K_Q \phi \psi \pi^2 D_2^2\, \frac{p}{60 f}\, n^3 + \rho K_Q \phi \psi \pi^2 D_2^2 \left(1 + s_{\max}^2\right) n^2 + 8\pi\, C_{m\max} s_{\max} \eta_v \eta_m\, \frac{p}{60 f}\, n - 8\pi\, C_{m\max} s_{\max} \eta_v \eta_m = 0 . \quad (18)$$

The previous equation can be rewritten in the following easier-to-read form:

$$K_{n0}\, n^4 + K_{n1}\, n^3 + K_{n2}\, n^2 + K_{n3}\, n + K_{n4} = 0 , \quad (19)$$

with obvious meaning of the symbols. This is the main equation of the plant operating model. It can be solved iteratively, for example by the Newton-Raphson method, to provide the revolution speed of the system for a fixed working condition; in the present case, the iterations were stopped when the relative error on $n$ was less than $10^{-5}$. The described set of model equations of the thrust-system components constitutes the simulator of its working operation.

4. Results

The volumetric efficiency of the pump changes according to the operating regime. It is negligible for null flow rate, i.e. with the control valve fully closed.
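The motor model of Eqs. (13)-(15) and the iterative solution of Eq. (19) described above can be sketched as follows. The motor constants are the datasheet values quoted in the text; the generic polynomial solver is a stand-in for the paper's unspecified implementation, with the same relative-error stopping rule:

```python
def slip(n, f, p=2):
    """Slip of the asynchronous motor, Eqs. (14)-(15);
    n in rpm, supply frequency f in Hz, p pole pairs (n_s = 60 f / p)."""
    n_s = 60.0 * f / p
    return (n_s - n) / n_s

def motor_torque(n, f, c_max=50.0, s_max=0.2, p=2):
    """Kloss-type mechanical characteristic, Eq. (13); defaults are the
    datasheet values C_mmax = 50 N m and s_max = 0.2."""
    s = slip(n, f, p)
    return 2.0 * c_max * s_max * s / (s_max**2 + s**2)

def newton_root(coeffs, n, tol=1e-7, max_iter=100):
    """Newton-Raphson root of K_n0 n^4 + ... + K_n4 = 0, Eq. (19).
    coeffs = (K_n0, ..., K_n4); stops when the relative update on n
    falls below tol, mirroring the paper's stopping criterion."""
    for _ in range(max_iter):
        f_val, d_val = 0.0, 0.0
        for c in coeffs:                 # Horner scheme evaluating the
            d_val = d_val * n + f_val    # polynomial and its derivative
            f_val = f_val * n + c        # in a single pass
        step = f_val / d_val
        n -= step
        if abs(step) <= tol * abs(n):
            return n
    raise RuntimeError("Newton iteration did not converge")
```

At 50 Hz the torque peaks at $n = 1200$ rpm (where $s = s_{\max} = 0.2$) and vanishes at the synchronous speed of 1500 rpm, as in Figure 5.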
As the flow rate increases, the volumetric efficiency increases up to a plateau value that remains almost constant over a wide range of the pump flow rate in steady-state operation. The data-sheet values are shown in Figure 6. For high flow rates, the following procedure aims to calculate the values of the mechanical efficiency $\eta_m$ and of the asymptotic volumetric efficiency $\eta_{v\infty}$. The pressure parameter $\psi$ can be computed in the following two ways:

$$\psi = \frac{2\, g\, H_u}{u_2^2\, \eta_p}\, \eta_m\, \eta_v \quad (20)$$

or

$$\psi = 2\left(1 - \phi \cot\beta_2\right) - \frac{2\pi \sin\beta_2}{z} . \quad (21)$$

Equating the above equations and rewriting the flow parameter as a function of the flow rate, $\phi = Q/(K_Q n)$, we get

$$\frac{2\, g\, H_u}{u_2^2\, \eta_p} = -\frac{2 \cot\beta_2}{\eta_m\, \eta_v\, K_Q\, n}\, Q + \frac{2}{\eta_m\, \eta_v}\left(1 - \frac{\pi \sin\beta_2}{z}\right) , \quad (22)$$

with $K_Q = \eta_v\, \zeta\, \pi^2 l_2 D_2^2$ as in (10). From the experimental data of the pump (Calpeda NM4 25/12A/A) it is observed that, for flow rates greater than a threshold value (0.001 m³/s), the term $2 g H_u/(u_2^2 \eta_p)$ is a linear function of $Q$ (Figure 7). In the linearity range, the coefficients of the previous equation can be calculated by best-fitting the discrete data-sheet values (Figure 8), and therefore it is possible to estimate the unknown values of the mechanical efficiency $\eta_m$ and of the volumetric efficiency $\eta_v$. Specifically, the mechanical efficiency is $\eta_m = 0.955$ and is assumed constant over the whole operating regime, whereas the volumetric efficiency obtained from this procedure represents the asymptotic limit valid only in the range of high flow rates, $\eta_{v\infty} = 0.719$. Once the value of the mechanical efficiency is known, from (22) $\eta_v$ can be made explicit as a function of the flow rate only, $\eta_v = \eta_v(Q)$.

Figure 5. Characteristics of the asynchronous motor in terms of torque as a function of revolution speed, for different frequency values.
Figure 6. Pump volumetric efficiency experimental data.
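Combining the constants estimated here with the exponential saturation form fitted in Eq. (23) ($\eta_{v\infty} = 0.719$, $\tau = 3.83\cdot10^{-4}$ m³/s), the volumetric-efficiency model can be sketched in a few lines:

```python
import math

ETA_V_INF = 0.719   # asymptotic volumetric efficiency (high flow rates)
TAU = 3.83e-4       # m^3/s, flow scale of the best fit, Eq. (23)

def eta_v(q):
    """Volumetric-efficiency model of Eq. (23); q in m^3/s.
    Vanishes at zero flow (valve fully closed) and saturates at
    ETA_V_INF for large flow rates."""
    return ETA_V_INF * (1.0 - math.exp(-q / TAU))
```

The model reproduces the two limits stated in the text: zero efficiency at null flow rate and the plateau value 0.719 well above the 0.001 m³/s threshold.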
The relationship $\eta_v = \eta_v(Q)$ is well described by

$$\eta_v = \eta_{v\infty} \left(1 - \mathrm{e}^{-Q/\tau}\right) , \quad (23)$$

where the value $\tau = 3.83 \cdot 10^{-4}\ \mathrm{m^3/s}$ is obtained by best-fitting the available operating data of the pump with $\eta_{v\infty} = 0.719$. Figure 9 shows the good agreement between the theoretical prediction and the pump data over the whole range of flow rates. The simulator function uses this model of the volumetric efficiency together with the estimate of the mechanical efficiency, and it provides good agreement of the control parameters with the characteristic data of the pump in terms of absorbed power $P_a$, efficiency $\eta_p$ and pressure head $H_u$. Through the simulator described in the previous section it is possible to digitally reconstruct the pump characteristic curves (Figure 10, Figure 11 and Figure 12). Using the digitized technical and operating data of the system components and setting a value of the inverter frequency regulating the number of revolutions of the pump-motor block, the speed can be determined iteratively from (19); the outputs of the pump-motor block are then calculated from its model, i.e. the flow rate, the head, the torque $C_m$ and the pump efficiency $\eta_p$, which constitute the virtual values to be compared with the physical ones acquired by the sensors prepared for that purpose in the system (and with the calculated ones, such as the mechanical and volumetric efficiencies). The entire performance curves of the pump can thus be reconstructed and digitized by simply varying the opening degree of the valve for each desired discrete value of the frequency (revolution speed of the pump) and reiterating the use of the simulator equations as reported in the previous sections.

Figure 7. Experimental values of the left-hand-side term in (22) as a function of the flow rate.
Figure 8. Linear regression of experimental data for the left-hand-side term of (22) as a function of the flow rate.
Figure 9.
Least-squares regression of the $\eta_v$ experimental data.
Figure 10. Expected pump efficiency from the simulator compared with actual operational points.

Figure 13 shows the pump performance curves evaluated by the simulator for three revolution-speed values, as an example of its application. The operating-parameter values predicted by the models, including the mechanical and volumetric efficiencies, are the reference targets for anomaly detection when compared with the corresponding acquired values. A significant discrepancy between the predicted and sensed values of a characteristic quantity can indicate an incipient criticality of the component to which it refers. Under these conditions it is possible to report the irregularity of the operation and to act on the system with an active control. In addition, it is even possible to trace the origin of the disagreement between the real and expected values of the mechanical or volumetric efficiency: such a discrepancy may point, for example, to a critical condition of the bearings or of the seals of the pump-motor block.

5. Discussion and conclusion

The intent of the authors was to propose a strategy for the automation of the monitoring of wide-ranging industrial processes, in the spirit of the 'smart industry', in order to reduce the risks of sudden interruption of the activities caused by possible criticalities of the most delicate components, to reduce the time needed to identify the criticality, and to reduce the maintenance time necessary to restore normal operating conditions, thus reducing the associated costs.
The strategy is based on the creation of a virtual simulator of the operation of the various plants involved in the process which, through the digitization of the data sheets of the various components, can provide the reference values of the process control parameters. These values are then compared with the values acquired by the measuring chains prepared for the purpose, allowing the operator to report any suspected anomaly and to intervene quickly to restore optimal operating conditions through targeted maintenance. To this end, and by way of example, we have proposed the formulation of a simulator of a simple fluidic thrust system, consisting of a pump-motor block with inverter and a regulation valve, which represents the load acting on the P-M block and is attributable to the network it supplies. The simulator makes it possible to manage and analyse in real time all the characteristic parameters, acquired and/or calculated, of each of the monitored system components during normal operation, in order to determine whether conditions of possible incipient anomalies exist in the components under observation. Comparing, through real-time acquisitions, the instantaneous values of the characteristic operating parameters with those provided by the simulator, which correspond to the optimal operating conditions of the system, makes it possible to identify which characteristic parameter of the various monitored components deviates most from its optimal value, thus revealing a possible critical condition of the component to which it refers. In this way the operator can decide to intervene on the component in time, minimizing the intervention times and therefore the maintenance and restoration costs of normal operation of the system.
It is the opinion of the authors that the identified procedure, based on the digitization of the technical data sheets of the plant components, can be extended to most of the operating plants of an industrial process, thus allowing the entire process to be controlled in real time.

Figure 11. Expected pump manometric head from the simulator compared with actual operational points.
Figure 12. Expected pump absorbed power from the simulator compared with actual operational points.
Figure 13. Expected pump performance curves from the simulator for three revolution speeds (red lines) and four $\eta_p$ iso-lines.

References
[1] R. A. Luchian, G. Stamatescu, I. Stamatescu, I. Fagarasan, D. Popescu, IIoT decentralized system monitoring for smart industry applications, 2021 29th Mediterranean Conference on Control and Automation (MED), 2021, pp. 1161-1166. DOI: 10.1109/MED51440.2021.9480341
[2] L. Fabbiano, G. Vacca, G. Dinardo, Smart water grid: a smart methodology to detect leaks in water distribution networks, Measurement, vol. 151, (2020). DOI: 10.1016/j.measurement.2019.107260
[3] L. Ardito, A. Messeni Petruzzelli, U. Panniello, A. Garavelli, Towards Industry 4.0: mapping digital technologies for supply chain management-marketing integration, Business Process Management Journal, vol. 25, (2019), no. 2, pp. 323-346. DOI: 10.1108/bpmj-04-2017-0088
[4] A. J. Isaksson, I. Harjunkoski, G. Sand, The impact of digitalization on the future of control and operations, Computers and Chemical Engineering, 114, (2018), pp. 122-129. DOI: 10.1016/j.compchemeng.2017.10.037
[5] G. Dinardo, L. Fabbiano, G. Vacca, A smart and intuitive machine condition monitoring in the Industry 4.0 scenario, Measurement, 126, (2018), pp. 1-12. DOI: 10.1016/j.measurement.2018.05.041
[6] M. Short, J. Twiddle, An industrial digitalization platform for condition monitoring and predictive maintenance of pumping equipment,
Sensors, 19, (2019), 3781. DOI: 10.3390/s19173781
[7] P. Girdhar, C. Scheffer, Predictive maintenance techniques: part 1, predictive maintenance basics, in Practical Machinery Vibration Analysis and Predictive Maintenance, P. Girdhar and C. Scheffer, Eds., Oxford, Newnes, (2004), pp. 1-10.
[8] P. Girdhar, C. Scheffer, Predictive maintenance techniques: part 2, vibration basics, in Practical Machinery Vibration Analysis and Predictive Maintenance, P. Girdhar and C. Scheffer, Eds., Oxford, Newnes, (2004), pp. 11-28.
[9] M. Caciotta, V. Cerqua, F. Leccese, S. Giarnetti, E. De Francesco, E. De Francesco, N. Scaldarella, A first study on prognostic system for electric engines based on envelope analysis, IEEE Metrology for Aerospace, Benevento, Italy, 29-30 May 2014, pp. 362-366. DOI: 10.1109/MetroAeroSpace.2014.6865950
[10] T. Van Tung, Y. Bo-Suk, Machine fault diagnosis and prognosis: the state of the art, International Journal of Fluid Machinery and Systems, 2.1, (2009), pp. 61-71. DOI: 10.5293/ijfms.2009.2.1.061
[11] Li Zhe, Yi Wang, Ke-Sheng Wang, Intelligent predictive maintenance for fault diagnosis and prognosis in machine centers: Industry 4.0 scenario, Advances in Manufacturing, 5.4, (2017), pp. 377-387. DOI: 10.1007/s40436-017-0203-8
[12] E. Petritoli, F. Leccese, G. Schirripa Spagnolo, New reliability for Industry 4.0: a case study in COTS-based equipment, IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT), Rome, Italy, 7-9 June 2021, pp. 27-31. DOI: 10.1109/MetroInd4.0IoT51437.2021.9488555
[13] E. Quatrini, F. Costantino, G. Di Gravio, R. Patriarca, Condition-based maintenance: an extensive literature review, Machines, (2020), 8, 31, pp. 1-28. DOI: 10.3390/machines8020031
[14] A. K. S. Jardine, D. Lin, D. Banjevic, A review on machinery diagnostics and prognostics implementing condition-based maintenance, Mechanical Systems and Signal Processing, vol. 20, (2006), pp. 1483-1510.
DOI: 10.1016/j.ymssp.2005.09.012
[15] Calpeda website. Online [accessed 03 November 2021]: https://pump-selector.calpeda.com/pump/23
[16] F. Fornarelli, A. Lippolis, P. Oresta, A. Posa, Computational investigation of a pressure compensated vane pump, Energy Procedia, vol. 148, 73rd Conference of the Italian Thermal Machines Engineering Association, Pisa, Italy, 12 September 2018, pp. 194-201. DOI: 10.1016/j.egypro.2018.08.068

Magnetic circuit optimization of linear dynamic actuators
ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, pp. 134-141

Laszlo Kazup1, Angela Varadine Szarka1
1 Research Institute of Electronics and Information Technology, University of Miskolc, Miskolc, Hungary

Section: Research paper
Keywords: magnetic brake; linear brake; magnetic circuit calculation; dynamic braking
Citation: Laszlo Kazup, Angela Varadine Szarka, Magnetic circuit optimization of linear dynamic actuators, ACTA IMEKO, vol. 10, no.
3, article 19, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-19
Section Editor: Lorenzo Ciani, University of Florence, Italy
Received February 5, 2021; in final form April 27, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The described study was carried out as part of the EFOP-3.6.1-16-2016-00011 'Younger and Renewing University – Innovative Knowledge City – Institutional Development of the University of Miskolc aiming at Intelligent Specialisation' project implemented in the framework of the Szechenyi 2020 program. The realization of this project is supported by the European Union, co-financed by the European Social Fund.
Corresponding author: Laszlo Kazup, e-mail: laszlo.kazup@eiki.hu

1. Introduction

In the field of manufacturing, many methods of lifetime testing are used. For electronic devices, stress and lifetime testing is in most cases quite simple compared to mechanical tests, where the emulation of a mechanical load is very difficult and expensive. For example, power tools are tested with different loads, and some of them have alternating linear movements which must be loaded. There are no fully developed contactless load-emulating methods for this application; sometimes a hydraulic system is used, with low efficiency. In traditional practice-based test methods, an operator works with the device under test (DUT): he/she cuts, sands or planes different materials. This type of testing is expensive, not reliable, and less repeatable than automated test solutions. In some tests the operators have been replaced by industrial robots, but the test is still expensive and dirty due to the use of real materials.
The best result would be a system which can emulate the loading force without any physical contact, abrasion or dirt. Research on braking methods for fast alternating linear movements using contactless magnetic methods is the focus of our project. This work is the second phase of a research and development project started in 2008. The original goal of the project was to develop a special magnetic brake able to simulate the real operation of an electric jigsaw, replacing the traditional practice test method in which operators perform the full test process by cutting different types of material such as wood, steel, aluminium, etc. The repeatability of this test method is very low, as is the reliability of the documented test circumstances, and a lot of waste and dirt is produced in the test centre. At that time a special hydraulic brake was developed in Switzerland to solve this problem, but its performance, controllability and reliability were poor. The analysis of different dynamic brake constructions and methods proved that the most reliable and efficient solution can be achieved with a magnetic construction. The test equipment should realise a special braking characteristic which can be controlled even within a single moving period, and in the case of an electromagnetic solution these properties can be

Abstract. Contactless braking methods (with the capability of energy recuperation) are more and more widely used, replacing the traditional abrasive and dissipative braking techniques. In the case of rotating motion the method is trivial and often used nowadays, but when the movement is linear and fast alternating there are only a few possibilities to brake it. The basic goal of the research project is to develop a linear braking method based on the magnetic principle, which enables efficient and highly controllable braking of alternating movements.
The frequency of the alternating movement can be in a wide range; the aim of the research is to develop a contactless braking method for vibrating movement at as high a frequency as possible. The research includes the examination and further development of possible magnetic implementations and existing methods, so that an efficient construction suitable for effective linear movement control can be created. The first problem to be solved is to design a well-constructed magnetic circuit with high air-gap induction, which provides effective and good dynamic parameters for the braking devices. The present paper summarizes the magnetostatic design of 'voice-coil linear actuator'-type actuators and the effects of structure-related flux leakage and its compensation.

realized. The development of this test equipment needed both practical and theoretical improvement; the prototype testing and evaluation originated the further hypotheses. Our research group aims to create a general theoretical and modelling methodology supporting the reliable practical realization of dynamically controlled magnetic fields, which can be used in many different industries to optimize the performance of voice-coil-type linear actuators and brakes. My research work includes a systematic analysis of all possible and realistic magnetic circuit constructions, optimization of the magnetic circuits, and static and dynamic simulations to maximize efficiency. The research includes the analysis of voice-coil-type magnetic actuators and the design of the magnetic circuit to maximize efficiency and reliability while minimizing the weight and size of the brake. The first part of this paper introduces a method for the transformation of two-dimensional magnetic calculations to the cylindrical coordinate system and presents the analysis of the flux leakage and its effects on the results of the calculations.
The calculations are confirmed by finite-element simulations, and the results are also used to correct the differences caused by the flux leakage. These calculations, simulations and corrections were carried out for different shapes in order to obtain a shape- and size-independent model for calculating the correct average flux density in the air gap. The second part of the paper presents the results of dynamic simulations, by which the dynamic behaviour of the voice-coil-type actuator (relationship between current and force in dynamic cases, eddy-current and solid losses, etc.) is analysed. This research is the extension of the air-gap induction distortion analysis described in the paper 'Diagnostics of air-gap induction's distortion in linear magnetic brake for dynamic applications' [1], [3].

2. Design of a cylindrical magnetic circuit with two-dimensional, plane cross-section model calculations

The aim of the first part of the work was to theoretically establish, develop and validate a method that transforms the dimensions of a cylindrically symmetrical magnetic circuit of a given size into an equivalent model with a two-dimensional cross-section and constant depth; in this way, cylindrical magnetic circuits can also be calculated in the plane. In this method the cylindrical magnetic circuit is 'spread out': the vertical (z) dimensions are left unchanged and the r values are transformed into x values, creating a plane-section, fixed-depth model ('cubic model') in which the volume of each part is the same as that of the corresponding part in the cylindrical model, so that the two models are connected to each other by the unchanged value of the flux. The inductions calculated in the planar model are valid for the transformation as well, but the calculated values correspond to the average values in the cylindrical model, since the magnetic induction in the cylindrical model changes in the radial direction.
As a result, higher induction values are observed on the inner half of the cylindrical parts and lower ones on the outer half. Figure 1 shows the dimensions of a typical cylindrical model, and Figure 2 illustrates the corresponding x and z dimensions in a planar cross-section model for the transformation [6], [7]. The depth d of the plane cross-section, fixed-depth model should be chosen so that the x dimensions of the plane model are close to the radius differences of the cylindrical model in the area most affected by the analysis (in practice, the air gap). Thus, for practical reasons, the depth d is selected equal to the circumference of the air-gap centre circle:

$$d = (r_2 + r_3)\,\pi . \quad (1)$$

To determine the relationship between the radii and the x values, the volume equality already mentioned above is used:

$$V_{n,rh} = V_{n,xy}: \qquad \left(r_n^2 - r_{n-1}^2\right)\pi\, h_m = x_n\, y_m\, d . \quad (2)$$

Since the h and y values in the two models are the same by a previous condition, the final expression for the x values after the transformation is

$$x_n = \frac{\left(r_n^2 - r_{n-1}^2\right)\pi}{d} . \quad (3)$$

In the above relation, if $n = 1$ then $r_0 = 0$, assuming that the inner bar (iron core, and possibly the magnet) is cylindrical; if the inner bar has a ring cross-section, $r_0$ equals the inner radius of the ring.

3. Verification of the relationships by finite-element simulation

To verify the transformation relationships introduced above, the transformation of a cylindrical magnetic circuit of a given size was performed. In determining the depth dimension d, the length of the centre circle of the air gap was considered, which results in nearly equal air-gap induction in the plane model and in the cylindrical model.

Figure 1. The mechanical drawing of the first experimental dynamic magnetic brake prototype [3].
Figure 2. The half cross-section of a typical cylindrical magnetic circuit.
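The transformation of Eqs. (1)-(3) is easy to mechanise; a minimal sketch (the default choice of d from the last two radii is an assumption for convenience — the paper fixes d at the air-gap centre circle, Eq. (1)):

```python
import math

def transform(radii, d=None):
    """Spread a cylindrical circuit into the plane model, Eqs. (1)-(3).

    radii = [r_0, r_1, ..., r_N] in ascending order. Volume equality
    (r_n^2 - r_{n-1}^2) * pi * h = x_n * y * d with h = y gives
    x_n = (r_n^2 - r_{n-1}^2) * pi / d. If d is not supplied, it is
    taken as the circumference through the mean of the last two radii
    (an assumed convention; Eq. (1) uses the air-gap radii r2 and r3).
    """
    if d is None:
        d = (radii[-2] + radii[-1]) * math.pi
    widths = [(rn**2 - rp**2) * math.pi / d
              for rp, rn in zip(radii, radii[1:])]
    return d, widths
```

With the depth d = 447.68 mm of Table 1, the radii 0 → 40 → 70 mm reproduce the tabulated widths 11.23 mm and 23.16 mm, confirming the volume-preserving mapping.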
The dimensions of the initial cylindrical model and of the plane model resulting from the transformation are summarized in Table 1. The next step was to determine the air-gap induction by static magnetic calculations using the dimensions of the plane model. The initial data to be determined are the main magnetic parameters of the applied soft ferromagnetic and permanent-magnet materials; for better comparison, these data were taken from the material database of the finite-element simulation software (the operating-point values of the magnet were determined graphically):

Permanent magnet:
- material: Y25 ferrite magnet;
- residual induction: 0.378 T;
- coercive force: 153035 A/m;
- relative permeability in the operating-point section of the demagnetization curve: 1.9.

Soft steel parts:
- material: 1010 low-carbon soft steel;
- relative permeability: ~10³;
- maximum induction: 1 T (in the linear part of the B-H curve).

The combined relative permeability of the permanent magnet and of the air gap is three orders of magnitude smaller than that of the iron body; therefore, the reluctance of the soft iron was neglected in the calculations. In this phase of the research the magnetic calculations focused on the permanent magnet and the air gap. Based on the initial data described above, the main steps of the calculation were as follows.

1) Determination of the operating point of the magnet from the equation of the air-gap line (based on the data of the transformed geometry):

$$H_m = -\frac{1}{\mu_0} \cdot \frac{x_1}{y_1} \cdot \frac{x_3}{y_2} \cdot B_m = -7000\ \mathrm{A/m}, \qquad B_m = 0.3726\ \mathrm{T} . \quad (4)$$

2) Determination of the flux:

$$\Phi = B\, A = B_m\, A_m = B_m\, x_1\, d = 1.873 \cdot 10^{-3}\ \mathrm{Wb} . \quad (5)$$

3) Determination of the air-gap induction:

$$B_\delta = \frac{\Phi}{A_\delta} = \frac{\Phi}{y_1\, d} = 0.2092\ \mathrm{T} . \quad (6)$$

After performing the calculations, both the original cylindrical and the transformed magnetic circuits were simulated with the FEMM 4.0 finite-element simulation software [7].
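The three calculation steps above can be reproduced numerically from the Table 1 geometry; a minimal sketch (units converted to metres, symbol names as in Eqs. (4)-(6)):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

# Transformed plane-model geometry from Table 1 (metres) and the
# Y25 operating-point induction used in Section 3.
x1 = 11.23e-3    # width of the transformed magnet
x3 = 2.5e-3      # air-gap length
y1 = 20e-3       # air-gap height
y2 = 60e-3       # magnet height
d = 447.68e-3    # depth of the magnetic circuit
Bm = 0.3726      # T, operating-point induction of the magnet

Hm = -(Bm / MU0) * (x1 / y1) * (x3 / y2)   # Eq. (4): air-gap-line field
flux = Bm * x1 * d                          # Eq. (5): circuit flux
B_gap = flux / (y1 * d)                     # Eq. (6): ideal gap induction
```

This yields $H_m \approx -6.9\cdot10^3$ A/m, $\Phi \approx 1.873\cdot10^{-3}$ Wb and $B_\delta \approx 0.209$ T, matching the rounded values quoted in the text; the leakage flux then reduces the simulated gap induction, as Table 2 shows.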
The simulation results and the calculated values are summarized in Table 2. In the simulation, the air-gap induction value was approximately 75 % of the calculated one (the reason is the leakage flux around the air gap). In spite of this, the average induction in the magnet and the flux of the magnetic circuit showed only a very small difference from the calculated data. According to these results, the model and the transformation provide good results and can be used in the next stage of the research. Most of the deviations from the simulated values (the finite-element simulation serving as the validation of the calculations at this stage) are due to the leakage flux, the correction of which needs further study. Although the flux in the air gap is similarly only about 72 % of the calculated value, when the leakage induction lines are included the simulation gives the originally calculated flux value. The further paragraphs show that, with flux-leakage compensation, the calculations approximate the simulated results very closely.

4. Calculation of the magnetic circuit for a required air-gap induction and air-gap depth

While the previous calculations illustrated the transformation of a magnetic circuit of given dimensions, in practice, when developing a so-called voice-coil actuator, a much more common problem is to adjust the dimensions of the structure, especially those of the magnet, to a given air-gap height and air-gap induction value. In such a case, the minimum air-gap diameter at which the given induction can be achieved at the specified air-gap height has to be defined. Likewise, if the diameter of the air gap is fixed, the feasibility of the desired induction with the given permanent-magnet type at the specified sizes should be checked. In addition, the effect of

Table 1. Basic dimensions of the cylindrical model and calculated dimensions of the transformed model.
Parameter / Description / Value (mm)

Original values of the cylindrical model:
- r0: internal radius of the ring magnet / 40
- r1: external radius of the ring magnet / 70
- r2: internal radius of the air gap
- r3: external radius of the air gap / 72.5
- r4: internal radius of the outer ring / 130
- r5: external radius of the outer ring / 150
- z1: height of the air gap / 20
- z2: height of the magnet/outer ring / 60
- z3: height of the bottom cylinder / 20

Transformed values:
- x1: width of the transformed magnet / 11.23
- x2: distance between the air gap and the magnet / 23.16
- x3: air-gap length / 2.49998 (~2.5)
- x4: distance between the air gap and the outer mild iron part / 81.71
- x5: width of the outer mild iron part (transformed from the original outer ring) / 39.3
- z1: height of the air gap / 20
- z2: height of the magnet/outer mild iron part / 60
- z3: height of the bottom mild iron part / 20
- d: depth of the magnetic circuit / 447.68

Table 2. Comparison of the calculated and the simulated results.

Parameter / Calculated value / Simulated value (cylindrical model) / Simulated value (plane model)
- Bm: 0.3726 T / 0.3739 T / 0.3744 T
- Φm: 1.873 · 10⁻³ Wb / 1.879 · 10⁻³ Wb / 1.886 · 10⁻³ Wb
- Bδ: 0.2092 T / 0.1509 T / 0.138 T
- Φδ: 1.873 · 10⁻³ Wb / 1.351 · 10⁻³ Wb / 1.24 · 10⁻³ Wb

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 137

leakage flux must be considered when calculating these data (the study and correction of leakage is discussed in Section 6).

4.1. Calculation steps

1.) Definition of the operating-point values of the magnets. The real demagnetization curve of permanent magnets is linear in a relatively wide range; it is nonlinear only near the coercive force. Therefore, the operating point of the magnet should be defined to provide the maximum value of the product B · H, which in practice is at the point Bm = Br/2.

2.) Determination of the cross-section of the magnet perpendicular to the flux. The air-gap flux can be determined from the air-gap cross-section and the desired induction.
Since the flux of the air gap and the flux of the magnet are theoretically the same, the cross-section of the magnet can be determined from the equation Φ = B · A. The radius of this surface must be checked to ensure that it is smaller than the inner circle line of the air gap (in the case of a plane model the test can also be done; in that case the x value at the beginning of the air gap must be greater than or equal to x). If the evaluation shows that the desired induction is not feasible at the given air-gap circle, a larger air-gap diameter must be chosen or, if that is not possible, the magnet can be used at a different (higher) induction than the optimal operating point, which results in an increase in the length of the magnet (see point 3). In practice, the flux of the air gap and the flux of the magnet do not match because of leakage, so the correction described later should be applied.

3.) Determination of the optimal length of the permanent magnet. Since the reluctance of the iron body is neglected according to the earlier condition, the equation ∮ H dl = 0 for the magnetic circuit can be written as follows (without considering leakage correction):

$$H_\delta \delta + H_m y_m = 0, \qquad \frac{B_m}{\mu_0 \mu_m} y_m = -\frac{B_\delta}{\mu_0} \delta \tag{7}$$

where μm is the relative permeability of the permanent magnet (typically between 1 and 2), Bm is the operating-point induction of the permanent magnet, ym is the length of the permanent magnet to be defined, Bδ is the required induction, and δ is the length of the air gap. The calculation example shows that the required air-gap induction can also be achieved without operating the magnet at the optimal operating point. The demagnetizing field strength is then less than the operating-point field strength, but in this case the magnet length must be greater than the optimal value for the equation to hold. If the goal is to use a permanent magnet with the smallest possible volume (and at the same time the lowest cost), it is advisable to use the optimal dimensions determined by the operating point.
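Equation (7) can be turned into a small sizing helper for design step 3. This is a sketch under the stated assumption that the iron reluctance is negligible; the example numbers are illustrative (a ferrite driven at Bm = Br/2) and are not taken from the paper's prototype.

```python
# Sketch of design step 3: optimal magnet length from equation (7),
# |B_m / (mu0 * mu_m)| * y_m = (B_delta / mu0) * delta  (magnitudes).
def magnet_length(B_delta, delta, B_m, mu_m):
    """Length of permanent magnet needed to drive induction B_delta
    across an air gap of length delta (iron reluctance neglected)."""
    return mu_m * B_delta * delta / B_m

# Illustrative example: B_m = Br/2 = 0.189 T, mu_m = 1.9,
# desired air-gap induction 0.2 T across a 2.5 mm gap.
y_m = magnet_length(B_delta=0.2, delta=2.5e-3, B_m=0.189, mu_m=1.9)
print(y_m)  # about 5.0e-3 m
```

Operating the magnet above the optimal point (larger B_m input) directly shortens the required length, at the cost of the volume optimum discussed above.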
To achieve this, custom-made permanent magnets are necessary in practice. Experience shows that when using commercial permanent magnets from catalogues, some compromise has to be accepted.

4.) Determination of the cross-section of the soft iron body. Based on the data sheets of various commonly used soft iron materials, we can state that their B-H curve is linear up to approximately 1 T, so it is not recommended to design above this value. Otherwise, especially near saturation, the relative permeability of the iron decreases, and in this case the reluctance of the given section is no longer negligible [6]. The cross-sectional dimensions can then be calculated from the previously determined flux value and the maximum induction.

5. Application of ferrite magnets in the outer ring as a flux conductor

The results of the dynamic simulations proved that the reluctance of the magnetic circuit and the properties of the materials in the vertical columns and rings greatly influence the inductance of the moving coil and the magnitude of the eddy current losses during dynamic operation. We have also examined an initial experimental design that included a permanent magnet both inside and outside. A static study of this construction was also carried out, during which it was found that in the magnet-air gap-magnet series magnetic circuit the external permanent magnet does not substantially increase the air-gap induction; it only increases the demagnetizing field strength of the inner, so-called "working" magnet, which results in a downward shift of the operating point of this magnet. However, dynamic studies have revealed some advantages, namely the reduced inductance of the moving coil and the reduced power dissipation due to iron losses. Results show that the inductance is approximately half of the value obtained when soft iron is used instead of an external magnet in such a structure.
The conclusion is that if the dimensions of the external magnet are determined so that the magnetic field strength inside it is close to 0, the magnetic ring behaves as a "flux conductor", like an iron with low relative permeability. This operating state is also characteristic of soft iron materials in the near-saturation state, with significant flux leakage; in the case of permanent magnets, however, the leakage is minimal. The result is better dynamic parameters, thanks to the reduced inductance of the moving coil as well as reduced eddy current losses, while the performance of the correctly dimensioned working magnet is not reduced. If this solution is used for the sizing of the magnetic circuit, the last point of the design steps is modified as follows: the cross-section of the external magnetic ring must be determined such that the induction in it is equal to the residual induction of the magnet. For practical reasons, it is recommended to choose the length of the external magnet equal to the length of the inner magnet (this simplifies the construction). While experience shows that neodymium-iron-boron (or, at higher operating temperatures, samarium-cobalt) is the most suitable material for the internal working magnet due to its high energy density, conventional strontium ferrite magnets can be used for the external magnetic ring acting as a flux conductor. They are cheaper than the two magnet types listed above and, since they are used as external elements, their relatively large size is not limited by the critical parameters affecting the moving mass, such as the diameter of the moving coil.

6. Analysis and correction of flux leakage

The leakage of magnetic induction lines in a magnetic circuit with an air gap is a complex problem that depends on several design and operating parameters.
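The modified final sizing step described above (choosing the external ring cross-section so that its induction equals the residual induction Br) reduces to a one-line calculation. A minimal sketch, reusing the flux of equation (5) and the ferrite Br quoted in the material data:

```python
# Sizing sketch for the external "flux conductor" magnet ring: its
# cross-section is chosen so that the induction in it equals the residual
# induction Br of the magnet material (H close to 0 inside the ring).
phi = 1.873e-3   # magnetic flux in Wb, from equation (5)
B_r = 0.378      # residual induction of the Y25 ferrite in T

A_ext = phi / B_r   # required cross-section of the external ring in m^2
print(A_ext)        # about 4.96e-3 m^2
```

With the length fixed to that of the inner magnet, as recommended above, this cross-section fully determines the external ring geometry.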
Practical experience shows that in the optimal situation more than 90 % of the leakage flux is present around the air gap, but in the case of permanent magnets operating outside the optimal operating point, or of soft iron sections near saturation, a significant part of the induction lines may close outside the magnetic circuit. Several estimation aids for leakage calculations, including air-gap leakage calculations, are available from permanent magnet manufacturers for magnetic circuit designers. In these documents, the air gap and the field around it are divided into several areas depending on the type of magnetic circuit, which can include semi-cylindrical, semi-spherical, quarter-spherical and/or prismatic areas. The magnetic reluctances of these parallel field regions are defined using empirical formulas. However, these aids only help in the design of certain magnetic circuits with frequently used structures [10], [11]. In general cases, the effects of leakage in an arbitrary magnetic circuit can be estimated most accurately by finite element simulation. This requires exact and accurate information about the geometric model and the characteristics of the applied materials. In order to make the method outlined earlier applicable in practice to the design of "voice coil actuator" type electromagnetic actuators, the level of leakage flux must be estimated correctly. As a first step we have worked out an algorithm for automated static magnetic calculations. The calculations are based on the basic magnetostatic calculations described in Section 4. The main input parameters of this algorithm are the residual induction of the applied magnet types (internal and external), their coercive force, the height of the air gap, and the radii of its internal and external sections.
The required air-gap induction is given by the following values:
- start value
- stop value
- number of steps

The algorithm generates a table including the most important geometric parameters of the parametric simulation sequences, which are the following:
- r0: the inner radius of the inner magnet
- r1: the outer radius of the inner magnet, equal to the inner radius of the air gap
- r2: the inner radius of the outer magnet, equal to the outer radius of the air gap
- r3: the outer radius of the outer magnet

Several simulation series were performed to analyse the effect of flux leakage and to determine a correction relationship. These simulations were built using the ANSYS Maxwell 2D finite element simulation software [9]. The models in the simulation were axisymmetric. In the first simulation a magnetic circuit according to Figure 1 was used, in which the radius of the central circle of the air gap is 70 mm and the height of the air gap is 20 mm. The start value of the induction is 0.1 T and the stop value is 1 T, with 0.05 T steps. The 1 T stop value is the maximum allowed by the given air-gap centre circle radius. Using the resulting geometric dimensions, a parametric simulation was prepared to investigate the difference, caused by flux leakage, between the average induction value obtained in the air gap and the initial air-gap induction value. In the simulation, the deviation of the working range from the operating point was examined for the internal magnet (the operating-point induction is 0.67 T for N35-type magnets), and the deviation of the average induction from the residual induction value (Br = 0.38 T) was analysed. The cross-sections of the soft iron elements of the construction were set large enough to avoid operation at or close to saturation. The initial values and the results of the simulation are summarized in Table 3.
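The start value / stop value / step input described above amounts to generating the list of required induction values for the parametric runs. A minimal sketch (the function name is illustrative, not from the paper's algorithm):

```python
# Generate the required air-gap induction values for the parametric
# simulation series: start to stop in fixed steps, e.g. 0.1 T to 1 T
# in 0.05 T steps as used in the first simulation.
def induction_sweep(start, stop, step):
    n = round((stop - start) / step) + 1
    return [round(start + i * step, 3) for i in range(n)]

values = induction_sweep(0.1, 1.0, 0.05)
print(len(values), values[0], values[-1])  # 19 values from 0.1 to 1.0
```

Each value in the list is then mapped by the magnetostatic relations of Section 4 to one row of geometric parameters (r0 to r3) for the simulation table.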
The simulation results show a slight increase (between 72 % and 78 %) in the ratio of the simulated to the theoretical air-gap induction when the required air-gap induction is varied from 0.1 T to 1 T. Examining the simulation results, we can find that the operating point of the working magnet on the B-H demagnetization curve always shifts to the right of the ideal operating point. This phenomenon is caused by the modelling error: the computational models consider neither the alternative reluctances caused by leakage nor the real magnetization curve of the body. However, the difference is in practice small enough to be neglected in the general case. For the same reason, the operating point of the external magnet also differs slightly from H = 0. The next step of the work was to determine a correction factor for the required initial air-gap induction, resulting in corrected geometric parameters at which the simulated air-gap induction equals the originally required (uncorrected) air-gap induction. The correction relationship determined from the simulation results is expressed by the following equation:

$$B_{\delta,\mathrm{korr}} = B_\delta \cdot 1.281 + 0.018 \tag{8}$$

Including this correction in the original algorithm, the parametric simulation was repeated with initial values between 0.1 T and 0.75 T. The higher value was selected according to the maximum possible corrected initial induction of 1 T. The results are shown in Table 4.

7. The effect of permanent magnets used as flux conductors on dynamic behaviour

Experience has shown that the method presented in the previous section, i.e. the use of permanent magnets instead of mild steel in the outer ring of cylindrical magnetic circuits, improves the dynamic behaviour of such magnetic actuators. This improvement is detectable in both permanent magnet and electromagnetically excited constructions.
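The correction of equation (8) above, and the reason 0.75 T is the highest usable required value, can be sketched directly (the helper name is illustrative):

```python
# Leakage correction of equation (8): the corrected target induction that is
# fed to the geometry generator so that the simulated air-gap induction
# matches the originally required value.
def corrected_induction(B_delta):
    return B_delta * 1.281 + 0.018

# 0.75 T is the highest required value whose corrected input stays below
# the 1 T limit imposed by the air-gap geometry; 0.8 T already exceeds it.
print(corrected_induction(0.75))  # about 0.979 T, within the limit
print(corrected_induction(0.80))  # about 1.043 T, above the limit
```

This matches the choice of 0.75 T as the upper end of the repeated parametric simulation.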
Due to the structure of the above-mentioned constructions, the magnetic field created by the vertically arranged voice coil current has an impact on the magnetic field of the stator (the still part of the actuator, i.e. the overall magnetic circuit) as well.

Table 3. Comparison of the calculated and the simulated results.

Required air-gap induction (T) / Simulated air-gap induction (T) / Ratio simulated vs. required induction (%) / Operating-point induction of inner magnet (T) / Operating-point induction of external magnet (T)
- 0.1 / 0.071 / 71.00 / 0.793 / 0.335
- 0.15 / 0.109 / 72.66 / 0.784 / 0.344
- 0.2 / 0.147 / 73.50 / 0.779 / 0.351
- 0.25 / 0.185 / 74.00 / 0.774 / 0.356
- 0.3 / 0.223 / 74.33 / 0.770 / 0.362
- 0.35 / 0.261 / 74.57 / 0.767 / 0.367
- 0.4 / 0.299 / 74.75 / 0.764 / 0.371
- 0.45 / 0.338 / 75.11 / 0.761 / 0.375
- 0.5 / 0.376 / 75.20 / 0.758 / 0.379
- 0.55 / 0.415 / 75.45 / 0.755 / 0.382
- 0.6 / 0.453 / 75.50 / 0.752 / 0.386
- 0.65 / 0.492 / 75.69 / 0.750 / 0.389
- 0.7 / 0.532 / 76.00 / 0.746 / 0.393
- 0.75 / 0.571 / 76.13 / 0.743 / 0.395
- 0.8 / 0.611 / 76.37 / 0.740 / 0.399
- 0.85 / 0.651 / 76.59 / 0.736 / 0.402
- 0.9 / 0.692 / 76.88 / 0.732 / 0.405
- 0.95 / 0.734 / 77.26 / 0.727 / 0.409
- 1 / 0.778 / 77.80 / 0.717 / 0.413

Previous tests and simulations in the field have verified that in certain applications, when a high current flows through the voice coil, the magnetic field created by the coil modifies and distorts the originally homogeneous magnetic field of the air gap. The distortion of the induction in the air gap is particularly high in the case of voice coils excited with direct current. As a result, since the length of the voice coil is finite, during operation the force applied to the turns may exceed the mechanical limits that the construction of the voice coil was originally designed to withstand. Thus, if the actuator is designed to exert a constant or slowly changing high force, this distortion must be taken into consideration when developing the construction of the voice coil.
Figure 3 illustrates the character of the distortion of the induction in the air gap in the case of direct-current control. Furthermore, the tests have revealed that, since in most cases the mild steel parts of such actuators are made of solid mild steel units, locally induced eddy currents compensate the distortion of the induction in the air gap in the case of rapidly changing, dynamic voice coil currents. As indicated by tests on the first, electromagnetically excited prototype of the braking system to be developed, even for 20 Hz current components the distortion of the induction in the air gap decreases considerably as compared to excitation with direct current. The results of a series of such tests are shown in Figure 4. On the other hand, later finite element simulation tests on further developed magnetic constructions have revealed another problem: the momentary self-inductance of the voice coil is connected to the reluctance of the magnetic ring of the stator. For dynamic operation, it is essential that the self-inductance of the voice coil be as low as possible in order to achieve an adequate impulse response. When excitation is carried out with NdFeB permanent magnets, the self-inductance of the voice coil in the construction examined (a magnetic ring with a 55 mm mean air-gap diameter) is relatively low, approximately 13 µH, which may be appropriate from the aspect of dynamic behaviour. In this case, finite element simulations have resulted in, to a fairly good approximation, a linear relationship between the current in the coil and the force generated in the coil. This is caused by the high reluctance of the magnetic ring in the environment of the voice coil, for the magnetic field of the voice coil can considerably influence the magnitude and direction of the induction in permanent

Table 4.
Comparison of the calculated and the simulated results.

Required induction (T) / Corrected value of induction for simulation input (T) / Simulated induction (T)
- 0.1 / 0.139 / 0.106
- 0.15 / 0.205 / 0.154
- 0.2 / 0.271 / 0.202
- 0.25 / 0.337 / 0.251
- 0.3 / 0.402 / 0.3
- 0.35 / 0.468 / 0.35
- 0.4 / 0.533 / 0.399
- 0.45 / 0.598 / 0.449
- 0.5 / 0.663 / 0.499
- 0.55 / 0.728 / 0.55
- 0.6 / 0.792 / 0.6
- 0.65 / 0.857 / 0.652
- 0.7 / 0.919 / 0.705
- 0.75 / 0.984 / 0.759

Figure 3. The cross-section of the 2-D model.

Figure 4. Average AC component of the air-gap induction as a function of the frequency in a voice-coil type linear actuator excited by DC current.

Figure 5. The simulation model of an improved construction of an NdFeB-magnet based voice coil actuator (brake) including permanent magnets in the outer ring.

magnets only in the case of a much higher excitation than the optimal operational value at the bias point, due to the character of the demagnetization curve of the permanent magnet. However, further dynamic simulations indicate that, with the help of the flux conductor solution discussed in the previous section, the self-inductance of the voice coil can be decreased even further (by about 10 % to 12 %). Figure 5 demonstrates the aforementioned construction solution, while the relevant results of the finite element simulations are shown in Figure 6 and Figure 7. The dynamic analysis of the construction including permanent magnets has shown that this build-up can have excellent dynamic behaviour even without extra magnets. However, in the case of electromagnetic excitation, the entire magnetic ring is by default made of mild steel of high relative permeability, owing to which the reluctance of the magnetic ring is low, and the magnetic field generated locally by the current in the voice coil causes greater distortion in the original magnetic field of the stator supplied with excitation current.
Results of a finite element simulation on an electromagnetically excited construction of the same size indicate that in this case the self-inductance of the voice coil (with identical voice coil dimensions) is about 100 times higher than that of the NdFeB magnet-based construction, and that the exciting electric current varies considerably and non-linearly around its average value over time. This simulation result is shown in Figure 8. The reason is that the greater part of the metallic body is magnetized to almost full saturation for the sake of the greatest possible air-gap induction, and in certain sections, due to the magnetic field of the voice coil, the temporary operating point shifts to the non-linear part of the magnetization curve. This results in a non-linear relationship between the voice coil current and the force generated, which, together with the high inductance, impairs the dynamic behaviour of the construction. If certain parts of the outer ring of the stator consist of permanent magnets with the operating-point settings defined in the previous section, this non-linearity and the inductance of the voice coil can be decreased significantly, thus improving the dynamic behaviour. Consequently, the solution discussed in the previous section is capable of improving the dynamic behaviour of a so-called "voice coil actuator" by decreasing the inductance of the voice coil and its non-linear nature, especially if the excitation of the magnetic ring of the stator is carried out with an electromagnet.

8. Conclusions and outlook

Results of the research show that air-gap induction correction is an effective method to calculate the geometry of the magnets and to check the real air-gap induction for the calculated geometry in magnetic circuits of so-called "speaker-type" voice coil actuators.
The accuracy of the air-gap induction simulation can be increased by considering further construction details, such as joint deviations or more detailed material properties. Differences between the magnetic properties of the real soft magnetic material and the simulated material may also cause some simulation error. By converting cylindrical, axially symmetrical magnetic constructions to a plane model through the defined geometric transformations, the induction and field strength values of the magnetic circuit's sections can be determined with acceptable accuracy. The acceptable accuracy strongly depends on the compensation capacity of the control system to be used, therefore checking and correcting the calculations by finite element simulations can still be useful. Validation of the developed calculation and simulation methods is in progress. A prototype has been designed and built with the geometric dimensions defined by the described methods; tests will be performed in the near future. The validated methods will be used for the development and optimization of industrial testing processes.

Acknowledgement

This research was supported by the European Union and the Hungarian State, co-financed by the European Regional Development Fund in the framework of the GINOP-2.3.4-15-2016-00004 project, aimed at promoting the cooperation between higher education and industry.

References

[1] A. Váradiné Szarka, Linear magnetic break of special test requirements with dynamic performance, Journal of Electrical and Electronics Engineering, vol. 3, no. 2, 2010, pp. 237‒240. Online [Accessed 2 September 2021] https://www.ingentaconnect.com/content/doaj/18446035/2010/00000003/00000002/art00031

[2] C. H. Chen, A. K. Higgins, R. M. Strnat, Effect of geometry on magnetization distortion in closed-circuit magnetic measurements, Journal of Magnetism and Magnetic Materials, vol. 320, no. 9,

Figure 6. Moving coil inductance of the original construction (current flow starts at 50 ms).

Figure 7.
Moving coil inductance of the improved construction (current flow starts at 50 ms).

Figure 8. Dynamic finite element simulation of a construction excited by an electromagnet.

2008, pp. 1597‒1656. doi: 10.1016/j.jmmm.2008.01.035

[3] L. Kazup, A. Váradiné Szarka, Diagnostics of air-gap induction's distortion in linear magnetic brake for dynamic applications, XXI IMEKO World Congress on Measurement in Research and Industry, Prague, Czech Republic, 30 August - 4 September 2015, pp. 905–908. Online [Accessed 2 September 2021] https://www.imeko.org/publications/wc-2015/imeko-wc2015-tc7-190.pdf

[4] G. Kovács, M. Kuczmann, Simulation of a developed magnetic flux leakage method, Pollack Periodica, vol. 4, no. 2, 2009, pp. 45‒56. doi: 10.1556/pollack.4.2009.2.5

[5] M. Kuczmann, Nonlinear finite element method in magnetism, Pollack Periodica, vol. 4, no. 2, 2009, pp. 13‒24. doi: 10.1556/pollack.4.2009.2.2

[6] Nicola A. Spaldin, Magnetic Materials: Fundamentals and Device Applications, Cambridge University Press, 2003, ISBN 9780521016582.

[7] Heinz E. Knoepfel, Magnetic Fields, John Wiley & Sons, 2008, ISBN 3527617426.

[8] FEMM – Finite Element Method Magnetics – homepage. Online [Accessed 2 September 2021] https://www.femm.info/wiki/homepage

[9] ANSYS Maxwell – product homepage. Online [Accessed 2 September 2021] https://www.ansys.com/products/electronics/ansys-maxwell

[10] Magnetic circuit design guide – TDK Tech Notes. Online [Accessed 2 September 2021] https://product.tdk.com/en/products/magnet/technote/designguide.html

[11] Design of magnetic circuits – Tokyo Ferrite.
Online [Accessed 2 September 2021] https://www.tokyoferrite-ho.co.jp/en/wordpress/wp-content/uploads/2017/03/technical_02.pdf

3D shape measurement techniques for human body reconstruction

ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-8

acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1

Iva Xhimitiku1, Giulia Pascoletti2, Elisabetta M. Zanetti3, Gianluca Rossi3

1 Centro di Ateneo di Studi e Attività Spaziali "G. Colombo" (CISAS), University of Padua, Via Venezia 15, 35131 Padua, Italy
2 Department of Mechanical and Aerospace Engineering (DIMEAS), Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
3 Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia, Italy

Section: Research paper

Keywords: 3D scanning techniques; non-contact measurement; low-cost technology; customised orthopaedic brace; non-collaborative patient; multimodal approach; 3D printing

Citation: Iva Xhimitiku, Giulia Pascoletti, Elisabetta M. Zanetti, Gianluca Rossi, 3D shape measurement techniques for human body reconstruction, Acta IMEKO, vol. 11, no.
2, article 33, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-33

Section Editor: Francesco Lamonaca, University of Calabria, Italy

Received December 20, 2021; in final form March 15, 2022; published June 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Iva Xhimitiku, e-mail: vaxhimitiku@gmail.com

1. Introduction

Biomedical applications often require device customisation: as is well known, no patient is identical to another, and this is even truer for pathological conditions. In the past, customisation has often been sacrificed in favour of manufacturability; however, with the advent of 3D printing [1] this shortcoming is being overcome [2], [3], and more and more emphasis is being given to the necessity of providing fast and accurate systems to obtain the geometry of the whole body [4], [5], [6] or of specific body segments [7]. Traditional techniques are based on plaster moulds and are affected by some major limitations, such as invasiveness, the need to keep the patient still for the curing time [8], limited accuracy (over 15 mm, according to [9], [10]), and the impossibility of acquiring undercut geometries. More recently, as a viable alternative, various non-contact instruments have been developed to perform digital scanning [11], [12], [13], and their performances have been extensively reported in the literature [14], [15]. However, the application introduced in this work was somewhat peculiar due to the young age of the patient [16], which added some requirements to the scanning methodology: a time limit for performing the whole acquisition, and the ability to compensate for motion, since the patient was not collaborative due to his young age [17], [18].
The final aim was to obtain the 3D geometry of his trunk in order to gather input data for brace design [19]. Prior attempts had been made with traditional moulding techniques, and they did not succeed due to frequent patient movements [20], [21], [22]. A specific methodology has been developed, tested and discussed here, based on a multimodal approach [23] in which the benefits of different scanning techniques are merged in order to optimise the final result. In Section 2, three common scanning techniques are briefly described, reporting their specifications and highlighting their respective advantages and disadvantages in relation to human body scanning. These technologies are photogrammetry, light detection and ranging (LiDAR) and structured light scanning. The performances of these shape measurement techniques have been assessed by reconstructing the torso of two adults (one

Abstract: In this work the performances of three different techniques for 3D scanning have been investigated. In particular, two commercial tools (smartphone camera and iPad Pro LiDAR) and a structured light scanner (Go!SCAN 50) have been used for the analysis. First, two different subjects were scanned with the three different techniques, and the obtained 3D models were analysed in order to evaluate the respective reconstruction accuracy. A case study involving a child was then considered, with the main aim of providing useful information on the performances of scanning techniques for clinical applications, where boundary conditions are often challenging (i.e., a non-collaborative patient). Finally, a full procedure for the 3D reconstruction of a human shape is proposed, in order to set up a helpful workflow for clinical applications.
male and one female); the main objective of this first analysis was to evaluate the performances of two low-cost tools [16], [24], [25] (smartphone camera and iPad Pro LiDAR) against the accurate reconstruction obtainable with the structured light scanner, used as the reference measurement system [26]. Once the performances of these tools had been defined under 'ideal' scanning conditions (a collaborative subject able to maintain a position throughout the scanning process), the same techniques were used to obtain a set of 3D scans of a 4-year-old boy's torso at an orthopaedic laboratory (Officina Ortopedica Semidoro Srl, Perugia, Italy). For both these analyses, the process of 3D reconstruction and structure extraction is described in detail in Section 5. The accuracy of, and the correlation among, the geometries reconstructed with the different visual devices are evaluated and discussed, and the bias introduced by a non-collaborative patient is illustrated, leading to a new methodology based on a multimodal approach, whose benefits are outlined and quantified. In Section 6, it is demonstrated how this methodology can be applied in orthopaedics [1], [8], [11], even to the least collaborative patients, making it possible to obtain body scans where the alternative approach based on plaster of Paris moulds would fail or would result in lower accuracy and longer execution times.

2. Background

There are several techniques to perform body scans; herein photogrammetry, structured light and LiDAR will be considered (Figure 1).

a) Photogrammetry (PH): an imaging method used to capture pictures of objects from different perspectives with calibrated cameras. The feature points obtained from overlapping images are used to calculate the shooting positions through specific algorithms that automatically identify chromatic correspondences between two images [21], [27], [28].
b) lidar: recent developments in commercial devices such as smartphones and tablets have enabled very fast scanning with lidar. this technique is based on time-of-flight measurement [29], i.e., the time taken by a signal to travel a certain distance. more specifically, a lidar emits a pulsed or modulated light signal and measures the delay of the returning wavefront [25], [30]; the distance is then estimated from the signal propagation speed.
c) structured light scanner (sl): this technology projects a known light pattern (a grid) onto the object and reconstructs its geometry from the deformation of the projected grid on the object's curved surface. moreover, triangulation is used to locate each point on the object, thanks to two cameras placed at known angles.
these measurement techniques have specific advantages over contact measuring techniques, such as fast acquisition, high accuracy and minimal invasiveness. depending on the application, some specifications become more relevant than others: with reference to clinical applications, in some cases high resolution and accuracy must be prioritised, while in other cases a good representation of colour and texture is mandatory.

3. instruments
3.1. photogrammetry
for this application a commercial smartphone, a redmi note 10 with an average cost of 190 €, was used. it is equipped with a dual camera system for consumer use, including a digital stabiliser and 4k (2160p) video recording at 30 fps. this tool was paired with the zephyr software (3dflow, v. aerial 3.1) in order to obtain the 3d reconstruction. this software uses structure-from-motion (sfm) algorithms for the photogrammetric processing of digital images into 3d spatial data.
3.2. structured light scanner
the go!scan 50 (15 k€) is a fast hand-held scanner based on structured light.
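the time-of-flight principle described above maps a measured round-trip delay to a distance through the propagation speed of light; a minimal sketch, not tied to any specific lidar hardware:

```python
# minimal sketch of time-of-flight ranging: the emitted pulse travels to the
# target and back, so the one-way distance is half of the round trip.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """distance (m) estimated from the measured round-trip time (s)."""
    return C * round_trip_time_s / 2.0

# a target at about 3 m returns the pulse after roughly 20 ns:
d = tof_distance(20e-9)
```

the nanosecond-scale delays involved explain why lidar acquisition is so fast, and also why the achievable resolution depends on the timing electronics rather than on optics alone.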
in total, this scanner uses three cameras positioned at various angles and depths. in the centre of the device, an rgb camera surrounded by an led flashlight captures textures without the need for a special light setup. the scanner works at a rate of 550 000 measurements per second, covering a scanning area of 380 × 380 mm² with a resolution of 0.5 mm and a point accuracy of up to 0.1 mm. a lamp guidance system helps set the scanning distance between 0.3 m and 3.0 m. the surface is captured while moving the hand-held scanner over the object; moreover, the noise arising from movement can be reduced by setting appropriate parameters in the acquisition software (vx elements by creaform, v. 0.9) [31]. the go!scan 50 is the only certified instrument among those used in this work; for this reason, its reconstructed 3d geometries have been considered the most accurate replicas of the actual torso shape and have been used as the reference to evaluate the reconstruction accuracy of the other techniques [32], [33].
3.3. lidar
the ipad pro lidar scanner is a pulsed laser able to capture surroundings up to 5 m away through photon-level readings. since it works in time of flight, the time required for data acquisition is strictly related to the speed of light and to the distance. apple inc. itself does not specify the accuracy of the respective technologies or hardware [25]. this tool allows scanning objects and exporting the scans as 3d textured cad models. the scanning resolution used for our applications was 0.2 mm. the scanning time for a particular subject varies from operator to operator, since using each scanner is an acquired skill; in general, scanning could take about 15 min depending on the desired accuracy of the resulting scan [14]. as a rule of thumb, the fastest technique is lidar scanning and the slowest is the sl system. subject comfort is comparable among the reviewed scanners. figure 1.
instruments and operation scheme: a) structured light; b) photogrammetry; c) lidar.

4. methodology
in the first part of this work, the reconstruction accuracy of the considered scanning techniques was investigated through the trunk reconstruction of a male and a female human subject. the scanning results from this analysis were used as the reference for the comparison of techniques, since the subjects can be considered in a stable configuration, with the exception of the intrinsic deformability of the trunk (micro-movements due to breathing). in the following step, the same analysis was repeated on a 4-year-old's trunk, adding a further bias given by the subject's macro-movements.
ph required a video acquisition of about 50 s for each subject. using the zephyr software, the geometry of the torso was reconstructed in a total of 7 h with high software settings: up to 15 000 keypoints per image and "pairwise image matching" performed on at least 10 images. keypoints are specific points in a picture that zephyr can detect and recognise across different pictures. the matching stage depth (pairwise image matching) controls how many pairwise image matchings are performed; usually, more is better, but this comes at a computational cost. the mesh given by this scanning technique can contain topological errors due to shadow areas and object movement: the shape complexity and the macro-movements led to sudden changes of curvature, making the reconstruction difficult and resulting in missing parts and loss of detail.
the mesh obtained from the scan performed with the go!scan 50 required longer manual processing times, given the computational load due to the high resolution. the scan parameters were set directly in the vx elements software according to the manufacturer's instructions, with a resolution of 2 mm. targets, semi-rigid positioning and natural features were used as positioning parameters.
the acquisition and the data processing each required an average time of 15 min under ideal conditions (collaborative subject). scanning with the ipad is the fastest technique. accurate colour information (texture) can be obtained from the two rear cameras, whose images are managed by proprietary algorithms; the output meshes, however, are of low quality, due to the limited number of triangles used for the surface discretisation.
for both the adult and the child scanning analyses, the procedure consists of four main steps:
1) scanning: trunk acquisition required the scanners to be rotated around the subject. adhesive circular reference targets with a diameter of 10 mm were used in order to facilitate the alignment and matching between scans in the post-processing phase (figure 2 b); these targets were positioned over the trunk considering that, as is well known, at least three tie-points must be present in two neighbouring scans to allow their alignment.
2) geometry reconstruction (post-processing): post-processing was performed with the geomagic studio software (3d systems, v. 12) [34]. sometimes data acquisition results in more than one point cloud; in that case the point clouds have to be registered and then merged in order to obtain one single cloud. a cleaning phase follows, in which spurious points are eliminated; such points are generated by environmental noise, by subject motion, or by the camera resolution being close to the size of the geometric details. a triangulated mesh is then generated and smoothed to obtain a more regular geometry; the smoothing phase must be performed carefully in order to avoid losing relevant information. finally, the mesh is edited to remove double vertices, discontinuities of the face normals, holes and internal faces, so obtaining a manifold geometry. at the end of the editing, the mesh is optimised to reduce the number of triangles.
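the cleaning phase of step 2 can be illustrated with a common statistical outlier-removal rule; this is a generic sketch, not the algorithm actually used by geomagic studio: points whose mean distance to their nearest neighbours is anomalously large are treated as spurious and dropped.

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, n_std: float = 2.0) -> np.ndarray:
    """drop points whose mean distance to their k nearest neighbours is more
    than n_std standard deviations above the cloud-wide average.
    brute-force o(n^2), fine for small clouds; illustrative only."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)          # pairwise distance matrix
    knn = np.sort(dists, axis=1)[:, 1:k + 1]      # skip the zero self-distance
    mean_knn = knn.mean(axis=1)
    keep = mean_knn <= mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]

# a dense cluster plus one far-away spurious point:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 1, (200, 3)), [[50.0, 50.0, 50.0]]])
cleaned = remove_outliers(cloud, k=8)
```

production pipelines use spatial indices (kd-trees) instead of the dense distance matrix, but the acceptance rule is the same.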
3) comparison among measurement techniques: first of all, the scanners' performances were evaluated in terms of the time required to obtain the final geometry. the geometries were then compared through a fully automated operation performed by dedicated software (geomagic studio). it should be remembered that meshes coming from different scans are not iso-topological [25], which makes this operation more critical, in addition to the 288 389, 431 000 and 158 000 triangles to be processed for the male, female and child torso, respectively. in more detail, for the adult scans, the reference geometry obtained from the go!scan 50 was compared to the output geometries from ph and from lidar, analysing the distribution of distances both before and after mesh filtering. a software-coded mapping analysis between pairwise scans was performed: the results of this analysis are the standard deviation of the statistical distribution of the shortest distance between two scans, along with the mean value of this distance. this is a signed analysis; for this reason, in the following, positive and negative values of the mean distance will be provided, representing deviations towards the outer or the inner scanned volume, respectively (figure 3).
for the child torso (figure 4), this deviation analysis was performed twice. in the first instance, the lidar scans were compared, analysing the deviation distribution at different threshold levels (10, 20, 80, 120 and 180 mm), where the threshold parameter represents the distance value (in mm) beyond which mesh points are considered outliers. this analysis was performed because three lidar scans were obtained: a full body scan (longer acquisition time) and two partial body scans (shorter acquisition time).
figure 2. data obtained by acquisition with the three instruments; a) body reconstruction of a female and a male (red and blue); b) marker details.
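the signed mapping analysis of step 3 can be sketched as follows; the helper function and the sample values are illustrative (they are not the paper's data), but the logic mirrors the described procedure: distances beyond the threshold are counted as outliers, and the mean deviation is reported separately for the outer (+) and inner (-) sides.

```python
import numpy as np

def deviation_stats(signed_dists: np.ndarray, threshold_mm: float):
    """summarise a signed point-to-reference distance distribution.
    points farther than the threshold (on either side) are counted as
    outliers; mean deviations are reported separately for the outer (+)
    and inner (-) sides, as in the pairwise scan comparisons."""
    inliers = signed_dists[np.abs(signed_dists) <= threshold_mm]
    outlier_pct = 100.0 * (signed_dists.size - inliers.size) / signed_dists.size
    mean_pos = inliers[inliers > 0].mean() if np.any(inliers > 0) else 0.0
    mean_neg = inliers[inliers < 0].mean() if np.any(inliers < 0) else 0.0
    return mean_pos, mean_neg, inliers.std(), outlier_pct

# toy distances in mm; the last two points exceed a 20 mm threshold:
d = np.array([1.5, -2.0, 3.0, -1.0, 25.0, -30.0])
stats = deviation_stats(d, threshold_mm=20.0)
```

the tuple returned here corresponds to the quantities tabulated later in the paper: mean +, mean -, standard deviation, and the percentage of distant points.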
over the acquisition time of the two partial scans, the subject's torso could reasonably be considered still, while the full body scan, due to its longer acquisition time, was more biased by macro-movements. the multimodal approach used for the child torso consists in reconstructing the 3d geometry from the two partial sl scans after their alignment with the lidar scan, which was used as the reference for the global alignment, since lidar was the only technique that allowed obtaining a full body scan. the deviation analysis among the lidar models allowed quantifying the full scan's macro-movements and the corresponding reconstruction uncertainty of the final torso 3d model when this scan is used as the reference for positioning the go!scan 50 model.
4) measurement: prior to scanning the subjects' trunks, some main measurements were taken with a seamstress meter in order to have a reference when checking the scale of the scanned geometry.

5. results
5.1. adult subjects scan
the scans from the sl scanner are the most accurate and are certified; therefore, as mentioned above, they were used as the reference. with ph it was possible to reconstruct only a portion of the surface for the female subject, while the meshes for the male test lacked detail: 94 % of the points were too far from the sl points to be used for the calculation of the geometric deviation (figure 3). with reference to the female subject, 57 % of the points were too far from the sl model. this is due to subject movement and to the colour and reflectivity of the clothes. the best-matching points were located on the back of the torso. the lidar scans were more complete: only 17 % of the points had to be discarded for the female subject and 38 % for the male one (table 1). in terms of the number of triangles, which is closely related to the geometric accuracy, the sl scan gave a total of 350 614 triangles, ph resulted in 14 430 triangles, and lidar provided 37 792 triangles for the male subject and 46 811 triangles for the female subject.
two reference points were tracked through dedicated markers (figure 2 b). the distance between them was 100 mm for the male subject and 90 mm for the female subject. for the male subject, this distance was evaluated as 98.6 mm with sl (1.49 % uncertainty) and 109 mm with lidar (9 % uncertainty). with reference to the female subject, the distance was evaluated as 89.9 mm with sl (1.11 % uncertainty) and 104 mm with lidar (15.5 % uncertainty).
5.2. young boy's scans
the following information was obtained:
a) 1 ph scan, with partial coverage of the subject's trunk, obtained in 18 s with 110 244 triangles (referred to as 'ph' in the following);
b) 3 lidar scans: one full-body scan (biased by the movement of the subject) with 8 420 triangles and two partial scans of the left side (4 310 triangles) and right side (4 303 triangles), minimally affected by the child's movements. these scans required 4 s and 10 s for the left and the right side, and 20 s for the full trunk.
figure 3. example of distance distributions between the outputs obtained with the sl and lidar instruments: male torso.
figure 4. a) young child torso and details of the scan outputs given by b) ph, c) lidar and d) sl techniques.
table 1. comparison of distance distributions between lidar and sl scans, for the adult case (pairwise comparisons; sl used as reference scan).
  comparison                | reference | max (mm) | mean +/- (mm) | std. dev. (mm) | distant points (%)
  sl_lidar (female subject) | sl        | 20       | 7.61/5.42     | 8.63           | 17
  sl_ph (female subject)    | sl        | 20       | 12.52/11.06   | 13.14          | 57
  sl_lidar (male subject)   | sl        | 20       | 6.00/7.00     | 9.00           | 38
  sl_ph (male subject)      | sl        | 20       | 15.00/14.00   | 16.00          | 94
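the scale check against the tape-measured marker distances amounts to a relative deviation; a minimal sketch using the lidar values of section 5.1 (the helper name is ours):

```python
def relative_deviation_pct(measured_mm: float, reference_mm: float) -> float:
    """percentage deviation of a distance measured on the reconstructed
    mesh from its tape-measured reference value."""
    return 100.0 * abs(measured_mm - reference_mm) / reference_mm

# lidar marker distances against the tape references of section 5.1:
male_dev = relative_deviation_pct(109.0, 100.0)    # 9 %
female_dev = relative_deviation_pct(104.0, 90.0)   # about 15.6 %
```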
in the following, these scans are referred to as lidar '1', '2' and '3' (left, right and full-body, respectively), according to their position in the scanning sequence (figure 5);
c) 2 partial sl scans from the go!scan 50: these are much more accurate (47 294 triangles) and required about 5 min for the back side and 4 min for the front side.
ph failed to reconstruct the trunk because the legs were the only still part of the child's body (figure 4 b).
5.2.1. analysis of lidar results
figure 6 shows a detail of the lidar scan alignment. the three scans were compared through three pairwise combinations, varying the threshold distance: the distribution of distances between two scans was obtained on the limited set of points whose distance lay below the given threshold value. this threshold was varied in order to assess its influence on the final results (figure 7). according to figure 8 a, a threshold of 10 mm or 20 mm has to be chosen in order to keep the standard deviation below 60 mm; however, a 10 mm threshold would produce too high a percentage of outliers, as shown in figure 8 b. therefore, a threshold value of 20 mm was chosen: it represents the trade-off between the standard deviation and the percentage of retained points.
with the 20 mm threshold chosen as the reference, the mean values and the standard deviations of the distances among the lidar scans were analysed. these values, reported in table 2, show that the minimum deviations are associated with the lidar 2 (the fastest scan) versus lidar 3 comparison and with the lidar 3 (the only available full body scan) versus lidar 1 comparison (bold values in table 2). for this reason, lidar 3 was chosen as the reference for the subsequent alignment procedure in the multimodal scans.
5.2.2. lidar versus structured light
the scans from sl have been considered as the reference, since the respective scanner is certified and this technique is known to be the most accurate [33].
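the threshold selection of section 5.2.1 is a trade-off search: the smallest threshold that keeps the standard deviation acceptable without discarding too many points. a hypothetical sketch, with curve values made up to mimic the trends of figure 8 rather than taken from the data:

```python
def choose_threshold(candidates, std_of, outlier_pct_of,
                     max_std_mm=60.0, max_outlier_pct=50.0):
    """return the smallest candidate threshold keeping both the standard
    deviation and the outlier percentage within the given limits.
    std_of / outlier_pct_of map a threshold (mm) to the corresponding
    statistic, e.g. read off curves like those of figure 8."""
    for t in sorted(candidates):
        if std_of(t) <= max_std_mm and outlier_pct_of(t) <= max_outlier_pct:
            return t
    return None

# made-up curves mimicking figure 8: the standard deviation grows and the
# outlier percentage drops as the threshold increases.
std_curve = {10: 40.0, 20: 55.0, 80: 90.0, 120: 120.0, 180: 150.0}
out_curve = {10: 70.0, 20: 35.0, 80: 20.0, 120: 15.0, 180: 10.0}
best = choose_threshold([10, 20, 80, 120, 180], std_curve.get, out_curve.get)
```

with these illustrative curves the search lands on 20 mm, matching the trade-off argument in the text: 10 mm passes the deviation limit but rejects too many points.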
an optimised geometric alignment was performed by the geomagic studio software, which is based on iterative closest point algorithms. figure 9 shows the displacement between the scans after alignment. the sections are evaluated with a level-curve measurement tool that returns the circumferences of the trunk. three combinations were studied: the three lidar scans were compared to both sl scans (figure 10). the maximum standard deviation was 6.93 mm, with mean values of +6.32 mm and -6.36 mm (where positive and negative values represent deviations towards the outer or the inner scanned volume, respectively), obtained from the sl versus lidar 3 combination, corresponding to the overlap between the full lidar scan and the sl scans (reference). on the other hand, the minimum standard deviation was given by overlapping the fastest lidar scan (lidar 2) and both partial sl scans (references): the standard deviation in this case is 6.65 mm, with mean values of +5.71 mm and -5.49 mm (table 3).
figure 5. lidar acquisitions: a) right side scan, referred to as 'lidar 1'; b) left side scan without movement, referred to as 'lidar 2'; c) total body scan with movement, referred to as 'lidar 3'.
figure 6. example of lidar scan alignment: a) point selection for alignment; b) alignment; c) top view of the alignment.
figure 7. example of a pairwise comparison between lidar 1 and another lidar scan, with a threshold value of 20 mm. green indicates areas with below-threshold distances, red areas lie above the reference surface and blue ones below it; grey areas are out of range (outliers).
figure 8. trend of a) the standard deviation and b) the percentage of outliers versus the threshold value, for the different pairwise comparisons.
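the iterative-closest-point alignment mentioned above can be condensed into a single illustrative step (nearest-neighbour pairing followed by a kabsch rigid fit); this is a generic sketch, not the geomagic studio implementation:

```python
import numpy as np

def icp_step(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """one icp iteration: pair each source point with its nearest destination
    point, then fit the rigid rotation/translation (kabsch algorithm) that
    best aligns the pairs, and return the transformed source cloud."""
    # nearest-neighbour correspondences (brute force)
    nn = dst[np.argmin(np.linalg.norm(src[:, None] - dst[None, :], axis=2), axis=1)]
    src_c, nn_c = src - src.mean(0), nn - nn.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ nn_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = nn.mean(0) - r @ src.mean(0)
    return (r @ src.T).T + t

# a 3x3x3 grid and a translated copy: correspondences are unambiguous here,
# so a single step recovers the translation.
grid = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)], float)
shift = np.array([0.3, -0.2, 0.1])
aligned = icp_step(grid, grid + shift)
```

real icp iterates this step until the residual stops decreasing, and scan-registration software adds outlier rejection so that points belonging to only one scan do not bias the fit.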
as noted, the full body lidar scan (lidar 3) has values close to both sl scans, and it is the best suited to replicate the actual back shape and to be used as the reference for the alignment of the sl scans.
5.2.3. multimodal procedure
the full body scan obtained from lidar was used as the reference for positioning both sl scans (chest and back), while ph provided an incomplete result that could not be merged into a full trunk scan. looking at the lidar results, the traces of movement can be seen in the scan textures (figure 11). the two sl scans were overlapped on the full 3d lidar scan and, in the next step, a topological optimisation of the trunk was performed with 3-matic (materialise, v. 12) [35], a software package used for clinical applications (figure 12). finally, a comparison between the actual trunk measurements (circumferences at chest and waist level) and the corresponding measurements taken on the reconstructed geometry was performed, resulting in a difference of 5.9 mm (1.25 %) at the waist level and an uncertainty of 8.2 mm (1.64 %) at the chest level (figure 13). the plaster mould accuracy considered acceptable for medical applications is above 15 mm [9], [10]; the uncertainty of the reconstruction for this multimodal non-contact measurement methodology is within this limit, the maximum uncertainty being 8.2 mm.
table 2. comparison of distance distributions between lidar scans, for the child case (pairwise comparisons at a 20 mm threshold).
  comparison           | reference | max (mm) | mean +/- (mm) | std. dev. (mm) | distant points (%)
  lidar 1 (young boy)  | lidar 2   | 20       | 7.60/9.54     | 10.00          | 57
  lidar 2 (young boy)  | lidar 3   | 20       | 8.27/7.52     | 9.16           | 44
  lidar 3 (young boy)  | lidar 1   | 20       | 6.05/5.97     | 7.90           | 13
table 3. comparisons between structured light and lidar scans for the child (sl used as reference scan).
  comparison  | reference | max (mm) | mean +/- (mm) | std. dev. (mm) | distant points (%)
  sl_lidar 1  | sl        | 20       | 6.73/5.83     | 6.68           | 55
  sl_lidar 2  | sl        | 20       | 5.71/5.49     | 6.65           | 75
  sl_lidar 3  | sl        | 20       | 6.32/6.93     | 6.93           | 62
figure 9. a) level curves (blue) for the distance evaluation between lidar and sl scans; b) example of the lidar 3 to sl scan alignment.
figure 10. example of the distance distribution between the sl and lidar 3 scans.
figure 11. detail of the movement traces in the lidar textures. upper row: side view; lower row: back view. column a): lidar 1; column b): lidar 2; column c): lidar 3.
figure 12. reconstruction of the torso in 3-matic (materialise), using the full lidar scan as the reference.

6. discussion
all the instruments (photogrammetry, structured light scanner and lidar) proved able to capture the trunk geometry of a still patient. when the results from all three instruments were compared to those from the traditional technique based on plaster moulding, they proved to be more accurate, with the additional advantage of producing an editable digital model; the structured light scanner produced the most accurate results. when a non-collaborative patient is considered, new specifications must be taken into account, such as the time required to scan the whole geometry and the robustness of the reconstruction algorithms. as a result, the lidar technique proved to be the only one able to provide a full scan, thanks to its lowest acquisition time. however, its accuracy was rather low, and lidar could not be used alone; it could, though, be used as the reference for the registration of the structured light scans, thereby removing the major source of noise in sl, that is, the movement of a non-collaborative patient. from this, it can be pointed out that a multimodal methodology was needed to overcome the limited accuracy of lidar while recovering information from the partial sl scans. the whole methodology was set up and tested with encouraging results: the final outcome has an acceptable accuracy (8.2 mm), in a scenario where the only alternative would be taking a limited number of measurements on the non-collaborative patient's body.
compared to plaster moulding, the accuracy is greatly improved (8.2 mm against 15 mm), and the bias given by the compressibility of the dermal tissue [36], [37] is totally absent. once the scans were cleaned, simplified and merged, the standard triangulation language (stl) model was exported and 3d printed, to evaluate the viability of this workflow for producing a customised brace. finally, the brace was manufactured with the traditional method on the 3d printed volume, without any contact with the subject (figure 14), after having been virtually tested through mock-up techniques [38].

7. conclusions
in this work a multimodal scanning approach was proposed. the uncertainty given by movement was analysed and compensated. a full procedure for the reconstruction of the 3d external shape was developed by integrating different 3d measurement techniques. the shape of the torso of a child was finally measured, 3d printed and used for the creation of a patient-specific brace. future developments will focus on combining fast and low-cost techniques and algorithms with low-cost measurement systems for orthopaedic applications, in order to improve the measurement technique without the need for high-performance tools.

acknowledgement
this research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.

references
[1] d. f. redaelli, v. abbate, f. a. storm, a. ronca, a. sorrentino, c. de capitani, e. biffi, l. ambrosio, g. colombo, p. fraschini, 3d printing orthopedic scoliosis braces: a test comparing fdm with thermoforming, int. j. adv. manuf. technol. 111(5-6) (2020), pp. 1707-1720. doi: 10.1007/s00170-020-06181-1
[2] m. calì, g. pascoletti, a. aldieri, m. terzini, g. catapano, e. m. zanetti, feature-based modelling of laryngoscope blades for customized applications, advances on mechanics, design engineering and manufacturing 3 (2021), pp. 206-211. doi: 10.1007/978-3-030-70566-4_33
[3] m. calì, g. pascoletti, m. gaeta, g.
milazzo, r. ambu, a new generation of bio-composite thermoplastic filaments for a more sustainable design of parts manufactured by fdm, appl. sci. 10(17) (2020), pp. 1-13. doi: 10.3390/app10175852
[4] s. grazioso, m. selvaggio, g. di gironimo, design and development of a novel body scanning system for healthcare applications, int. j. interact. des. manuf. 12(2) (2018), pp. 611-620. doi: 10.1007/s12008-017-0425-9
[5] f. remondino, 3-d reconstruction of static human body shape from image sequence, comput. vis. image underst. 93(1) (2004), pp. 65-85. doi: 10.1016/j.cviu.2003.08.006
[6] j. tong, j. zhou, l. liu, z. pan, h. yan, scanning 3d full human bodies using kinects, ieee trans. vis. comput. graph. 18(4) (2012), pp. 643-650. doi: 10.1109/tvcg.2012.56
[7] n. tokkari, r. m. verdaasdonk, n. liberton, j. wolff, m. den heijer, a. van der veen, j. h. klaessens, comparison and use of 3d scanners to improve the quantification of medical images (surface structures and volumes) during follow up of clinical (surgical) procedures, adv. biomed. clin. diagnostic surg. guid. syst. xv 10054 (2017), p. 100540z. doi: 10.1117/12.2253241
figure 13. reconstruction of the torso and measurement of the circumferences at a) chest and b) waist level; c) intersection between a horizontal plane and the model; d) level curves to be measured.
figure 14. a) 3d printed trunk; b) plaster mould built on the printed model; c) plaster realisation; d) final model.
[8] p. andrés-cano, j. a. calvo-haro, f. fillat-gomà, i. andrés-cano, r. perez-mañanes, role of the orthopaedic surgeon in 3d printing: current applications and legal issues for a personalized medicine, rev. esp. cir.
ortop. traumatol. 65(2) (2021), pp. 138-151. doi: 10.1016/j.recot.2020.06.014
[9] m. farhan, j. z. wang, p. bray, j. burns, t. l. cheng, comparison of 3d scanning versus traditional methods of capturing foot and ankle morphology for the fabrication of orthoses: a systematic review, j. foot ankle res. 14(1) (2021), pp. 1-11. doi: 10.1186/s13047-020-00442-8
[10] w. clifton, m. pichelmann, a. vlasak, a. damon, k. refaey, e. nottmeier, investigation and feasibility of combined 3d printed thermoplastic filament and polymeric foam to simulate the corticocancellous interface of human vertebrae, sci. rep. 10(1) (2020), pp. 1-9. doi: 10.1038/s41598-020-59993-2
[11] j. c. rodríguez-quiñonez, o. yu. sergiyenko, l. c. basaca-preciado, v. v. tyrsa, a. g. gurko, m. a. podrygalo, m. rivas lopez, d. hernandez-balbuena, optical monitoring of scoliosis by 3d medical laser scanner, opt. lasers eng. 54 (2014), pp. 175-186. doi: 10.1016/j.optlaseng.2013.07.026
[12] d. g. chaudhary, r. d. gore, b. w. gawali, inspection of 3d modeling techniques for digitization, int. j. comput. sci. inf. secur. (ijcsis) 16(2) (2018), pp. 8-20.
[13] p. dondi, l. lombardi, m. malagodi, m. licchelli, 3d modelling and measurements of historical violins, acta imeko 6(3) (2017), pp. 29-34. doi: 10.21014/acta_imeko.v6i3.455
[14] c. boehnen, p. flynn, accuracy of 3d scanning technologies in a face scanning scenario, proc. int. conf. 3-d digit. imaging model. (3dim), 2005, pp. 310-317. doi: 10.1109/3dim.2005.13
[15] p. treleaven, j. wells, 3d body scanning and healthcare applications, computer 40(7) (2007), pp. 28-34. doi: 10.1109/mc.2007.225
[16] a. ballester, e. parrilla, a. piérola, j. uriel, c. perez, p. piqueras, b. nácher, j. vivas, s. alemany, data-driven three-dimensional reconstruction of human bodies using a mobile phone app, int. j. digit. hum. 1(4) (2016), p. 361. doi: 10.1504/ijdh.2016.10005376
[17] m. pesce, l. m. galantucci, g. percoco, f.
lavecchia, a low-cost multi camera 3d scanning system for quality measurement of non-static subjects, procedia cirp 28 (2015), pp. 88-93. doi: 10.1016/j.procir.2015.04.015
[18] j. conkle, k. keirsey, a. hughes, j. breiman, u. ramakrishnan, p. s. suchdev, r. martorell, a collaborative, mixed-methods evaluation of a low-cost, handheld 3d imaging system for child anthropometry, matern. child nutr. 15(2) (2019), p. e12686. doi: 10.1111/mcn.12686
[19] i. molnár, l. morovič, design and manufacture of orthopedic corset using 3d digitization and additive manufacturing, iop conf. ser. mater. sci. eng. 448(1) (2018), pp. 1-7. doi: 10.1088/1757-899x/448/1/012058
[20] f. remondino, a. roditakis, 3d reconstruction of human skeleton from single images or monocular video sequences, lect. notes comput. sci. 2781 (2003), pp. 100-107. doi: 10.1007/978-3-540-45243-0_14
[21] j. a. beraldin, basic theory on surface measurement uncertainty of 3d imaging systems, three-dimensional imaging metrol. 7239 (2009), p. 723902. doi: 10.1117/12.804700
[22] v. rudat, p. schraube, d. oetzel, d. zierhut, m. flentje, m. wannenmacher, combined error of patient positioning variability and prostate motion uncertainty in 3d conformal radiotherapy of localized prostate cancer, int. j. radiat. oncol. biol. phys. 35(5) (1996), pp. 1027-1034. doi: 10.1016/0360-3016(96)00204-0
[23] j. a. torres-martínez, m. seddaiu, p. rodríguez-gonzálvez, d. hernández-lópez, d. gonzález-aguilera, a multi-data source and multi-sensor approach for the 3d reconstruction and web visualization of a complex archaeological site: the case study of 'tolmo de minateda', remote sens. 8(7) (2016), pp. 1-25. doi: 10.3390/rs8070550
[24] l. barazzetti, l. binda, m. scaioni, p. taranto, photogrammetric survey of complex geometries with low-cost software: application to the 'g1' temple in myson, vietnam, j. cult. herit. 12(3) (2011), pp. 253-262.
doi: 10.1016/j.culher.2010.12.004
[25] m. vogt, a. rips, c. emmelmann, comparison of ipad pro®'s lidar and truedepth capabilities with an industrial 3d scanning solution, technologies 9(2) (2021), art. 25, pp. 1-13. doi: 10.3390/technologies9020025
[26] i. xhimitiku, g. rossi, l. baldoni, r. marsili, m. coricelli, critical analysis of instruments and measurement techniques of the shape of trees: terrestrial laser scanner and structured light scanner, 2019 ieee international workshop on metrology for agriculture and forestry (metroagrifor) proceedings, oct. 2019, pp. 339-343. doi: 10.1109/metroagrifor.2019.8909215
[27] m. lo brutto, g. dardanelli, vision metrology and structure from motion for archaeological heritage 3d reconstruction: a case study of various roman mosaics, acta imeko 6(3) (2017), pp. 35-44. doi: 10.21014/acta_imeko.v6i3.458
[28] c. buzi, i. micarelli, a. profico, j. conti, r. grassetti, w. cristiano, f. di vincenzo, m. a. tafuri, g. manzi, measuring the shape: performance evaluation of a photogrammetry improvement applied to the neanderthal skull saccopastore 1, acta imeko 7(3) (2018), pp. 79-85. doi: 10.21014/acta_imeko.v7i3.597
[29] s. logozzo, a. kilpelä, a. mäkynen, e. m. zanetti, g. franceschini, recent advances in dental optics part ii: experimental tests for a new intraoral scanner, opt. lasers eng. 54 (2014), pp. 187-196. doi: 10.1016/j.optlaseng.2013.07.024
[30] d. marchisotti, p. marzaroli, r. sala, m. sculati, h. giberti, m. tarabini, automatic measurement of hand dimensions using consumer 3d cameras, acta imeko 9(2) (2020), pp. 75-82. doi: 10.21014/acta_imeko.v9i2.706
[31] piattaforma software 3d completamente integrata (fully integrated 3d software platform), 2020 [in italian].
[32] l. ma, t. xu, j. lin, validation of a three-dimensional facial scanning system based on structured light techniques, comput. methods programs biomed. 94(3) (2009), pp. 290-298. doi: 10.1016/j.cmpb.2009.01.010
[33] a.
cuartero, study of uncertainty and repeatability in structured-light 3d scanners, no. 2.
[34] 3d systems, presentation of the geomagic wrap 3d scanning software, 2021. online [accessed 26 april 2022] https://de.3dsystems.com/software/geomagic-wrap
[35] materialise, 3-matic, version 14.0 – reference guide, april 2019. online [accessed 26 april 2022] https://help.materialise.com/131470-3-matic/3-matic-140-user-manual
[36] m. terzini, c. bignardi, c. castagnoli, i. cambieri, e. m. zanetti, a. l. audenino, ex vivo dermis mechanical behavior in relation to decellularization treatment length, open biomed. eng. j. 10 (2016), pp. 34-42. doi: 10.2174/1874120701610010034
[37] m. terzini, c. bignardi, c. castagnoli, i. cambieri, e. m. zanetti, a. l. audenino, dermis mechanical behaviour after different cell removal treatments, med. eng. phys. 38(9) (2016), pp. 862-869. doi: 10.1016/j.medengphy.2016.02.012
[38] e. m. zanetti, c. bignardi, mock-up in hip arthroplasty preoperative planning, acta bioeng. biomech. 15(3) (2013), pp. 123-128.
doi: 10.5277/abb130315 https://doi.org/10.1016/j.recot.2020.06.014 https://doi.org/10.1186/s13047-020-00442-8 https://doi.org/10.1038/s41598-020-59993-2 https://doi.org/10.1016/j.optlaseng.2013.07.026 https://doi.org/10.21014/acta_imeko.v6i3.455 https://doi.org/10.1109/3dim.2005.13 https://doi.org/10.1109/mc.2007.225 https://doi.org/10.1504/ijdh.2016.10005376 https://doi.org/10.1016/j.procir.2015.04.015 https://doi.org/10.1111/mcn.12686 https://doi.org/10.1088/1757-899x/448/1/012058 https://doi.org/10.1007/978-3-540-45243-0_14 https://doi.org/10.1117/12.804700 https://doi.org/10.1016/0360-3016(96)00204-0 https://doi.org/10.3390/rs8070550 https://doi.org/10.1016/j.culher.2010.12.004 https://doi.org/10.3390/technologies9020025 https://doi.org/10.1109/metroagrifor.2019.8909215 https://doi.org/10.21014/acta_imeko.v6i3.458 https://doi.org/10.21014/acta_imeko.v7i3.597 https://doi.org/10.1016/j.optlaseng.2013.07.024 https://doi.org/10.21014/acta_imeko.v9i2.706 https://doi.org/10.1016/j.cmpb.2009.01.010 https://de.3dsystems.com/software/geomagic-wrap https://help.materialise.com/131470-3-matic/3-matic-140-user-manual https://help.materialise.com/131470-3-matic/3-matic-140-user-manual https://doi.org/10.2174/1874120701610010034 https://doi.org/10.1016/j.medengphy.2016.02.012 https://doi.org/10.5277/abb130315 an innovative correction method of wind speed for efficiency evaluation of wind turbines acta imeko issn: 2221-870x june 2021, volume 10, number 2, 46 53 acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 46 an innovative correction method of wind speed for efficiency evaluation of wind turbines alessio carullo1, alessandro ciocia2, gabriele malgaroli2, filippo spertino2 1 dipartimento di elettronica e telecomunicazioni, politecnico di torino, corso duca degli abruzzi 24, 10129 turin, italy 2 dipartimento energia, politecnico di torino, corso duca degli abruzzi 24, 10129 turin, italy section: research paper keywords: renewable energy sources; wind energy 
systems; wind speed; power measurement; uncertainty
citation: alessio carullo, alessandro ciocia, gabriele malgaroli, filippo spertino, an innovative correction method of wind speed for efficiency evaluation of wind turbines, acta imeko, vol. 10, no. 2, article 8, june 2021, identifier: imeko-acta-10 (2021)-02-08
section editor: ciro spataro, university of palermo, italy
received january 15, 2021; in final form april 28, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: alessio carullo, e-mail: alessio.carullo@polito.it

abstract
the performance of horizontal axis wind turbines (wts) is strongly affected by the wind speed entering their rotor. generally, this quantity is not available, because the wind speed is measured on the nacelle behind the turbine rotor, which provides a lower value. two correction methods are therefore usually employed, requiring two input quantities: the wind speed on the back of the turbine nacelle and the wind speed measured by a meteorological mast close to the turbines under analysis. however, such a station is rarely present in wind farms, while the number of wts in a farm is usually high. this paper proposes an innovative correction, named the "statistical method" (sm), that evaluates the efficiency of wts by estimating the wind speed entering the wt rotor. this method relies only on the manufacturer power curve and the data measured by the wt anemometer, so it can also be employed in wind farms without a meteorological station. the effectiveness of the method is discussed by comparing its results with those of the standard methods on two turbines (rated power = 1.5 mw and 2.5 mw) of a wind power plant (nominal power = 80 mw) in southern italy.

1. introduction
the increasing energy demand and the requirement of minimal environmental impact have pushed towards a huge increase of renewable energy sources (res). a drawback of these sources is their intermittency, which can be mitigated by the integration of storage units, e.g. electrochemical batteries [1]-[3]. among the res, wind turbines (wts) represent a reliable and clean source of electricity with low marginal costs [3]. new wind power plants were installed in europe in 2020, with a cumulative rated power of about 7 gw, and an increase of about 10 gw is expected in 2021, thus reaching a cumulative capacity of about 250 gw [4]. in this framework, offshore applications will represent about 20 % of new installations in the period 2020-2023, especially in the netherlands, ireland, norway and france. wts can work at fixed or variable speed; the latter are able to adjust the rotor speed, thus following the maximum aerodynamic power of the wind [5]. on the other hand, their control requires the wind speed to be measured by an anemometer, thus increasing the overall cost and size of the system.
the anemometer is usually located on the back of the turbine, where a wind speed lower than the one entering the rotor is measured. for this reason, using the measurement of this anemometer leads wts to exhibit experimental performance that seems better than their nameplate specification, since the manufacturer states the power curve with reference to the wind speed at the entrance of the rotor. in addition, manufacturer-stated performance refers to ideal conditions of minimum turbulence, flat terrain and absence of wakes due to obstacles [6]. a reliable estimation of wt performance therefore requires the measured wind speed to be corrected and, for this reason, two different correction methods have been defined in technical specifications and international standards. the first method does not take into account the effects of the wakes of other turbines and obstacles, while the second method filters the considered wind directions in order to remove these wake effects. a wake is a sort of loading effect, as occurs in electric circuits: wakes are long trails of wind in a turbulent regime, whose speed is lower than that at the entrance of the turbine rotor. this effect needs to be minimized because, if the wind flow entering the wt rotor is affected by the wake of another turbine, its speed is lower than in wake-free conditions, i.e. energy production is reduced.
a common requirement of these methods is the availability of two quantities: the wind speed vwt measured by the anemometer and the wind speed vstat detected by a meteorological mast, which has to be close to the turbine under investigation. unfortunately, a meteorological mast is not always present in wind power plants, thus preventing the implementation of these correction methods. to overcome this limitation, the present work proposes an alternative method that relies on the manufacturer power curve and the wind speed detected by the turbine anemometer, thus not requiring the measurements provided by a meteorological mast.
in section 2, a review of the two standard methods is presented and the new correction method is described. section 3 defines the concept of yearly average efficiency (which takes into account the energy generated by a wt) and describes the availability and capacity factors. in section 4, a case study related to a wind power plant in southern italy is presented. section 5 reports the results obtained for two different wts that are located in two different areas of the considered power plant.
finally, section 6 summarizes the main outcomes of this work.

2. correction methods
an expansion of the stream tube occurs before and after the passage through the three-blade rotor: the cross section of the wind flow increases, while its kinetic energy (and thus its speed) decreases. the presented methods aim to correct the wind speed detected by the anemometer behind the turbine rotor, calculating the corresponding speed at its entrance. before applying one of the correction methods, a preliminary normalization to the reference air density ρref = 1.225 kg/m³ is performed, since manufacturer specifications refer to this condition. in particular, for wts with active power control, experimental results are corrected according to the following expression [6]:

vcor = vexp ∙ (ρair / ρref)^(1/3) , (1)

where vcor is the corrected wind speed, vexp is the measured wind speed and ρair is the air density during the measurement.

2.1. method #1 – straight line method (slm)
the first method requires vwt and vstat as input quantities, and consists of the following steps:
1) step a – selection of the wind-speed direction. the wind direction β is properly selected in order to consider the assumption vstat ≈ ventr valid, where ventr is the wind speed that enters the turbine rotor. in particular, experimental results are filtered in order to analyse the wind contributions flowing from the station to the wt. simplifying the problem as a 2d system without the vertical coordinate, a straight line is traced between the anemometric station and the wt under test, and its orientation βwt with respect to the north direction is calculated. however, if the set of experimental data is limited, a low number of experimental points is available.
in this case, it is generally convenient to extend the analysis to wind speeds with orientations β = βwt ± δβ, where 2 δβ is the top angle of a triangle whose base is the rotor diameter d of the wt (d = 2 ∙ rd by assumption, where rd is the length of a blade, neglecting the hub radius) and whose third vertex is the meteorological mast.
2) step b – selection of data with ventr > vwt. as described in step a, the wind speeds of interest flow from the anemometric station to the wt. therefore, ventr has to be larger than vwt, because the kinetic energy of the wind decreases as it flows from the meteorological mast towards the turbine.
3) step c – removal of experimental data with turbulence larger than 10 %. in each time interval, the turbulence is the ratio between the standard deviation of the wind speed and its average value in the interval. the power curve provided by the manufacturer is measured in conditions of minimum turbulence, which is generally lower than 10 % [7].
4) step d – linear regression of experimental data. in this step, a linear equation describing vstat as a function of vwt is identified, so that ventr can be estimated from the regression line of vstat on vwt; the measurement of vwt is thus corrected thanks to the measurement of vstat. the goodness-of-fit of the linear regression to the experimental data is measured through the parameter r², which ranges from 0 (no suitable model) to 1 (best model).
during the design of a wind power plant, the position of the turbines has to be optimized in order to minimize their mutual wakes and maximize their energy production. however, due to different constraints, such as terrain and land morphology, these effects cannot always be minimized. therefore, the first method needs to be modified in order to remove possible errors due to mutual wake effects.

2.2. method #2 – no wakes method (nwm)
this method is similar to and consists of the same steps as the slm.
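as a reference point before the nwm modifications are described, steps a, c and d of the slm, together with the normalization of equation (1), can be sketched in a few lines of python. the record format and function names are our own illustration, not the authors' implementation; step b (selection of data with ventr > vwt) is left aside for brevity:

```python
RHO_REF = 1.225  # kg/m^3, reference air density of equation (1)

def normalize_wind_speed(v_exp, rho_air):
    """equation (1): v_cor = v_exp * (rho_air / rho_ref)**(1/3)."""
    return v_exp * (rho_air / RHO_REF) ** (1.0 / 3.0)

def slm_regression(records, beta_wt, delta_beta, max_turbulence=0.10):
    """steps a, c and d of the slm: keep 10-min records whose wind
    direction lies within beta_wt +/- delta_beta (step a) and whose
    turbulence does not exceed 10 % (step c), then regress v_stat on
    v_wt (step d).  each record is (v_wt, v_stat, direction_deg,
    turbulence); returns slope, intercept and r^2 of the line
    v_stat = slope * v_wt + intercept."""
    sel = [(v, s) for v, s, d, t in records
           if abs(d - beta_wt) <= delta_beta and t <= max_turbulence]
    n = len(sel)
    mx = sum(v for v, _ in sel) / n
    my = sum(s for _, s in sel) / n
    sxx = sum((v - mx) ** 2 for v, _ in sel)
    sxy = sum((v - mx) * (s - my) for v, s in sel)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((s - slope * v - intercept) ** 2 for v, s in sel)
    ss_tot = sum((s - my) ** 2 for _, s in sel)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2
```

records falling outside the direction window or above the turbulence threshold are simply discarded, mirroring the filtering described in steps a and c.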
however, since the nwm aims to avoid the mutual wakes that affect the measurements, step a is modified. indeed, the nwm does not focus the correction on the direction joining the meteorological mast and the wt; on the contrary, it investigates all the directions in which the anemometric station and the wt are not affected by the wakes of other turbines. the procedure used to determine the wind directions free from any obstacle is based on the document [6]. in particular, for each obstacle in the neighbourhood of the wt, such as other operating wts or a meteorological station, the wind-direction angles α (in degrees) that must be excluded from the analysis are calculated according to the expression:

α = 1.3 ∙ arctan(2.5 ∙ d / l + 0.15) + 10 , (2)

where d is the rotor diameter and l is the mutual distance between the obstacle and the wt under test. after the selection of the proper wind direction, it is possible to verify the validity of the results thanks to a more sophisticated analytical model, named the "jensen model" or "park model" [8]. it permits estimating the wind speed v* perturbed by the wake of a turbine using the following expression:

v* = v0 ∙ [1 − (1 − √(1 − ct)) / (1 + k ∙ x / rd)²] , (3)

where v0 is the wind speed not affected by wakes, ct is the thrust coefficient of the wt (which depends on the wind intensity), rd is the radius of the turbine rotor, and x is the downwind distance. the parameter k is the decay constant of the wake, estimated according to the following equation:

k = 0.5 / ln(h / z0) , (4)

where h is the hub height of the wt and z0 is the roughness of the ground. according to the jensen model, the wake expands linearly with x and its diffusion radius rx can be estimated as:

rx = rd + k ∙ x . (5)

it should be noted that the model considers the perturbation of the flow profile along the direction of the wind, while its perpendicular component is assumed constant (1-d model).
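equations (2) to (5) translate directly into code. a minimal sketch, with function names of our own choosing:

```python
import math

def excluded_sector(d_rotor, l_dist):
    """equation (2): wind-direction sector (in degrees) to exclude for
    an obstacle of rotor diameter d_rotor at mutual distance l_dist."""
    return 1.3 * math.degrees(math.atan(2.5 * d_rotor / l_dist + 0.15)) + 10.0

def jensen_wake_speed(v0, ct, r_d, x, h, z0):
    """equations (3) and (4): wind speed at downwind distance x inside
    the wake of a turbine with thrust coefficient ct, rotor radius r_d,
    hub height h, over ground of roughness z0."""
    k = 0.5 / math.log(h / z0)                       # decay constant, eq. (4)
    deficit = (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r_d) ** 2
    return v0 * (1.0 - deficit)                      # eq. (3)
```

with h = 80 m and z0 = 0.1 m, the values used in section 5, equation (4) gives k ≈ 0.075, which matches the value quoted by the authors.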
finally, this model assumes that k is a constant parameter that depends only on h and z0.

2.3. method #3 – statistical method (sm)
the alternative method does not require experimental data provided by a meteorological station [9]: the input quantities are the wind speed measured by the wt anemometer and the power curve provided by the manufacturer. the assumption behind this methodology is that the manufacturer power curve is the locus of the points where the generator operates with the best performance. therefore, the analytic relation between vwt and the wind speed provided by the wt manufacturer (for the same output electric power pk) is derived. figure 1 reports the scheme of the methodology, which consists of the following steps:
1) step a – removal of experimental data with turbulence larger than 10 % [7];
2) step b – selection of the experimental set sk. one of the available working points pk = p(vk) is selected on the power curve provided by the wt manufacturer. then, a set sk of experimental data is identified such that the electric output power lies in the neighbourhood of pk, i.e. in the interval between pk ∙ (1 − ε) and pk ∙ (1 + ε). in this work, the value of ε was set to 0.01, based on the consideration that output powers within a ± 1 % interval are not distinguishable due to the typical uncertainty of the equipment used to measure this quantity. the set sk is described as:

sk = {[vwt,i, p(vwt,i)] : p(vwt,i) ∈ [pk ∙ (1 − ε), pk ∙ (1 + ε)]} . (6)

3) step c – calculation of the empirical cumulative distribution function (ecdf) of the wind speed. the ecdf of the wind speed corresponding to the selected value of vk is calculated, as shown in figure 2 (blue dots), which refers to the value vk = 12 m/s.
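the gamma fit of step c and the percentile extraction of step d (both detailed below) can be sketched in pure python; the numerical inversion of the cdf is our own illustration, not the authors' implementation:

```python
import math

def fit_gamma_moments(speeds):
    """method-of-moments fit of the gamma pdf of equation (7):
    a = (mean / std)^2, b = mean / a."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / n
    a = mean ** 2 / var
    return a, mean / a

def gamma_percentile(a, b, p=0.05, v_max=60.0, steps=200000):
    """p-th quantile of the fitted gamma pdf (step d uses p = 0.05),
    obtained by trapezoidal integration of the pdf; assumes a > 1,
    which holds for the narrow wind-speed sets sk considered here."""
    pdf = lambda v: v ** (a - 1.0) / (b ** a * math.gamma(a)) * math.exp(-v / b)
    dv = v_max / steps
    cdf, prev = 0.0, 0.0          # pdf(0) = 0 for a > 1
    for i in range(1, steps + 1):
        cur = pdf(i * dv)
        cdf += 0.5 * (prev + cur) * dv
        if cdf >= p:
            return i * dv
        prev = cur
    return v_max
```

for each working point pk, `fit_gamma_moments` would be applied to the wind speeds of the set sk, and `gamma_percentile` would return the fifth percentile used in the regression of step e.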
the same figure also highlights how the calculated ecdf is well approximated by the cdf f(vwt) (red line) corresponding to the probability density function (pdf) f(vwt) of the gamma distribution, which involves the gamma function γ [10]:

f(vwt) = vwt^(a−1) / (b^a ∙ γ(a)) ∙ e^(−vwt/b) (vwt ≥ 0) , (7)

where the parameter a is estimated as the squared ratio between the mean value and the standard deviation of sk, while the parameter b is derived as the ratio between the mean value of sk and a;
4) step d – estimation of the wind-speed fifth percentile. starting from the pdf f(vwt), the fifth percentile vwt^5% of the wind speed, i.e. the value that has a 5 % probability of not being exceeded in sk, is selected. steps b to d are repeated for each available working point p(vk) on the power curve provided by the manufacturer.
5) step e – linear regression of experimental data. this step is similar to step d of the other methods, but in this case a linear equation is obtained between vwt^5% and the corresponding vk.
figure 1. scheme of the methodology for the sm.
figure 2. example of calculated ecdf for vk = 12 m/s.
one should note that when the wt reaches its nominal power, the correspondence between the wind speed and the output power is not unique: the output power is limited to the rated value and can be obtained with several values of the wind speed. this represents a limit of the proposed method, which is not applicable in the range of high wind speeds due to its strong dependence on the power curve of the wt manufacturer.

3. estimation of wt efficiency
the efficiency of a wt is the ratio between the electrical power it produces and the aerodynamic power of the wind at the entrance of the rotor. the aerodynamic power paer of the wind can be calculated as [11]:

paer = (1/2) ∙ ρair ∙ (π/4) ∙ d² ∙ ventr³ . (8)

the efficiency can also be estimated as the ratio between electrical and wind energies in a certain time interval δt. indicating the measured wt output power as pout, the efficiency can be obtained as [12]-[13]:

η = pout / paer = (pout ∙ δt) / (paer ∙ δt) = eel / eaer , (9)

where eel and eaer are the electrical and aerodynamic energies, respectively. with the aim of comparing the three correction methods, the results will be expressed in terms of the weighted yearly efficiency η*:

η* = Σyear (ηk ∙ ek) / Σyear (ek) = Σyear (ηk ∙ ek) / ey,exp , (10)

where ηk is the wt efficiency and ek is the output energy in the k-th time interval (δt = 10 min), and ey,exp is the experimental yearly energy generated by the wt. thanks to the availability of an anemometric station and to the accurate selection of the wind direction, the nwm is considered as the reference method. the other two methods will then be compared to the nwm by means of the efficiency deviation δη*, defined as:

δη* = 100 % ∙ (η* − η*nwm) / η*nwm , (11)

where η*nwm is the average efficiency estimated with the nwm. moreover, the availability factor and the capacity factor are calculated for the turbines under analysis. the first quantity provides information regarding the probability that a system is operational at a specific time: in particular, it is the ratio between the uptime of wts and their total operation time, which includes non-operation periods due to failures or maintenance actions [14]-[15]. on the other hand, the capacity factor is, for a given time interval, the ratio between the real energy produced and injected into the grid and the ideal energy that could be generated by the turbine working continuously at its rated power.
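equations (8), (10) and (11) can be sketched as follows (function names are ours):

```python
import math

def aerodynamic_power(rho_air, d_rotor, v_entr):
    """equation (8): p_aer = 1/2 * rho_air * (pi/4) * d^2 * v_entr^3,
    in watts for si inputs."""
    return 0.5 * rho_air * (math.pi / 4.0) * d_rotor ** 2 * v_entr ** 3

def weighted_yearly_efficiency(eta, energy):
    """equation (10): yearly efficiency weighted by the energies ek
    of the 10-min intervals."""
    return sum(h * e for h, e in zip(eta, energy)) / sum(energy)

def efficiency_deviation(eta_star, eta_star_nwm):
    """equation (11): percentage deviation from the nwm reference."""
    return 100.0 * (eta_star - eta_star_nwm) / eta_star_nwm
```

for instance, an 80 m rotor in wind of 10 m/s at the reference density intercepts an aerodynamic power of about 3.1 mw, which gives a feel for the denominator of equation (9).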
typical examples of centralized power stations with a very high capacity factor (about 90 % over one year) are units for base-load operation.

4. case study
the three methods previously described were applied to two wts of a wind farm in southern italy (43 wts, global nominal power of 80 mw, altitude between 1100 m and 1200 m) using data collected during a measurement campaign in 2017. the turbines of the wind farm have a hub height of 80 m and a three-bladed rotor. their operating wind-speed range is the following: cut-in speed vc-in = 3.5 m/s, cut-out speed vc-out = 25 m/s. however, the wind farm is divided into two parts, with two different models of turbines. in the first area, wts have a nominal power of 2.5 mw, a rotor diameter of 80 m and a rated wind speed of 15 m/s. in this part, a meteorological mast (height of about 80 m) is present, measuring the quantities of interest. in particular, it is equipped with:
1) a first-class cup anemometer, which acquires the horizontal component of the wind speed according to the requirements provided in [6];
2) first-class sensors that detect the wind direction according to [7];
3) pressure, humidity and temperature sensors, which measure the environmental quantities used to estimate the air density at the height of the meteorological mast and of the turbine.
the anemometer provides a resolution of 0.05 m/s and its stated uncertainty is ± 1 % of the measured value in the range (0.3 ÷ 50) m/s, with a minimum uncertainty of ± 0.2 m/s. the environmental quantities are measured with uncertainties of ± 2 °c for the temperature, ± 5 %rh for the relative humidity and ± 1 kpa for the pressure. in the second area of the wind farm, wts have a rated power of 1.5 mw, a rotor diameter of 77 m and a rated wind speed of 12 m/s. however, in this area there is no meteorological station installed; hence, the only data available are provided by the anemometer located on the back of the turbines.
all the wts of the plant are equipped with an ultrasonic anemometer that measures the absolute value and the direction of the wind speed, providing a resolution of 0.01 m/s and an uncertainty of ± 2 % of the measured value in the range (0 ÷ 60) m/s (minimum uncertainty ± 0.25 m/s). the electrical output power of the wts is measured with a relative standard uncertainty of 1 %.

5. results
5.1. results of one turbine with the meteorological station
in this subsection, the results for a wt in the first area of the wind farm (the one equipped with a meteorological mast) are presented. the electrical power measurements pout obtained at the output of the wt (average values within 10-min time intervals) are shown in figure 3 (blue points) with respect to the measured wind speed, which is corrected according to equation (1). in the same figure, which refers to results collected during a time interval of approximately one year, the manufacturer power curve (red line) is also reported. one should note that a high number of observations lie on the left of the manufacturer power curve, since no correction method has been applied to these experimental results. this behaviour is not realistic, since the experimental performance of the wt cannot be higher than the manufacturer's specifications. furthermore, the cut-in and cut-out wind speeds are about 3 m/s and 24 m/s, respectively, i.e. lower than the corresponding nominal values (vc-in = 3.5 m/s, vc-out = 25 m/s). according to the correction methods described in section 2, experimental results that show turbulence larger than 10 % are removed. furthermore, results showing null output power for wind speeds in the range (vc-in ÷ vc-out) are also removed, since they refer to failure conditions of the investigated plant.
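the stated specification of the nacelle anemometer can be expressed as a simple uncertainty model; interpreting the minimum uncertainty as a floor below which the relative term never falls is our reading of the specification:

```python
def anemometer_uncertainty(v_wt, rel=0.02, u_min=0.25):
    """uncertainty of the nacelle ultrasonic anemometer described in
    section 4: +/- 2 % of the reading, never below +/- 0.25 m/s."""
    return max(rel * v_wt, u_min)
```

at the cut-out region (25 m/s) this model gives 0.5 m/s, consistent with the maximum anemometer uncertainty quoted when the bin width is chosen below.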
before applying the described correction methods, a preliminary uncertainty estimation is performed, taking into account the instrumental uncertainty of the wattmeter and anemometer of the wt and the contribution related to the repeatability of the measured output power. as a first step, the method of bins [16] is applied: power measurements are grouped according to the corresponding wind speed measured by the wt anemometer. since its uncertainty has a maximum value of 0.5 m/s for a wind speed of 25 m/s, experimental results are grouped in uniform wind-speed bins with a width of ± 0.5 m/s. then, the mean output power is estimated for each identified group and the standard deviation of the mean is considered as the estimation of the measurement repeatability. this contribution is combined with the instrumental standard uncertainty (1 % of the measured value), thus obtaining the combined standard uncertainty u(p). the obtained results are summarized in figure 4, where the red bars refer to the manufacturer power curve, while the grey bars represent the experimental means of each group centred around integer values of wind speed. the error bars superimposed on each grey bar are the intervals pmean,i ± u(pi). even though the anomalous data points are removed, the uncorrected experimental results are not yet fully consistent with the manufacturer specifications: for wind speeds up to 6 m/s and at 21 m/s, the electrical output power is higher than the manufacturer specifications, and the cut-in and cut-out wind speeds remain the same as previously estimated. implementing the slm, the wind-speed direction considered in the correction is β = (231 ± 13)° and the linear regression (r² = 0.969) results in the following equation:

vstat = 0.971 ∙ vwt + 0.758 . (12)

the results after the correction with equation (12) are reported in figure 5.
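the method of bins and the uncertainty combination just described can be sketched as follows; the record format and names are our own illustration:

```python
import math

def method_of_bins(records, rel_instr_unc=0.01):
    """group (wind speed, power) records into +/- 0.5 m/s bins centred
    on integer wind speeds, then combine the standard deviation of the
    mean power (repeatability) with the instrumental uncertainty
    (here 1 % of the mean) into a combined standard uncertainty u(p)."""
    bins = {}
    for v, p in records:
        bins.setdefault(round(v), []).append(p)
    out = {}
    for centre, powers in sorted(bins.items()):
        n = len(powers)
        mean = sum(powers) / n
        if n > 1:
            s = math.sqrt(sum((p - mean) ** 2 for p in powers) / (n - 1))
            u_rep = s / math.sqrt(n)          # standard deviation of the mean
        else:
            u_rep = 0.0
        out[centre] = (mean, math.hypot(u_rep, rel_instr_unc * mean))
    return out
```

each entry of the returned dictionary corresponds to one grey bar of figure 4, together with the half-width of its error bar.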
for wind speeds lower than 21 m/s, the corrected output power is lower than the manufacturer curve. regarding the cut-in and cut-out wind speeds, the slm correction leads to an estimation of vc-in that is comparable to the rated specification (≈ 3.5 m/s), while vc-out remains lower (≈ 24 m/s). moreover, for wind speeds higher than 13 m/s, the manufacturer power curve reaches a saturation power of about 2.5 mw, while the experimental data reach a higher saturation power. this behaviour is realistic, being due to the pitch regulation of the wt: in fact, the turbine is allowed to work with a maximum power of about 104 % of the rated value. this performance results in a higher energy production; however, an earlier aging of the turbine due to a higher degradation of the materials may occur as well. the directions of the wind speeds considered in the slm may be affected by turbulence, mainly due to the wakes of other turbines or obstacles. the nwm permits solving this issue by identifying the wind-speed directions in which the turbine and the meteorological mast are not affected by wakes. according to the jensen model, the angular section not affected by wakes corresponds to β ranging between -26.8° and 12.7°: thus, the north direction (β = 0°) is selected for the nwm. figure 6 presents the results of the jensen model for the wind directions considered in the slm and the nwm, using k = 0.075, h = 80 m, rd = 40 m and z0 = 0.1 m. the blue and red circles represent the wt under analysis and the meteorological mast, respectively; the grey circles indicate the other wind generators, and the cones represent the areas affected by wakes. with the wind direction assumed for the slm, the wt and the station are affected by the wake of another turbine; on the other hand, with the north direction, they are wake-free. the resulting regression equation (r² = 0.979) for the nwm is the following:

vstat = 0.998 ∙ vwt + 0.550 . (13)

figure 7 reports the results after the correction.
the nwm correctly estimates the cut-in and cut-out wind speeds, as they coincide with the values provided by the manufacturer (≈ 3.5 m/s and ≈ 25 m/s, respectively).
figure 3. turbine #1 uncorrected raw experimental data (blue dots) and manufacturer power curve (red line).
figure 4. turbine #1 uncorrected experimental data after the preliminary data processing.
figure 5. turbine #1 slm corrected results (grey bars) and manufacturer power curve (red bars).
the sm does not require experimental data from a meteorological mast, thus it can also be used in wind power plants without weather stations close to the wts. however, as described in section 2, this correction cannot be applied to the last part of the power curve, where the electrical power reaches a saturation value (the nominal power). thus, the sm is applied to wind speeds in the range (4 ÷ 15) m/s, whose upper limit is the rated wind speed stated by the manufacturer. the local distribution of experimental wind speeds for the year under study (red bars in figure 8) shows that this range contains most of the experimental data: for wind speeds of ≈ 15 m/s, the corresponding cumulative function (blue curve) is higher than 0.93, i.e. more than 93 % of wind speeds are ≤ 15 m/s. in order to achieve a better accuracy, regression equations are identified for two wind-speed ranges: lower than 6 m/s, and between 6 m/s and 15 m/s. the resulting equations are the following (r² ≈ 1):

vstat = 0.779 ∙ vwt + 1.493; vwt < 6 m/s (14)
vstat = 0.923 ∙ vwt + 0.961; 6 m/s ≤ vwt < 15 m/s (15)

the results of the sm, reported in figure 9, highlight that the performance of the wt is now realistic, since it is lower than the manufacturer power curve. the weighted yearly efficiencies are calculated on a yearly basis to evaluate the effectiveness of the corrections (figure 10).
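the two-range correction of equations (14) and (15) amounts to a piecewise linear function of the nacelle wind speed; a sketch using the coefficients reported above:

```python
def sm_corrected_speed(v_wt):
    """estimate of the wind speed entering turbine #1's rotor from the
    nacelle reading, using the sm coefficients of equations (14)-(15);
    not applicable at or above the rated wind speed (15 m/s)."""
    if v_wt < 6.0:
        return 0.779 * v_wt + 1.493          # equation (14)
    if v_wt < 15.0:
        return 0.923 * v_wt + 0.961          # equation (15)
    raise ValueError("sm is not applicable at/above the rated wind speed")
```

note that both branches return a value larger than the input over the range of interest, consistent with the nacelle anemometer reading a lower speed than the one entering the rotor.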
for wind speeds higher than the rated value, the curves converge to the manufacturer data (blue curve), while at lower wind speeds the uncorrected data (red curve) have a different shape from the manufacturer curve. moreover, at low wind speeds the necessity of correcting raw data is evident: at a wind speed of ≈ 5 m/s, the raw weighted efficiency (η* ≈ 0.4) is higher than the manufacturer value. actually, uncorrected data are based on the wind speed detected on the back of the turbine rotor: this value is lower than the wind speed entering the turbine, leading to an overestimation of its efficiency. regarding the corrections, the slm (green curve) estimates the lowest efficiencies but, as previously described, its results are affected by wakes. indeed, the comparison with the nwm curve shows that the presence of wakes leads to an underestimation of the efficiency. the shape of the efficiency curve of the sm, which is based on the manufacturer data, is the closest to the shape of the manufacturer one. table 1 reports the weighted yearly efficiencies and the deviations of the slm and the sm with respect to the nwm, which is assumed as the reference. for wind speeds < 6 m/s, the two methods underestimate the efficiency, with similar deviations from the nwm of about -9.1 % (slm) and -7.9 % (sm). in the intermediate wind-speed range (6 ÷ 15) m/s, the slm underestimates the efficiency with a deviation of about -3.5 %, while the performance of the wt is overestimated by the sm with a deviation of about 5.2 %. the turbine under analysis has very high energy performance in the area of the farm with the meteorological mast. its availability and capacity factors are evaluated: the wt under study is in operation for 97.2 % of the time, with an average capacity factor of 29.3 %. among the turbines in this part of the wind farm, the performance of this wt is one of the best, the average availability and capacity factors of the plant being 96.5 % and 22.3 %, respectively.
figure 6. turbine #1 jensen model with wind directions assumed for slm (top) and nwm (bottom).
figure 7. turbine #1 nwm corrected results (grey bars) and manufacturer power curve (red bars).
figure 8. cumulative function and pdf of the wind speed distribution.
figure 9. turbine #1 sm corrected results (grey bars) and manufacturer power curve (red bars).

5.2. results of one turbine without the meteorological station
this subsection presents the results for a wt in the second area of the wind farm (i.e., without the meteorological mast). the power curve provided by the manufacturer (red line) and the experimental observations (blue points, corresponding to average pout values within 10-min time intervals) are presented in figure 11; the wind speeds are corrected according to equation (1). in figure 11, a high number of data points are shifted to the left of the manufacturer power curve. moreover, the cut-in and cut-out wind speeds (≈ 3 m/s and ≈ 20 m/s) are lower than the nominal values (vc-in = 3.5 m/s, vc-out = 25 m/s): thus, a correction of these data is required. first, failure conditions are excluded from the analysis by removing observations exhibiting turbulence higher than 10 % and null output power. as described in the previous subsection, a preliminary uncertainty estimation is performed starting from the instrumental uncertainty of the wt wattmeter and anemometer, and the repeatability of the measured output power. figure 12 shows the results of this preliminary analysis: the red bars correspond to the manufacturer power curve, while the grey bars represent the experimental means of each group centred around integer values of wind speed. the error bars superimposed on each grey bar are the intervals pmean,i ± u(pi).
The figure confirms the preliminary results of the other turbine under study: although data corresponding to abnormal operation are excluded, the electrical output power remains higher than the manufacturer specifications. However, the cut-in and cut-out wind speeds (≈ 4 m/s and ≈ 21 m/s) are closer to the values provided by the manufacturer. This WT is in the area of the wind farm without a meteorological station, thus only the SM can be applied; the data after the correction are presented in Figure 13. As described in Section 2, this method cannot be applied to the part of the power curve where the power is constant (nominal power); hence, wind speeds higher than the rated value (12 m/s) are excluded from the analysis. After applying the correction, the performance of the WT (grey bars) is realistic, being lower than the manufacturer curve (red bars). Two regression equations (R² ≈ 1) are determined for the wind-speed intervals < 6 m/s and between 6 m/s and 12 m/s:

$v_{stat} = 0.946 \cdot v_{WT} + 1.258\,; \quad v_{WT} < 6~\mathrm{m/s}$  (16)

$v_{stat} = 0.893 \cdot v_{WT} + 1.923\,; \quad 6~\mathrm{m/s} \le v_{WT} < 12~\mathrm{m/s}$  (17)

The weighted yearly efficiencies for raw data are about 33.5 % (wind speeds < 6 m/s) and 42.6 % (wind speeds between 6 m/s and 12 m/s). After the SM correction, the weighted efficiencies are about 27.1 % (< 6 m/s) and 34.3 % (6 m/s ÷ 12 m/s), with a decrease of about 19 % for both wind-speed ranges. This turbine exhibits very high energy performance in the area of the plant without the weather station. Indeed, it has the highest availability of the wind farm (99.1 %), while the average value in the second part of the plant is 97.4 %; moreover, its capacity factor is 27.9 %. This value is lower than that of the other WT under analysis, despite being higher than the average (20.1 %) in the second area of the plant.

6.
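The piecewise regressions (16) and (17) amount to a simple correction function mapping the nacelle-anemometer speed to the estimated free-stream speed. The coefficients below come directly from those equations; leaving speeds at or above 12 m/s uncorrected is a choice made here for illustration, since the rated region is excluded from the fit.

```python
# Piecewise linear wind-speed correction per equations (16) and (17).

def correct_wind_speed(v_wt):
    """Estimate free-stream wind speed (m/s) from nacelle wind speed."""
    if v_wt < 6.0:                      # equation (16)
        return 0.946 * v_wt + 1.258
    if v_wt < 12.0:                     # equation (17)
        return 0.893 * v_wt + 1.923
    return v_wt                         # rated region: excluded from the fit

corrected = [correct_wind_speed(v) for v in (5.0, 8.0, 13.0)]
```

Note that both branches shift the measured speed upward, consistent with the observation that the raw data points lie to the left of the manufacturer power curve.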
Conclusions

This work proposes an innovative method, named the "statistical method" (SM), to evaluate the average efficiency of wind turbines by correcting the wind speed at the entrance of the rotor obtained from the nacelle anemometer. In the literature, two other methods (the straight line method, SLM, and the no wakes method, NWM) are defined to perform this correction, taking into account technical specifications and international standards. These correction methods require data measured by a meteorological mast close to the turbines, but the presence of this station in wind farms is rare. Conversely, the correction proposed in this paper evaluates the wind speed entering the WT rotor relying only on the manufacturer power curve and the data measured by the WT anemometer. Hence, it can also be applied in wind farms that are not equipped with a meteorological station. In the present work, these three methods were applied to a one-year experimental campaign on a wind farm in southern Italy. In particular, two turbines located in two different areas of the same plant were analysed: only the first turbine was close to a meteorological mast.

Table 1. Weighted yearly efficiencies and deviations for the correction methods (turbine with meteorological station).

Wind speed range   η*raw    η*SLM    η*NWM    η*SM     Δη*SLM   Δη*SM
< 6 m/s            39.7 %   30.0 %   33.0 %   30.0 %   -9.1 %   -7.9 %
6 – 15 m/s         39.9 %   33.7 %   34.8 %   37.0 %   -3.5 %    5.2 %

Figure 10. Turbine #1 weighted yearly efficiencies for the proposed corrections.
Figure 11. Turbine #2 uncorrected raw experimental data (blue dots) and manufacturer power curve (red line).
Figure 12. Turbine #2 uncorrected experimental data after the preliminary data processing.
The effects of the corrections were evaluated by representing the electrical output power by means of the method of bins and setting the width of the bins according to the uncertainty of the anemometer used. An uncertainty estimation was also performed for the WT output power, taking into account the power measurement uncertainty and the repeatability within each wind-speed bin. The results of the NWM were considered as a reference. Regarding the turbine in the part of the plant including the mast, the SM performed similarly to the SLM, providing comparable absolute deviations in terms of weighted efficiencies with respect to the reference. In fact, the deviations of the SM are about ± 7 % in the total range, while the quantities corresponding to the SLM are ± 9 % with respect to the NWM. After the SM correction, the weighted yearly efficiency decreased by between about 10 % and 20 % with respect to raw data in the usual wind-speed range.

References

[1] F. Spertino, A. Ciocia, V. Cocina, P. Di Leo, Renewable sources with storage for cost-effective solutions to supply commercial loads, Proc. of 2016 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), Anacapri, Italy, 22-24 June 2016, pp. 242-247. doi: https://doi.org/10.1109/speedam.2016.7525987
[2] A. Mahesh, K. S. Sandhu, Hybrid wind/photovoltaic energy system developments: critical review and findings, Renewable and Sustainable Energy Reviews 52 (2015) pp. 1135-1147. doi: https://doi.org/10.1016/j.rser.2015.08.008
[3] Z. Zhang, Y. Zhang, Q. Huang, W. Lee, Market-oriented optimal dispatching strategy for a wind farm with a multiple stage hybrid energy storage system, CSEE Journal of Power and Energy Systems 4(4) (2018) pp. 417-424. doi: https://doi.org/10.17775/cseejpes.2018.00130
[4] Wind energy in Europe: outlook to 2023. Online [accessed 09 June 2021] https://windeurope.org/about-wind/reports/wind-energy-in-europe-outlook-to-2023/
[5] P. W. Carlin, A. S. Laxson, E. B.
Muljadi, The history and state of the art of variable-speed wind turbine technology, Wind Energy 6(2) (2003) pp. 129-159. doi: https://doi.org/10.1002/we.77
[6] CENELEC EN 61400-12-1:2017, Power performance measurement of electricity producing wind turbines.
[7] V. Cocina, P. Di Leo, M. Pastorelli, F. Spertino, Choice of the most suitable wind turbine in the installation site: a case study, Proc. of 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, Italy, 22-25 Nov. 2015, pp. 1631-1634. doi: https://doi.org/10.1109/icrera.2015.7418682
[8] A. Peña, P. E. Réthoré, M. P. van der Laan, On the application of the Jensen wake model using a turbulence-dependent wake decay coefficient: the Sexbierum case, Wind Energy 19(4) (2016) pp. 763-776. doi: https://doi.org/10.1002/we.1863
[9] F. Spertino, P. Di Leo, I. S. Ilie, G. Chicco, DFIG equivalent circuit and mismatch assessment between manufacturer and experimental power-wind speed curves, Renewable Energy 48 (2012) pp. 333-343. doi: https://doi.org/10.1016/j.renene.2012.01.002
[10] P. J. Davis, Leonhard Euler's integral: a historical profile of the gamma function, American Mathematical Monthly 66(10) (1959) pp. 849-869. doi: https://doi.org/10.1080/00029890.1959.11989422
[11] K. Grogg, Harvesting the wind: the physics of wind turbines, 2005. Online [accessed 09 June 2021] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.589.2982&rep=rep1&type=pdf
[12] M. H. El-Ahmar, A. M. El-Sayed, A. M. Hemeida, Evaluation of factors affecting wind turbine output power, Proc. of Nineteenth International Middle East Power Systems Conference (MEPCON), Cairo, Egypt, 19-21 December 2017, pp. 1471-1476. doi: https://doi.org/10.1109/mepcon.2017.8301377
[13] F. Spertino, A. Ciocia, P. Di Leo, G. Iuso, G. Malgaroli, L. Roberto, Experimental testing of a horizontal-axis wind turbine to assess its performance, Proc. of 22nd IMEKO TC4 International Symposium, Iasi, Romania, 14-15 September 2017, pp. 411-414.
Online [accessed 14 June 2021] https://www.imeko.org/publications/tc4-2017/imeko-tc4-2017-080.pdf
[14] F. Spertino, E. Chiodo, A. Ciocia, G. Malgaroli, A. Ratclif, Maintenance activity, reliability analysis and related energy losses in five operating photovoltaic plants, Proc. of 2019 IEEE International Conference on Environment and Electrical Engineering and 2019 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), Genova, Italy, 11-14 June 2019, pp. 1-6. doi: https://doi.org/10.1109/eeeic.2019.8783240
[15] F. Spertino, E. Chiodo, A. Ciocia, G. Malgaroli, A. Ratclif, Maintenance activity, reliability, availability, and related energy losses in ten operating photovoltaic systems up to 1.8 MW, IEEE Transactions on Industry Applications 57(1) (2021) pp. 83-93. doi: https://doi.org/10.1109/tia.2020.3031547
[16] J. F. Manwell, J. G. McGowan, A. L. Rogers, Wind Energy Explained, 2010, John Wiley and Sons, Ltd, Chichester, United Kingdom, ISBN: 9780470015001.
[17] A. Carullo, A. Ciocia, P. Di Leo, F. Giordano, G. Malgaroli, L. Peraga, F. Spertino, A. Vallan, Comparison of correction methods of wind speed for performance evaluation of wind turbines, Proc. of 24th IMEKO TC4 International Symposium, 14-16 Sept. 2020, pp. 291-296. Online [accessed 09 June 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-55.pdf

Figure 13. Turbine #2 SM corrected results (grey bars) and manufacturer power curve (red bars).
Fault compensation effect in fault detection and isolation
ACTA IMEKO
ISSN: 2221-870X
September 2021, Volume 10, Number 3, 45-53

Michał Bartyś1
1 Warsaw University of Technology, Institute of Automatic Control and Robotics, Boboli 8, 02-525 Warsaw, Poland

Section: Research paper
Keywords: fault compensation effect; fault masking; fault isolation; diagnostics of processes; fault distinguishability
Citation: Michał Bartyś, Fault compensation effect in fault detection and isolation, Acta IMEKO, vol. 10, no.
3, article 9, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-09
Editor: Lorenzo Ciani, University of Florence, Italy
Received January 25, 2021; in final form August 29, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was funded by the POB Research Centre for Artificial Intelligence and Robotics of Warsaw University of Technology within the Excellence Initiative Program - Research University (ID-UB).
Corresponding author: Michał Bartyś, e-mail: michal.bartys@pw.edu.pl

1. Introduction

The model-based diagnostics of industrial processes intensively makes use of residuals [1], [2], [3]. The residuals express to which extent the measurements (observations) and outputs of a diagnosed system differ from the expected system behaviour predicted by the reference model of the system. Figure 1 depicts the general block scheme exemplifying the basic workflow in the model-based fault detection and isolation (FDI) approach [1]. It generally consists of three consecutive steps: detection, isolation, and identification of faults. The main goal of fault detection is to detect the diagnosed system's abnormal behaviour, while the isolation (localization) points out the faults that potentially occurred. On the other hand, fault identification allows for recognizing the size of a fault. Frequently, fault identification is not of concern in industrial applications. Therefore, for simplicity, this step is not shown in Figure 1. To react appropriately to faults, the process operator or fault-tolerant control system demands univocal isolation of faults. However, this is not a trivial task. The discrepancies r (residuals) between model V̂ and process V outputs are indicative of a potential fault or faults.
However, this is true under the condition that residuals are sensitive to the faults [1], [2]. Furthermore, we assume that the diagnostic system is designed so that this postulate is met.

Figure 1. A block diagram of the basic workflow in the model-based fault detection and isolation (FDI) approach. Notions: r - residuals, V - process outputs, V̂ - model outputs, s - diagnostic signals, f - faults.

Abstract
This paper discusses the origin and problem of the fault compensation effect. The fault compensation effect is an underrated common side effect of the fault isolation approaches developed within the fault detection and isolation (FDI) community. In part, this is justified due to the relatively low probability of such an effect. On the other hand, there is a common belief that the inability to isolate faults due to this effect is an evident drawback of model-based diagnostics. This paper shows how, and under which conditions, the fault compensation effect can be identified. In this connection, the necessary and sufficient conditions for the fault compensation effect are formulated and exemplified by diagnosing a single buffer-tank system in open- and closed-loop arrangements. In this regard, we also show the drawbacks of a bi-valued residual evaluation for fault isolation. In contrast, we outline the advantages of a three-valued residual evaluation. This paper also brings a series of conclusions allowing for a better understanding of the fault compensation effect. In addition, we show the difference between fault compensation and fault-masking effects.

In the fault-free (normal) state of the diagnosed system, the residuals should converge to zero. However, considering the uncertainty of measurements and the impreciseness of the reference models applied, the residuals take values relatively close to zero.
In practice, the residuals are discretized through constant or adaptive thresholding approaches [4]. As a result, the continuous or piecewise-continuous residuals are converted into bi- or three-valued crisp or fuzzy values referred to as diagnostic signals [5]-[7]. A set of diagnostic signal values associated with each particular fault creates its specific signature (pattern), typically taking the form of a column vector. The structure of the signatures of all faults is referred to as the incidence matrix, or structure of residual sets, or diagnostic matrix [1]-[3], [5], [6]. The signatures allow for distinguishing faults under the condition that the signatures of all faults are unique. In general, this condition is not satisfied [5]. The main reason is that the number of measurements is lower than the total number of possible faults, including instrumentation and system component faults [6]. Therefore, we should accept the fact that some faults will remain undetectable or indistinguishable. We consider this feature a severe drawback of the model-based FDI approaches. To at least partly overcome this problem, many approaches were developed that allow for increasing fault distinguishability. However, it was proven in [5] that, in general, this task is unsolvable. With regard to functional safety [8], [9], a tolerable risk is defined. According to a commonly recognized definition [9], tolerable risk is "a level of risk deemed acceptable by society in order that some particular benefit or functionality can be obtained." By analogy, we can claim that, by employing model-based diagnosis, if the risk of either undetectable dangerous or safe faults is deemed acceptable, then FDI makes sense. However, this involves presuming some simplifications and undertaking some assumptions. For example, frequently the assumption regarding the infallibility of measurement devices or the credibility of observations is adopted [10].
This is the case in both branches of model-based diagnostics developed by the FDI and DX research communities [11]-[13]. Later in this paper, we assume the infallibility of measurement instruments. This may be justified, particularly for diagnosing industrial systems, by employing high-reliability instruments exhibiting at least the SIL1 safety integrity level. The rationality of this assumption is reinforced by statistics of failures of industrial equipment [14]. The aforementioned explanations justify to some extent the assumption commonly adopted in FDI regarding the infallibility of instruments. Also, the assumptions regarding the uncertainties of residuals, diagnostic signals, and models are discussed intensively in the context of FDI [2], [5], [7], [15]. The uncertainty of measurements is a fundamental problem of metrology. It has been discussed in a series of publications for many years, e.g., in [16]-[18]. In the model-based diagnostics of processes, we have at least five different sources of uncertainty, connected with measurements, models, residuals, residual evaluation, and the fault-diagnostic signals relation. In fact, the problem of uncertainty is common to metrology and diagnostics. In diagnostics, the measurements are intensively used for residual generation (Figure 1). Therefore, the uncertainty of the measurements impacts the uncertainty of the residuals. On the other hand, the residuals' uncertainty also depends on the uncertainty of the reference model of the diagnosed system. The uncertainty of the model indirectly reflects its grade of perfection. Therefore, the uncertainties of measurements and models result in the uncertainty of the residuals. Later on, the residuals are evaluated and take the form of so-called diagnostic signals. Hence, the way residuals are evaluated contributes to the overall uncertainty of the diagnostic signals as well.
Finally, the diagnosis is based on inference using the fault-diagnostic signals relation, or subjective logic, or expert knowledge [2]. Thus, there is also an uncertainty in inferring about faults [15]. It is also important to mention that the complex problem of the uncertainty of diagnosing has not been holistically solved yet. This paper deals mainly with the problem of the fault compensation effect and intends to expose some weaknesses of FDI model-based diagnosing. Deliberations regarding the uncertainty of the fault compensation effect are beyond the scope of this paper. Therefore, keeping in mind the paper's main objective, the uncertainty of measurements will not be considered further. Several FDI methods assume and consider exclusively single faults [6]. This assumption is allowable for diagnosing relatively non-complex systems. According to Occam's razor, single faults in non-complex systems are more likely than multiple ones. Since the fault compensation effect is not a property of a system with single faults, we focus our attention exclusively on multiple-fault cases. In the case of diagnosing complex systems, multiple faults are more likely [20]. Therefore, in these systems, occurrences of fault compensation effects are to be expected. It should be mentioned that the problem of the fault compensation effect is poorly represented in the literature. Fault compensation is an undesired and unpredictable side effect of multiple-fault isolation based on the signatures of all single faults constituting the multiple faults. This effect appears in all FDI approaches in which multiple faults' signatures are obtained as the unions of the signatures of all single faults constituting the multiple ones [2], [6]. The union of bi-valued signatures is defined as a Boolean alternative of the signatures of all faults creating the multiple ones. For three-valued signatures, the union of single-fault signatures is slightly more complex [20].
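The union of bi-valued signatures described above can be sketched directly: for 0/1 diagnostic signals, the multiple-fault signature is the element-wise Boolean OR of the single-fault signatures. The example signatures are illustrative, not taken from the paper's case study.

```python
# Sketch: multiple-fault signature as the element-wise Boolean OR
# (the "union") of bi-valued single-fault signatures.

def union_signature(*signatures):
    """Element-wise Boolean OR of bi-valued fault signatures."""
    return [int(any(bits)) for bits in zip(*signatures)]

# Hypothetical signatures over three diagnostic signals s1..s3
sig_f1 = [1, 0, 1]   # single fault f1
sig_f2 = [0, 1, 1]   # single fault f2
sig_f1_f2 = union_signature(sig_f1, sig_f2)   # double fault {f1 ∧ f2}
```

The fault compensation effect arises precisely because this OR-based construction tacitly assumes that opposite-acting fault impacts never cancel on a shared residual.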
Developing multiple-fault signatures based on single ones has some practical background. As long as phenomenological models of the diagnosed system are not available, the multiple-fault signatures are not easy to obtain from process data or process operators' expertise. Moreover, frequently some multiple faults have never been registered or ought not to appear for process-safety reasons, e.g., in nuclear power stations. However, for clarity, this paper will use an analytical phenomenological model to explain the fault compensation effect. Fault compensation is understood differently even within the FDI research community. Firstly, fault compensation is meant as an approach to sustain the system's nominal operation even when a fault occurs. This understanding of fault compensation is typical for the different fault-tolerant control (FTC) approaches [3]. For example, a unique approach towards FTC can be found in [21]. Here, additional signals are superimposed on the controlled system's inputs to compensate for the effects of faults. There, fault compensation refers to understanding a fault preferably in terms of a specific disturbance imposed on the control system. It is important to mention that regular control loops also have embedded inherent compensating abilities for small-size faults. The ability to compensate for fault impacts is frequently called the fault-masking effect [22], [23]. This paper faces a different understanding of the fault compensation effect. Secondly, fault compensation in FDI is understood as an effect of zeroing residual values despite faults [2], [20]. In other words, the faults that occurred can then be neither detected nor isolated based on residuals. Therefore, in this case, the FDI completely fails and, depending on conditions, the multiple faults cannot be temporarily or permanently isolated.
If the following conditions hold:
• residuals are sensitive to at least two single faults,
• multiple-fault signatures are unions of the signatures of all single faults constituting the multiple ones,
• different single faults act on residuals in opposite directions,
then the fault compensation effect may occur. It is to be recognized that those conditions are easily satisfied in the majority of industrial FDI approaches. Therefore, we can conclude that fault compensation seems to be a significant practical problem. This statement provides the motivation for an in-depth discussion of this effect in this paper. In conclusion, the FDI approaches to isolating multiple faults should be criticized, as they may lead to misdiagnosis in the case of a fault compensation effect. The paper contributes both to theory and practice. The primary outcome of the paper is the novel formulation of the necessary and sufficient conditions for the fault compensation effect and, in turn, the formulation of a sufficient condition for excluding this effect. The defined conditions contribute to FDI theory and practice by proposing a method for seeking potential fault compensation effects when designing or analyzing a diagnostic system. We also formulate a set of recommendations that have some practical meaning. They contribute to and extend the set of good practices applicable to the design of diagnostic systems. The remainder of this paper is structured as follows. Section 2 describes a nominal model of a single-tank system, which we intensively exploit in this paper. Section 3 presents an approach to fault detection and isolation based on an analytical description of the residuals in the inner form. Section 4 illustrates fault isolation based on bi- and three-valued residuals. Section 5 discusses chosen results of the simulation, while Section 6 outlines the problem of the fault-masking effect. Finally, Section 7 summarizes the achieved results.

2.
The nominal model of the system

The fault compensation problem will be explained based on the example of the model-based diagnosing workflow of a simple open-loop control system shown in Figure 2. Let the diagnostic problem rely on isolating two single faults: leakage from the tank (fault f1) and obliteration of the outlet pipe (fault f2), as well as one double fault {f1 ∧ f2}. The double fault represents the faulty state in which the leakage and the obliteration take place at the same time. For simplicity, we assume that the instruments used are infallible. Firstly, according to the schematic shown in Figure 1, we develop the process's nominal (reference) model in a fault-free state. For this reason, we propose a phenomenological model of the process. This model will be exploited further for the closed-loop control system too. Many other models are imaginable at this stage, including those based on heuristic knowledge, fuzzy sets theory, fuzzy neural networks, and neural networks [1]-[3], [5], [7]. Next, we assume the availability of the measurements shown in Table 1, except the optional flow rate F1. In a fault-free state, for an incompressible and inviscid liquid, the fluid accumulation in the tank is equal to the difference of the inflow and outflow volumes. Hence, the liquid volumetric inflow rate F0 is equal to

$F_0 = A \frac{dL}{dt} + \alpha S \sqrt{2 g L}$ ,  (1)

where A is the cross-sectional area of the tank, L is the liquid level in the tank relative to the outlet pipe axis, α is the outflow contraction coefficient, S is the nominal cross-sectional area of the outlet pipe, and g is the gravitational constant. Eq. (1) will be further referred to as the nominal model of the process.

3. Fault detection

Generally, fault detection should indicate whether a fault or faults occurred or not. We assume that a discrepancy between the nominal model and process outputs will occur in a faulty state.
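The nominal model (1) can be sketched numerically by rearranging it as dL/dt = (F0 − αS√(2gL))/A and integrating with an explicit Euler scheme. All numeric parameter values below are illustrative assumptions, not taken from the paper.

```python
# Euler integration of the fault-free tank model (1).
# Assumed illustrative parameters: A = 1 m^2, alpha = 0.6, S = 0.01 m^2.
import math

def simulate_level(L0, F0, A=1.0, alpha=0.6, S=0.01, g=9.81,
                   dt=0.1, steps=1000):
    """Integrate the fault-free level equation; returns the final level (m)."""
    L = L0
    for _ in range(steps):
        outflow = alpha * S * math.sqrt(2.0 * g * max(L, 0.0))
        L += dt * (F0 - outflow) / A
    return L

# At steady state the inflow balances the outflow, so the level settles
# at L_ss = (F0 / (alpha * S))**2 / (2 * g).
L_final = simulate_level(L0=0.5, F0=0.02)
```

Starting below the steady-state level, the simulated level rises monotonically towards L_ss, which is the fault-free behaviour against which the residuals of Section 3 are generated.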
However, this is true under two essential conditions:
• residuals are sensitive to the faults which occurred;
• the fault compensation effect does not take place.
The paper's objective is principally concerned with the second condition. To obtain residuals, we assume the three faults listed in Table 2. Next, we develop the model of the diagnosed system in the so-called inner form [6], i.e., in the way which reflects the impacts of faults:

$F_0^f = A \frac{dL}{dt} + \alpha S \sqrt{2 g L} + f_1 \alpha_l S \sqrt{2 g (L - L_l)} - f_2 \alpha S \sqrt{2 g L}$ ,  (2)

where $F_0^f$ is the tank inflow rate in a faulty state; αl is the leakage outflow contraction coefficient; Ll is the distance from the centre of the area of the leakage orifice to the axis of the outlet pipe; f1 = Sl/S; f2 = 1 − So/S; Sl is the cross-sectional area of the leakage orifice; So is the cross-sectional area of the outlet pipe. Since the residual is defined as $r = F_0 - F_0^f$, from (1) and (2) we obtain:

$r = -f_1 \alpha_l S \sqrt{2 g (L - L_l)} + f_2 \alpha S \sqrt{2 g L}$ .  (3)

Figure 2. The schematic of the process considered for diagnosing.

Therefore, the residual r equals zero if the model and process outputs are identical. However, this cannot be interpreted unambiguously as a fault-free state of the system, since the residual r may also take a zero value in a faulty state due to the impact of faults on the residual. Nevertheless, this effect occurs exclusively for multiple faults. From (3), we can easily derive a simple condition for the fault compensation effect:

$\frac{f_1}{f_2} = \frac{\alpha}{\alpha_l} \sqrt{\frac{L}{L - L_l}}$ .  (4)

The probability of the fault compensation effect is relatively low. However, this effect is the reason for false-negative fault isolation, and therefore it should be avoided as far as possible. This paper shows how this may be possible. The following observation is helpful here: the effect of fault compensation does not occur for single faults, nor for those multiple faults for which all residuals are unidirectionally affected, i.e., possess the same sign.
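A quick numerical check makes the compensation condition tangible: choosing fault sizes f1 and f2 on the ratio given by (4) drives the residual (3) to zero even though both faults are present, while the leakage alone produces a clearly non-zero (negative) residual. Parameter values are illustrative assumptions.

```python
# Numerical check of residual (3) and compensation condition (4).
import math

def residual(f1, f2, L, Ll=0.2, alpha=0.6, alpha_l=0.6, S=0.01, g=9.81):
    """Residual r from equation (3)."""
    leak = f1 * alpha_l * S * math.sqrt(2 * g * (L - Ll))
    obliteration = f2 * alpha * S * math.sqrt(2 * g * L)
    return -leak + obliteration

L = 1.0
f2 = 0.05
# Condition (4): f1/f2 = (alpha/alpha_l) * sqrt(L / (L - Ll))
f1 = f2 * (0.6 / 0.6) * math.sqrt(L / (L - 0.2))

r_compensated = residual(f1, f2, L)   # ~0: the double fault is invisible
r_single = residual(f1, 0.0, L)       # negative: leakage alone is detectable
```

This is exactly the false-negative scenario discussed above: a bi-valued evaluation of r would report a fault-free state for the double fault.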
From this observation, we can draw some practical conclusions.
Conclusion 1. The design of diagnostic systems in which residuals are sensitive exclusively to single faults is strongly recommended for FDI because it avoids fault compensation effects.
Conclusion 2. Consideration of residual signs may help increase the achievable fault distinguishability while bringing additional useful knowledge for diagnostics.
Conclusion 1 corresponds well with the idea of developing a family of intelligent single-fault detectors [24] and with the concept of the diagonal structure of residual sets proposed in [6]. However, it sounds slightly unrealistic in today's world. Therefore, the question arises of how we can avoid fault compensation effects if they are typical even for elementary processes such as that shown in Figure 2. There is no good general answer to this question. Nonetheless, we can consider some productive actions. According to Conclusion 1, an excellent solution seems to be to have a number of nominal models equal to the number of single faults, such that each model is referred exclusively to one fault. Let us now consider the same system as in Figure 2. The only difference is that we will now use the additional flow-rate instrument, i.e., F1. Now, the nominal partial models of the process are

$F_0 = A \frac{dL}{dt} + F_1\,; \qquad F_1 = \alpha S \sqrt{2 g L}$ ,  (5)

the models in the inner form reflecting the impact of faults are

$F_0^f = F_0 + f_1 \alpha_l S \sqrt{2 g (L - L_l)}\,; \qquad F_1^f = F_1 - f_2 \alpha S \sqrt{2 g L}$ ,  (6)

and from (6) we obtain the residuals:

$r_1 = -f_1 \alpha_l S \sqrt{2 g (L - L_l)}\,; \qquad r_2 = +f_2 \alpha S \sqrt{2 g L}$ .  (7)

As can be easily seen from (7), each residual is sensitive exclusively to a single fault. This promises to avoid the fault compensation effect at the expense of an additional flow-rate measurement instrument. In this case, the double fault is easily recognizable (isolable) because both residuals (r1 and r2) adopt non-zero values and opposite signs. From the above considerations, it follows that:
Conclusion 3.
There is a trade-off between the quality of diagnoses and the number of sensors (instruments) applied in real-world systems.
It should be mentioned that, given a limited availability of sensors, solving the sensor placement problem would help maximize fault distinguishability and minimize fault compensation effects [25], [26].

4. Fault isolation

The primary goal of fault isolation is to indicate the faults that occurred in the process. This diagnosing step is frequently called fault location or simply diagnosing. Diagnosing requires knowledge of the relation between the faults and the diagnostic signals. We can express this relation in analytical form, for example, as in (3) and (7). If the analytical relations are unknown, process graphs (GP) [27] can be helpful. The process graph is a directed bipartite graph used in workflow modelling. In the considered case, the vertices of the GP graph represent disjunctive sets of process states and faults. The graph's edges link the faults with the states and the states with each other, thus reflecting the process flow. The GP graph is handy for analyzing the qualitative impact of faults on process states. Physical or virtual quantities represent the process states. In particular, the process states may be represented by measurements. Figure 3 depicts the GP graph developed for the single-tank open-loop control system shown in Figure 2. This graph reflects equation (3) and refers to a situation where the flow rates F0 and F1 are not available. From this graph, it can be seen that both single faults act in opposite directions on the liquid level. Therefore, both faults may mutually compensate for their impacts. Based on this statement, we will formulate two practical conclusions.
Conclusion 4. The possibility of the occurrence of the fault compensation effect is immediately detectable from the directed graph of the process.
Conclusion 5.
A necessary condition for fault compensation is that at least one vertex in the GP graph is linked directly with fault vertices by edges labelled with opposite signs.
The GP graph derived from equation (7) takes the shape shown in Figure 4.

Table 1. List of available measurements.
Item  Symbol  Measured quantity
1     F0      liquid volumetric inflow rate
2     L       liquid level
3     F1      liquid volumetric outflow rate (option)

Table 2. List of considered faults.
Item  Symbol   Fault
1     f1       leakage from the tank
2     f2       obliteration of the outflow pipe
3     f1 ∧ f2  leakage and obliteration

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 49

In Figure 4, both faults are linked with different graph vertices. Therefore, the necessary condition for fault compensation is not satisfied. In this case, the bi-valued signature of the double fault, based on the union of the single-fault signatures, is correct and allows single and double faults to be distinguished. The above considerations imply a demand for reliable instrumentation.
Conclusion 6. Avoidance of the fault compensation effect places high demands on reliable measurements.
Quantitative fault isolation requires the deployment of the incidence matrix [6]. The usability of the GP graph in this scope is very limited, as it provides only a qualitative cause-and-effect description of the process. The incidence matrix reflects the relation between faults and diagnostic signals. The question is, why are diagnostic signals used instead of residuals? Principally, fault isolation is a process of inference about faults that uses logical rules. Usually, Boolean, Łukasiewicz n-valued, or fuzzy logic is applied. Therefore, it is necessary to transform continuous residuals into discrete logical values or a finite set of predefined fuzzy membership functions.
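The graph-based check of Conclusions 4 and 5 can be sketched in a few lines. This is an illustrative sketch, not the paper's software; the encoding of the signed GP graph as a mapping from (fault, vertex) edges to signs is an assumption made for the example:

```python
# Illustrative sketch (not the paper's software) of the Conclusion 4/5 check:
# scan a signed process graph for a vertex linked to fault vertices by edges
# of opposite sign. The encoding edges[(fault, vertex)] = +1 or -1 is assumed.

def compensation_possible(edges):
    """Return True if some vertex receives fault edges of opposite signs."""
    signs_at = {}
    for (fault, vertex), sign in edges.items():
        signs_at.setdefault(vertex, set()).add(sign)
    return any({-1, +1} <= signs for signs in signs_at.values())

# Figure 3: both faults act on the level L with opposite signs -> possible.
assert compensation_possible({("f1", "L"): -1, ("f2", "L"): +1})

# Figure 4: the faults act on different measured flows -> condition not met.
assert not compensation_possible({("f1", "r1"): -1, ("f2", "r2"): +1})
```

The two assertions mirror the two GP graphs discussed above: the level-only instrumentation of Figure 3 admits compensation, while the structure of Figure 4 does not.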
For these purposes, constant or adaptive discrimination thresholds are used [4]. In this paper, we limit our considerations exclusively to elementary, yet practicable, thresholding of residuals, which introduces a dead zone into the residual values. For the binary assessment of residuals, we apply the formula

s = 0  if |r| < Th
s = 1  if |r| ≥ Th ,     (8)

while for the three-valued assessment of residuals we use

s = −1  if r ≤ −Th
s =  0  if |r| < Th
s = +1  if r ≥ +Th ,     (9)

where s is the diagnostic signal and Th is an arbitrarily chosen non-negative threshold. According to (8) and (9), the diagnostic signals are bi- or three-valued. The robustness of fault isolation can be characterised, among others, by the rate of false-positive and false-negative diagnoses. False-positive diagnoses indicate non-existing faults, while false-negative diagnoses do not indicate existing faults. As one can infer from formulas (8) and (9), the introduction of dead zones to some extent immunises the diagnostic signals against uncertainties and noise, however, at the expense of a loss in sensitivity and an elongation of the fault isolation time. This paper discusses both residual evaluation approaches, (8) and (9), in the context of the fault compensation effect. We will show that fault compensation may, under some conditions, be determined from the incidence matrix. First, let us refer to the incidence matrix presented in Table 3. Here, the entries are bi-valued as in (8). Therefore, this matrix is also referred to as a binary diagnostic matrix (BDM) [6]. It contains the reference diagnostic signal values (signatures) expected upon the occurrence of a fault or faults. In Table 3, the signatures of all considered faults are identical. Therefore, in a faulty system state, we cannot point out which fault or faults occurred. In other words, all three faults in Table 3 are indistinguishable. Hence, the quality of the obtained diagnosis is unacceptable.
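The residual assessments (8) and (9) above translate directly into code. The following is a minimal sketch; the threshold value used in the assertions is arbitrary, in line with the paper's arbitrarily chosen Th:

```python
def binary_eval(r, th):
    """Two-valued residual assessment, equation (8)."""
    return 1 if abs(r) >= th else 0

def ternary_eval(r, th):
    """Three-valued residual assessment, equation (9)."""
    if r <= -th:
        return -1
    if r >= th:
        return +1
    return 0

# The dead zone |r| < th suppresses noise at the cost of sensitivity
# and a longer fault isolation time.
assert binary_eval(0.02, th=0.05) == 0
assert binary_eval(-0.10, th=0.05) == 1
assert ternary_eval(-0.10, th=0.05) == -1
assert ternary_eval(0.10, th=0.05) == +1
```

Note that the ternary evaluation preserves the residual sign, which is exactly the extra information exploited by the three-valued incidence matrices discussed next.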
Moreover, based on Table 3, we cannot verify the hypothesis regarding the fault compensation effect. This simple example leads to the following conclusion:
Conclusion 7. The binary diagnostic matrix itself is useless for the recognition of a fault compensation effect.
Next, we discuss the case of the three-valued incidence matrix shown in Table 4. Here, the reference diagnostic signal values of a double fault lie in the set of all reference diagnostic signal values of the single faults constituting the multiple fault, including the fault-free state. For example, the diagnostic signal s in Table 4 may take three alternative values: -1, 0 or +1. The complete procedure for synthesising multiple-fault reference signatures based on single-fault reference signatures is described in [20]. Now, we can easily distinguish the single faults f1 and f2 because the reference signatures of both faults are distinctive. However, both single faults are conditionally indistinguishable from the double fault. Moreover, the double fault may not be distinguishable from the fault-free state of the process by the diagnostic signal (s = 0). The fault compensation effect, if any, manifests itself as s = 0. Therefore, the fault-free state, the double-fault state, and the fault compensation effect are still indistinguishable. However, it should be mentioned that, under some conditions:

Figure 3. The GP graph reflecting the qualitative impact of faults on the values of process variables. A circle coloured in yellow depicts the available measurement.
Figure 4. The GP graph for the single-tank system reflecting the additional flow-rate measurement F1.

Table 3. Diagnostic matrix for the binary discretised residual (2).
f/s  fault-free  f1  f2  f1 ∧ f2
s    0           1   1   1

Table 4. Trinary diagnostic matrix for residual (2).
f/s  fault-free  f1  f2  f1 ∧ f2
s    0           -1  +1  -1, 0, +1

Conclusion 8.
The incidence matrix containing three-valued reference diagnostic signals allows the possibility of the fault compensation effect to be indicated.
Based on Conclusion 8, we now formulate a necessary and sufficient condition for the possibility of a fault compensation effect with three-valued reference diagnostic signals.
Condition 1. The necessary and sufficient condition for fault compensation. The complete set of diagnostic signal values {-1, 0, +1} in at least one entry of the signature of a multiple fault is necessary and sufficient to indicate the possibility of a fault compensation effect.
In the case of bi- and three-valued reference diagnostic signals, we can formulate a sufficient condition for excluding fault compensation.
Condition 2. The sufficient condition for excluding fault compensation. It is sufficient for excluding the possibility of fault compensation that the submatrix consisting exclusively of the signatures of single faults is diagonal.
However, this condition is relatively difficult to meet in practice. Therefore, most diagnostic systems based on either binary or trinary evaluation of residuals are exposed to fault compensation, which degrades their diagnostic credibility. However, the degree of degradation is much smaller for three-valued incidence matrices [20]. These conditions imply:
Conclusion 9. Single-row incidence matrices do not allow for an unambiguous indication of the possibility of the fault compensation effect, independently of whether the diagnostic signals are bi- or three-valued.
Let us now discuss the case of a diagnostic system for which the GP graph is depicted in Figure 4. The appropriate binary and trinary diagnostic matrices are shown in Table 5 and Table 6, respectively. Now, the binary diagnostic matrix allows all considered faults to be uniquely distinguished. In this case, the fault compensation effect will not occur.
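Conditions 1 and 2 can be expressed as simple checks on the signature data. The sketch below is illustrative, not the paper's software; the representation of a signature as a mapping from diagnostic signal names to sets of admissible values is an assumption for the example:

```python
# Illustrative sketch of Conditions 1 and 2 (not the paper's software).
# A signature maps each diagnostic signal to its set of admissible values.

def may_compensate(signature):
    """Condition 1: an entry of a multiple-fault signature spans {-1, 0, +1}."""
    return any(vals == {-1, 0, +1} for vals in signature.values())

def single_fault_submatrix_is_diagonal(signatures):
    """Condition 2 (sufficient to exclude compensation): every single fault
    affects exactly one diagnostic signal and no signal is shared."""
    touched = []
    for sig in signatures:
        nonzero = [s for s, vals in sig.items() if vals != {0}]
        if len(nonzero) != 1:
            return False
        touched.append(nonzero[0])
    return len(set(touched)) == len(touched)

# Table 4: the double fault f1 ^ f2 may produce s = -1, 0 or +1.
assert may_compensate({"s": {-1, 0, +1}})

# Table 6: the single-fault signatures form a diagonal submatrix.
assert single_fault_submatrix_is_diagonal(
    [{"s1": {-1}, "s2": {0}}, {"s1": {0}, "s2": {+1}}])
```

The first assertion reproduces the Table 4 situation in which compensation cannot be excluded; the second reproduces the diagonal structure of Table 6, for which Condition 2 excludes it.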
Similarly, the three-valued diagnostic matrix depicted in Table 6 allows all faults to be uniquely distinguished. Here, the fault compensation effect does not take place because Condition 1 does not hold. As can be seen, the submatrices of the diagnostic matrices depicted in Table 5 and Table 6 consisting exclusively of single-fault signatures are diagonal. Therefore, all multiple-fault signatures, which are the unions of the signatures of the single faults constituting the multiple ones, are distinguishable, independently of whether they are bi- or three-valued. With this, we reinforce Condition 2.
Conclusion 10: The diagonal structure of the diagnostic matrix of single faults avoids the fault compensation effect.
The condition formulated in the above conclusion is, however, almost unrealistic to implement in practice. Since bi-valued diagnostic signals are useless for the recognition of fault compensation effects (Conclusion 7), it is strongly recommended to design three-valued incidence matrices, because they allow possible fault compensation effects to be indicated (Conclusion 8). The above recommendation is postulated mainly for newly developed diagnostic systems. Implementing this recommendation in running diagnostic systems is imaginable, although less realisable, because of the necessity of installing additional instrumentation. On the other hand, the intensive implementation of intelligent fault-diagnosing devices [20] combined with the implementation of embedded diagnostics ideas [28] allows fault compensation problems to be successively removed from the area of interest of FDI.

5. Simulations

Simulations were performed to exemplify the fault compensation effect using a model of the single buffer tank depicted in Figure 2. The simulation model was developed in the MATLAB/Simulink environment. The resulting flowchart of the simulation of the liquid storing and distribution process is shown in Figure 5.
The model generates an output vector whose components include the liquid level, inflow and outflow rates, residuals, and diagnostic signals. The liquid level depends on the inlet and outlet liquid rates, leakages, and the obliteration of the pipe. Therefore, the tank's liquid level can be determined by integrating the dynamic liquid accumulation, i.e., by integrating the difference between the flow rates of liquid entering and leaving the tank. As assumed earlier, only one potential double fault is considered in this case. Simulations of two different double-fault scenarios were performed. Each of them exemplifies a fault compensation effect. We designed the first scenario to show the possibility of a permanent inability to make a precise diagnosis by reasoning based on the three-valued diagnostic matrix shown in Table 4, which meets the necessary and sufficient condition for a fault compensation effect. This scenario also considers the three-valued diagnostic matrix shown in Table 6, which meets the sufficient condition for excluding the fault compensation effect. In connection with the first one, the second scenario shows differently timed processes of tightening diagnoses, even for the same diagnostic matrices. This scenario shows that diagnostic matrices admittedly allow searching for potential fault compensation effects but do not directly reflect the transients of diagnoses.
Scenario 1. Consider two incipient faults: leakage f1 and pipe obliteration f2, as in Figure 6. The obliteration starts to grow immediately after the simulation starts. The leakage begins to grow at the time instant 0.50·10^5 s. Therefore, the double fault originates at this time moment. The slopes of both faults are selected in such a way as to show the fault compensation effect. In this case, both faults impact the residual r, bringing its value close to zero for the whole simulation period, as shown in Figure 6.
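The mechanism of Scenario 1 can be reproduced with a minimal Euler integration of the tank's mass balance. This is a sketch under stated assumptions, not the paper's MATLAB/Simulink model: the tank parameters and fault slopes are illustrative, and the leak is placed at the tank bottom (Ll = 0) for simplicity:

```python
import math

# Assumed, illustrative parameters (not the paper's simulation data).
g, A, S, alpha = 9.81, 1.0, 1e-3, 0.6      # gravity, tank area, pipe area, coeff.
F0, dt, T = 2.4e-3, 10.0, 2.0e5            # inflow m^3/s, step s, horizon s
L = (F0 / (alpha * S)) ** 2 / (2 * g)      # fault-free steady-state level
th = 0.05 * F0                             # 5 % dead-zone threshold

r_max, r_end = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    f2 = 0.2 * t / T                        # incipient obliteration from t = 0
    f1 = 0.4 * max(0.0, t - T / 2) / T      # faster leak starting at t = 0.5e5 s
    out_nom = alpha * S * math.sqrt(2 * g * L)
    outflow = (1.0 - f2) * out_nom          # outflow restricted by obliteration
    leak = f1 * out_nom                     # leak at tank bottom (simplification)
    L += dt * (F0 - outflow - leak) / A     # Euler step of the mass balance
    r = f2 * out_nom - leak                 # flow-balance residual vs. nominal model
    r_max, r_end = max(r_max, abs(r)), r

# Mid-run the lone obliteration is detectable, but by the end the two fault
# effects cancel and the residual falls back inside the dead zone.
assert r_max > th and abs(r_end) < th
```

The final assertion captures the essence of the fault compensation effect: a residual that briefly leaves the dead zone and then returns to zero although both faults keep growing.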
The liquid inflow rate F0 swings around a constant value within ±10% limits. The residuals are three-valued. The diagnostic signal s, (3), and the diagnostic signals s1 and s2, (7), are determined based on a fixed, arbitrarily chosen threshold Th = 5%.

Table 5. Diagnostic matrix for binary evaluated residuals (7).
f/s  fault-free  f1  f2  f1 ∧ f2
s1   0           1   0   1
s2   0           0   1   1

Table 6. Diagnostic matrix for trinary evaluated residuals (7).
f/s  fault-free  f1  f2  f1 ∧ f2
s1   0           -1  0   -1
s2   0           0   +1  +1

Discussion: Table 7 summarises the results of the simulations, while Table 8 shows the obtained diagnoses. Diagnosis d0 is based on the bi-valued diagnostic matrix shown in Table 5. Diagnosis d1 is derived from the tri-valued diagnostic matrix shown in Table 4, while d2 is based on the tri-valued diagnostic matrix shown in Table 6. Despite fault compensation, diagnoses d0 and d2 finally isolate the double fault correctly, however with a significant time delay. This time would be shorter if a lower value of the threshold Th were chosen. The intermediate diagnosis f2 is not correct; however, it is not false. In turn, diagnosis d1 is ambiguous, i.e., it delivers much less useful information regarding the faults, independently of whether a fault occurs or not.
Scenario 2. Consider a case like that in Scenario 1, depicted in Figure 7. The only difference is that in this case both faults impact the residual r, shifting its value far away from zero. Moreover, the different slopes of the faults cause a reversal of the time sequence of the diagnostic signals s1 and s2. It also influences the evolution of the double-fault diagnosis differently compared with Scenario 1.
Discussion: Table 9 summarises the diagnostic signals obtained from the simulations, while Table 10 contains the obtained diagnoses. Diagnoses d0 and d2 finally isolate the double fault correctly, however with a significant time delay. The only difference from Scenario 1 is that the single fault f1 is detected before f2.
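The derivation of a diagnosis such as d2 from observed diagnostic signals amounts to matching the observed values against the reference signatures. The following is an illustrative sketch (not the paper's software) using the Table 6 signatures; the dictionary encoding is an assumption for the example:

```python
# Reference signatures of Table 6 (trinary evaluation of residuals (7)).
# Each entry maps a diagnostic signal to its set of admissible values.
table6 = {
    "fault-free": {"s1": {0},  "s2": {0}},
    "f1":         {"s1": {-1}, "s2": {0}},
    "f2":         {"s1": {0},  "s2": {+1}},
    "f1^f2":      {"s1": {-1}, "s2": {+1}},
}

def diagnose(observed, matrix):
    """A diagnosis is the set of hypotheses whose reference signature
    admits the observed diagnostic signal values."""
    return sorted(h for h, sig in matrix.items()
                  if all(observed[s] in sig[s] for s in observed))

# Evolution of diagnosis d2 in Scenario 1 (compare Table 8):
assert diagnose({"s1": 0, "s2": 0}, table6) == ["fault-free"]
assert diagnose({"s1": 0, "s2": +1}, table6) == ["f2"]
assert diagnose({"s1": -1, "s2": +1}, table6) == ["f1^f2"]
```

The three assertions retrace the d2 column of Table 8: empty diagnosis, then the intermediate (incomplete but not false) diagnosis f2, then the correctly isolated double fault.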
In turn, the d1 diagnosis is slightly more valuable than d1 in the previous scenario. However, it is still far from the quality of the d0 and d2 diagnoses. The diagnosis d1 is correct here, although the pointed-out faults are indistinguishable. Analysing the results of both scenarios, we can draw a practical conclusion.
Conclusion 11: It is advantageous to design a three-valued diagnostic matrix such that the elements of multiple-fault signatures contain as few alternative values as possible.

6. Fault masking effect

As long as the process variable tracks the setpoint within some predefined limits, neither the process operator nor the alarm system has any particular reason to react. In closed-loop systems, the effects of faults are compensated for by controller action as long as the system is controllable. Therefore, the fault-masking effect is frequently understood as an effect of the faults' invisibility to process operators or alarm systems [22], [23]. In other words, the difference between the setpoint and the process value may be neither sensitive nor indicative of faults. Here, the question arises: is the model-based fault diagnostics discussed earlier for the open-loop control system still valid if we close the loop?

Figure 5. Simulation diagram of the single buffer tank process.
Figure 6. Example of a simulation of a double fault. Notation: F0 – liquid inflow rate – dark blue line; L – liquid level – blue line; f1 – leakage fault – blue line; f2 – obliteration fault – red line; r1 – dotted red line; r2 – dotted blue line; r – purple line; diagnostic signals: s1 – blue; s2 – red; s – purple. Interval of the fault compensation effect: 0.84 … 2.0·10^5 s.

Table 7. Diagnostic signal values (Scenario 1).
Time s·10^5  0.00-0.50  0.50-0.84  0.84-1.14  1.14-2.00
s1           0          0          0          -1
s2           0          0          +1         +1
s            0          0          0          0
Interval of the duration of the fault compensation effect: 0.84 … 2.0·10^5 s.

Table 8. Obtained diagnoses (Scenario 1).
Time s·10^5  0.00-0.50   0.50-0.84   0.84-1.14   1.14-2.00
d0           ∅           ∅           f2          f1 ∧ f2
d1           ∅, f1 ∧ f2  ∅, f1 ∧ f2  ∅, f1 ∧ f2  ∅, f1 ∧ f2
d2           ∅           ∅           f2          f1 ∧ f2

Table 9. Diagnostic signal values (Scenario 2).
Time s·10^5  0.00-0.82  0.82-0.90  0.90-1.35  1.35-2.00
s1           0          -1         -1         -1
s2           0          0          +1         +1
s            0          0          0          -1
Interval of the duration of the fault compensation effect: 0.82 … 1.35·10^5 s.

Table 10. Obtained diagnoses (Scenario 2).
Time s·10^5  0.00-0.82   0.82-0.90   0.90-1.35   1.35-2.00
d0           ∅           f1          f1 ∧ f2     f1 ∧ f2
d1           ∅, f1 ∧ f2  ∅, f1 ∧ f2  ∅, f1 ∧ f2  f1, f1 ∧ f2
d2           ∅           f1          f1 ∧ f2     f1 ∧ f2

Figure 7. Example of a simulation of a double fault. Interval of the fault compensation effect: 0.82 … 1.35·10^5 s. Notation as in Figure 6.

To answer this question, we close the loop of the system shown in Figure 2. The modified control system is presented in Figure 8. The liquid inflow rate into the buffer tank is controlled by a control valve driven by a PI controller. The controller, employing the control valve, adjusts the liquid inflow rate into the tank, keeping the liquid level close to the setpoint value. Thus, in the case of leakage, the controller increases the inflow to compensate for the additional liquid demand. In turn, in the case of obliteration, the controller throttles the liquid inflow to maintain the demanded liquid level in the tank. The GP graph of the closed-loop control system is shown in Figure 9. This graph contains additional vertices reflecting the actuator fault f3, the position AV of the control valve stem, and arcs representing the controller in the loop. The actuator tracks the controller output CV. For simplicity, we assume the infallibility of the PI controller. Let us assume a trivial static model of the actuator. The nominal model of the actuator is therefore AV = CV. An actuator fault manifests itself in a discrepancy between the AV and CV values. Let us now assume an additive actuator fault.
Hence, the model of the actuator in the inner form equals

AV = CV ± f3  →  r3 = ±f3 .     (10)

As shown in Figure 9, all faults are associated with observable (measurable) vertices. Hence, the diagnostic matrix of single faults takes a diagonal shape and, in consequence, all single and multiple faults are isolable. The three-valued diagnostic matrix for the modelled control system is depicted in Table 11. Following Condition 2, mutual compensation of the impact of faults on the residual values is excluded in this case. Therefore, the fault compensation effect does not take place. Figure 10 depicts the result of a simulation of a triple fault, i.e., a slowly increasing obliteration f2 starting at the time instant 0, a slowly increasing leakage f1 starting at the time instant 0.5·10^5 s, and an abrupt actuator fault f3 appearing at the time instant 1.0·10^5 s. The signal s3 represents the diagnostic signal of residual r3. The summary of isolated faults is shown in Table 12. As can be seen, closing the loop does not degrade the system's diagnostic properties as long as Condition 2 holds.

7. Final remarks

Necessary and sufficient conditions for the fault compensation effect were defined, allowing the possibility of this effect to be identified based on an analysis of the incidence matrix. In addition, a complementary condition for excluding the fault compensation effect was also formulated. In this connection, some practical recommendations and hints regarding the design of diagnostic systems were proposed. Summing up the results of the discussion and the performed simulations, we can conclude that:
• The fault compensation effect is a common problem for model-based FDI diagnostic approaches.
• The fault compensation effect manifests itself exclusively for multiple faults.
• The fault compensation effect is an unwanted side effect originating from the assumption that the signatures of multiple faults are generated as unions of the signatures of single faults.
• Neglecting the fault compensation effect leads to false or temporarily false diagnoses.
• Fault compensation problems should be considered particularly in the case of slowly developing incipient faults.
• The application of three- instead of bi-valued diagnostic signals for reasoning about faults is irrelevant to the possibility of the fault compensation effect occurring.
• The fault compensation effect results from the applied fault-reasoning method and should be distinguished from the fault-masking effect.

Table 11. Diagnostic matrix for the liquid control system depicted in Figure 8.
f/s  fault-free  f1  f2  f3  f1∧f2  f1∧f3  f2∧f3  f1∧f2∧f3
s1   0           -1  0   0   -1     -1     0      -1
s2   0           0   +1  0   +1     0      +1     +1
s3   0           0   0   ±1  0      ±1     ±1     ±1

Figure 8. The closed-loop liquid level control system. Notation: SP – setpoint; CV – control value; PI – proportional-and-integral controller; AV – positioner feedback signal.
Figure 9. The GP graph of the closed-loop system reflecting the qualitative impact of faults on the values of process variables.
Figure 10. Example of a simulation of a triple fault. Notation: f3 – actuator fault – purple line; r3 – purple dotted line; s3 – diagnostic signal – purple line. The remaining notation as in Figure 6.

Table 12. Obtained diagnoses for the closed-loop liquid level control system.
Time s·10^5  0.00-0.82  0.82-0.85  0.85-1.00  1.00-2.00
d            ∅          f1         f1 ∧ f2    f1 ∧ f2 ∧ f3

Further research will focus on developing a theoretical framework encompassing the fault compensation aspects described in this paper.

References
[1] J. Korbicz, J. M. Kościelny, Z. Kowalczuk, W. Cholewa, Fault Diagnosis. Models. Artificial Intelligence. Applications, Springer, 2004, ISBN: 3540407677.
[2] J. M. Kościelny, Process diagnostics methodology, in J. Korbicz, J. M. Kościelny, Z. Kowalczuk, W. Cholewa (Eds.), Fault Diagnosis. Models. Artificial Intelligence. Applications, Springer, 2004, ISBN: 3540407677.
[3] M. Blanke, M.
Staroswiecki, Diagnosis and Fault-Tolerant Control, Springer-Verlag, New York, 2015, ISBN: 978-3-540-35653-0.
[4] K. Patan, J. Korbicz, Nonlinear model predictive control of a boiler unit: a fault-tolerant control study, International Journal of Applied Mathematics and Computer Science, 22(1) (2012), pp. 225-237. doi: 10.2478/v10006-012-0017-6
[5] M. Bartyś, Chosen Issues of Fault Isolation, Polish Scientific Publishers PWN, 2014, ISBN: 9788301178109.
[6] J. Gertler, Fault Detection and Diagnosis in Engineering Systems, Marcel Dekker Inc., New York, 1998, ISBN: 0824794273.
[7] M. Bartyś, Generalized reasoning about faults based on the diagnostic matrix, International Journal of Applied Mathematics and Computer Science, 23(2) (2013), pp. 407-417. doi: 10.2478/amcs-2013-0031
[8] D. Smith, K. Simpson, Functional Safety, Taylor & Francis Group, London, 2004, ISBN: 9780080477923.
[9] E. Marszal, Tolerable risk guidelines, ISA Transactions, 40(3) (2001), pp. 391-399. doi: 10.1016/s0019-0578(01)00011-8
[10] J. Kościelny, M. Bartyś, A. Sztyber, Diagnosing with a hybrid fuzzy-Bayesian inference approach, Engineering Applications of Artificial Intelligence, 104 (2021), art. no. 104345, pp. 1-11. doi: 10.1016/j.engappai.2021.104345
[11] L. Travé-Massuyès, Bridges between diagnosis theories from control and AI perspectives, in: Intelligent Systems in Technical and Medical Diagnostics, Springer, Heidelberg, 230, 2014, pp. 441-452, ISBN: 9783642398810.
[12] J. de Kleer, J. Kurien, Fundamentals of model-based diagnosis, IFAC Proceedings Volumes, 36(5) (2003), pp. 25-36. doi: 10.1016/s1474-6670(17)36467-4
[13] J. Su, W. Chen, Model-based fault diagnosis system verification using reachability analysis, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(4) (2019), pp. 742-751. doi: 10.1109/tsmc.2017.2710132
[14] C. Kaidis, Wind Turbine Reliability Prediction, Uppsala University, report (2003), pp. 1-72.
[Online] Accessed 27 August 2021. http://www.diva-portal.org/smash/get/diva2:707833/fulltext01.pdf
[15] J. M. Kościelny, M. Syfert, Fuzzy diagnostic reasoning that takes into account the uncertainty of the faults-symptoms relation, International Journal of Applied Mathematics and Computer Science, 16(1) (2006), pp. 27-35. [Online] Accessed 27 August 2021. http://matwbn.icm.edu.pl/ksiazki/amc/amc16/amc1612.pdf
[16] W. Navidi, Statistics for Engineers and Scientists, McGraw-Hill Education, 2014, ISBN: 1259251608.
[17] I. Lira, Evaluating the Measurement Uncertainty: Fundamentals and Practical Guidance, Taylor & Francis, 2002, ISBN: 9780367801564.
[18] M. Catelani, A. Zanobini, L. Ciani, Uncertainty interval evaluation using the chi-square and Fisher distributions in the measurement process, Metrology and Measurement Systems, 17(2) (2010), pp. 195-204. doi: 10.2478/v10178-010-0017-5
[19] M. Bartyś, Diagnosing multiple faults from FDI perspective, in: Z. Kowalczuk, M. Domżalski (Eds.), Advanced Systems for Automation and Diagnostics, 2015, ISBN: 9788363177003.
[20] J. M. Kościelny, M. Bartyś, Z. Łabęda-Grudziak, Tri-valued evaluation of residuals as a method of addressing the problem of fault compensation effect, in J. Korbicz, K. Patan, M. Luzar (Eds.), Advances in Diagnostics of Processes and Systems, Springer, 313 (2021), pp. 31-44, ISBN: 9783030589646.
[21] S. Jakubek, H. P. Jörgl, Fault-diagnosis and fault-compensation for nonlinear systems, Proc. of the 2000 American Control Conference, Chicago, IL, USA, 28-30 June 2000, pp. 3198-3202. doi: 10.1109/acc.2000.879155
[22] E. Wu, S. Thavamani, Y. Zhang, M. Blanke, Sensor fault masking of a ship propulsion system, Control Engineering Practice, 14 (2006), pp. 1337-1345. doi: 10.1016/j.conengprac.2005.09.003
[23] M. G. Gouda, J. A. Cobb, C.-H. Huang, Fault masking in tri-redundant systems, in A. K. Datta, M. Gradinariu (Eds.), LNCS 4280, Springer-Verlag, Berlin Heidelberg, 2006, pp. 304-313, ISBN: 9783540490180.
[24] J. M. Kościelny, M. Bartyś, The idea of smart diagnozers for decentralized diagnostics in Industry 4.0, 2019 4th Conference on Control and Fault Tolerant Systems (SysTol), Casablanca, Morocco, 18-20 September 2019, pp. 123-128. doi: 10.1109/systol.2019.8864791
[25] M. Krysander, E. Frisk, Sensor placement for fault diagnosis, IEEE Transactions on Systems, Man, and Cybernetics - Part A, 38(6) (2008), pp. 1398-1410. doi: 10.1109/tsmca.2008.2003968
[26] S. S. Carlisle, The role of measurement in the development of industrial automation, Acta IMEKO, 3(1) (2014). doi: 10.21014/acta_imeko.v3i1.190
[27] K. Takeda, B. Shibata, Y. Tsuge, H. Matsuyama, The improvement of fault diagnosis algorithm using a signed directed graph, IFAC Proceedings Volumes, 27(5) (1994), pp. 351-356. doi: 10.1016/s1474-6670(17)48052-9
[28] Z. S. Chen, Y. M. Yang, Z. Hu, A technical framework and roadmap of embedded diagnostics and prognostics for complex mechanical systems in prognostics and health management systems, IEEE Transactions on Reliability, 61(2) (2012), pp. 314-322.
doi: 10.1109/tr.2012.2196171

Performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation
ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, pp. 1-8
Ahmed Jassim Ahmed1, Mohammed H. Alkhafaji1, Ali Jafer Mahdi2
1 Electrical Engineering Department, University of Technology, Baghdad, Iraq
2 Electrical Engineering Department, University of Kerbala, Kerbala, Iraq
Section: Research Paper
Keywords: microgrid; distributed generation integration; AutoADD; power losses reduction; voltage profile improvement
Citation: Ahmed Jassim Ahmed, Mohammed H.
Alkhafaji, Ali Jafer Mahdi, Performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation, Acta IMEKO, vol. 11, no. 3, article 21, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-21
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received March 25, 2022; in final form August 30, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Ahmed Jassim Ahmed, e-mail: ahmedjasem858@gmail.com

1. Introduction

An increase in energy demand is an indicator of economic growth, and this demand has been growing rapidly in many sectors, such as the building, transportation, and manufacturing industries. However, energy consumption is linked directly to many environmental issues because fuel and coal are frequently used as the primary sources for electricity generation, as shown in Figure 1, which is the main reason for the emission of greenhouse gases (GHG). These gases are very harmful to the environment [1]. Because of this, many global actors, such as the World Bank, have started encouraging countries to use clean energy sources by supporting their projects financially [2]. Therefore, even during the pandemic, when the economy was affected by the lockdown, renewable energy sources kept growing fast [3]. Integrating renewable energy sources (RES) in low-voltage networks is creating significant changes in the operation of the electric power system. In general, this integration occurs widely in low- and medium-voltage networks.
This leads to the microgrid (MG) concept, which can be defined as a complex energy system that needs a specific framework, coordination of information flows and energy resources, as well as protection and the assurance of reliable energy [4]. It is built by the integration of RES, conventional generators, energy storage devices, and loads, as shown in Figure 2. MGs can work both in the mode connected to the main grid and in islanded mode [5]. Distributed generation (DG) is nowadays gaining a reputation for becoming a main part of the operation of distribution networks. This is due to the technological improvement of many types of RES, such as photovoltaic systems, fuel cells, combined heat and power sources, and wind energy sources. This integration of DGs is of major importance in reducing CO2 emissions, improving the efficiency and security of distribution networks, and achieving reliable operation of these networks [6].

Abstract: A power system in which the generation units, such as renewable energy sources and other types of generation equipment, are located near the loads, thereby reducing operating costs and losses and improving voltage, is called distributed generation (DG), and these generation units are called distributed energy resources. However, DGs must be located optimally to improve the power quality and minimise the power loss of the system. The objective of this paper is to propose an approach for measuring the optimal size and location of DGs in a low-voltage microgrid using the AutoADD algorithm. The algorithm is validated by testing it on the IEEE 33-bus standard system and compared with previous studies; the algorithm proved its efficiency and superiority over the other techniques. A significant improvement in voltage and a reduction in losses were observed when the DGs were placed at the sites decided by the algorithm.
Therefore, AutoADD can be used to find the optimal sizes and locations of DGs in the distribution system. The possibility of islanding the low-voltage microgrid by integrating distributed generation units is discussed, and the results show the feasibility of this scenario during fault periods and periods of energy intermittency.
The uncontrolled allocation of DGs in distribution networks has brought in some serious challenges and problems, like the bidirectional flow of power in the distribution networks and the problems of power losses and voltage drop [7], [8]. Researchers from all over the world are now focusing on these problems, and they have proposed many methods for selecting the optimal location and size of DGs with the aim of minimising, or even eliminating, the losses and improving the voltage of distribution networks with DG. In [7], the author proposed a new particle swarm optimisation (PSO) method to improve the power quality of the network by finding the number of DGs to be connected and the optimal locations of these DGs in the system. This method was validated by testing it on the standard IEEE 30-bus system. The results showed a remarkable improvement in the buses' voltage profiles and a reduction in the power losses of the system. In [9], the integration of RES was studied, as it brings significant cost benefits to smart grid technology. In [10], three types of PSO algorithms were used to control the output of a DG to find its optimum size. To overcome the issues of variation in RES, a model of energy optimisation was proposed in [11]; it uses the probability density function as a mathematical tool to model wind and solar sources. The author in [12] proposed a methodology for finding the optimal size and placement of many DGs. To determine the optimal location, the loss factor is used, and to find the optimal size, the bacterial foraging algorithm is used.
The objectives were to reduce operational costs and losses and to improve the voltage; the work was validated by testing it on the IEEE 119-bus and 33-bus distribution systems. Intermittency is one of the major issues of RES; their integration is studied in [13] through a survey of models from all over the world, which found that communication systems, especially two-way communications, play an important role in the smart grid's energy optimization. In [14], combined nature-inspired algorithms were used to optimally find the best place and size of DGs, and a two-step optimization technique for DG integration was presented. In the first step, particle swarm optimization is used to find the best size of DG, and the obtained results are checked using the negative-load approach for reverse power flow; the optimum location is then found by the weak-bus and loss-sensitivity-factor methods. In the second step, the optimal size of the DGs is found using three nature-based algorithms: the gravitational search algorithm, the PSO algorithm, and a combination of the two. The effectiveness of the technique was proved by testing it on the IEEE 30-bus system.

In this paper, the AutoAdd algorithm is used to find the optimal place and size of a DG in distribution networks. The proposed algorithm is simple, very flexible, easy to use, and supports all types of DG. It differs from the other algorithms in processing time: whereas the other algorithms can take a long time, in some cases hours, AutoAdd performs almost instantly and gives the best places for DGs to achieve the best performance. It is driven through the OpenDSS program, and the power flow analysis is executed by OpenDSS through the MATLAB COM interface. The algorithm and OpenDSS are validated by testing on the standard IEEE 33-bus system.
The results are compared with previous works; the comparison proves that OpenDSS is reliable and that the AutoAdd algorithm gives better results in terms of losses and voltages than the earlier studies. After validating the tools, the low-voltage microgrid of Baghdad/Al-Ghazaliya-655 is analyzed, and the DGs are placed optimally to enhance its performance and to assess the capability of the microgrid to operate in isolated mode, with the objectives of reducing cost and losses and minimizing the impact on the climate, contributing to Sustainable Development Goals (SDGs) 7 and 13.

2. Impact of integrating distributed generation on losses and voltage

2.1. Impact on losses
Integrating DGs has been shown to reduce both real and reactive losses, because the generation is placed near the load. Many early studies showed that the size and location of a DG play a significant role in eliminating power losses [15]. The location and size of a DG in a distribution network that give the minimum losses are generally identified as the optimal location and optimal size. The placement procedure for DGs is similar to the placement procedure for capacitors aimed at loss reduction; they differ in that DG units affect both real and reactive power, whereas capacitors affect only reactive power. Installing a small DG unit has been shown to reduce losses even in a network with increasing losses [16].

2.2. Impact on voltage
DG is known to support and improve the system's voltage [17], but that is not always the case, as it has been shown that integrating DGs can cause undervoltage or overvoltage. Additionally, some DGs, such as wind generators and photovoltaics, change their produced power continuously.

Figure 1. Energy production sources.
Figure 2. Microgrid architecture.

The result badly affects the power quality because of the fluctuation of voltage [8], [18].
In addition, undervoltage and overvoltage are reported in distribution networks with integrated DGs because the integrated DGs are unsuited to the current regulation methods. For regulation, distribution systems generally use tap-changing transformers, capacitors, and voltage regulators. These methods proved reliable in the past, when the power flow was unidirectional. Today, however, integrating DGs into distribution networks has a significant impact on the performance of the voltage regulation methods, because the new DGs cause bidirectional power flows in the distribution system. At the same time, DGs influence distribution networks positively through their contribution to frequency regulation and to reactive power compensation for voltage control. Moreover, in case of faults in the main network, they can work as a spinning reserve [19].

3. AutoAdd algorithm

In this paper, the AutoAdd algorithm is used; it is an internal feature of OpenDSS that automatically finds the optimal location of capacitors and generators. The optimization problem of the distribution system analysis can be written as in equation (1):

min f(x, y) = P_l
subject to g(x, u) = 0
0.95 ≤ V_i ≤ 1.05,    (1)

where g(x, u) = 0 represents the distribution power flow equations, P_l represents the power losses, and V_i is the voltage at the i-th bus [20]. Equation (1) determines the amount of active and reactive power at every node so as to reduce the losses of the system while keeping the voltages within the given limits. In addition, OpenDSS uses an iterative algorithm that calculates the unknown voltages and currents; AutoAdd then directly accesses the array of injection currents in the solution and takes advantage of it [20].
To move the generators over all the buses, OpenDSS searches for the available bus that results in the best improvement in capacity and losses according to equation (2) [21]:

minimize (loss weight · losses + UE weight · UE),    (2)

where the loss weight is the weighting factor for losses in the AutoAdd functions, and the UE weight is the weighting factor for unserved energy (UE). UE represents load energy that is considered unserved because the power exceeds its maximum values. The convergence of the AutoAdd algorithm is fast because the system's admittance matrix is fixed and does not change; finding the location of a generator generally takes about 2 to 4 iterations per solution. The improvement factor indicates the next best location to supply power [22]. Figure 3 shows the AutoAdd algorithm.

4. Standard IEEE 33-bus system

Figure 4 depicts the standard IEEE 33-bus system. It has thirty-two branches and thirty-three buses; the voltage level at all buses is 12.66 kV, and the voltage limits for all buses are set at ±5 %. A synchronous generator feeds the network, and the load of 3.715 MW and 2.3 Mvar is distributed over thirty-two buses with different power factors. The line data and load data of the system are given in Table 1 [23].

5. Proposed microgrid of Baghdad/Al-Ghazaliya-655

The proposed microgrid model in Figure 5 represents a distribution system in Baghdad/Al-Ghazaliya-655, Iraq. It has fifty-eight buses and fifty-seven branches; the voltage level at all buses is 0.4 kV.

Figure 3. Flow chart of the AutoAdd algorithm.
Figure 4. Single-line diagram of the IEEE 33-bus system.

Table 1. Electrical parameters of the IEEE 33-bus system (Rij resistance, Xij reactance, Pj real power, Qj reactive power).
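The bus search of equation (2) can be sketched as a greedy loop that evaluates the weighted objective at every candidate bus. This is an illustrative reconstruction, not OpenDSS code: `run_power_flow` is a hypothetical stand-in with toy numbers, since the real losses and unserved energy come from the OpenDSS solver.

```python
# Sketch of an AutoAdd-style greedy placement search (assumption: a
# stand-in for OpenDSS's internal routine; run_power_flow is a toy model,
# not the real power-flow solver).

LOSS_WEIGHT = 1.0   # weighting factor for losses in equation (2)
UE_WEIGHT = 1.0     # weighting factor for unserved energy (UE)

def run_power_flow(dg_bus, dg_kw):
    """Toy stand-in: losses fall the closer the DG sits to the assumed
    load centre (bus 30 here), and UE falls with injected power."""
    losses = 202.6 - 1.5 * dg_kw / 100 + 2.0 * abs(dg_bus - 30)
    ue = max(0.0, 100.0 - 0.1 * dg_kw)
    return losses, ue

def autoadd_step(candidate_buses, dg_kw):
    """Place one generator: pick the bus minimising the weighted
    objective of equation (2)."""
    best_bus, best_cost = None, float("inf")
    for b in candidate_buses:
        losses, ue = run_power_flow(b, dg_kw)
        cost = LOSS_WEIGHT * losses + UE_WEIGHT * ue
        if cost < best_cost:
            best_bus, best_cost = b, cost
    return best_bus, best_cost

bus, cost = autoadd_step(range(2, 34), dg_kw=1000)
```

In the real algorithm this evaluation is repeated once per generator to add, which is why the fixed admittance matrix matters: each candidate evaluation reuses the same factorised system.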
Bus i  Bus j  Rij (Ω/km)  Xij (Ω/km)  Pj (kW)  Qj (kvar)  Length (km)
 1      2      0.0922      0.0477      100       60         1
 2      3      0.4930      0.2511       90       40         1
 3      4      0.3660      0.1864      120       80         1
 4      5      0.3811      0.1941       60       30         1
 5      6      0.8190      0.7070       60       20         1
 6      7      0.1872      0.6188      200      100         1
 7      8      1.7114      1.2351      200      100         1
 8      9      1.0300      0.7400       60       20         1
 9     10      1.0400      0.7400       60       20         1
10     11      0.1966      0.0650       45       30         1
11     12      0.3744      0.1238       60       35         1
12     13      1.4680      1.1550       60       35         1
13     14      0.5416      0.7129      120       80         1
14     15      0.5910      0.5260       60       10         1
15     16      0.7463      0.5450       60       20         1
16     17      1.2890      1.7210       60       20         1
17     18      0.7320      0.5740       90       40         1
 2     19      0.1640      0.1565       90       40         1
19     20      1.5042      1.3554       90       40         1
20     21      0.4095      0.4784       90       40         1
21     22      0.7089      0.9373       90       40         1
 3     23      0.4512      0.3083       90       50         1
23     24      0.8980      0.7091      420      200         1
24     25      0.8960      0.7011      420      200         1
 6     26      0.2030      0.1034       60       25         1
26     27      0.2842      0.1447       60       25         1
27     28      1.0590      0.9337       60       20         1
28     29      0.8042      0.7006      120       70         1
29     30      0.5075      0.2585      200      600         1
30     31      0.9744      0.9630      150       70         1
31     32      0.3105      0.3619      210      100         1
32     33      0.3410      0.5302       60       40         1

Figure 5. Single-line diagram of the Baghdad/Al-Ghazaliya-655 microgrid.

Table 2. Electrical parameters of the Baghdad/Al-Ghazaliya-655 microgrid.
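Since Table 1 gives per-kilometre parameters and a length for each branch, the working quantities follow by simple scaling. A minimal sketch (the 200 A branch current is an illustrative assumption, not a value from the paper):

```python
# Converting Table 1 per-km parameters into branch impedances, and the
# resistive loss on a branch. Minimal sketch; the current is illustrative.

def branch_impedance(r_per_km, x_per_km, length_km):
    """Series impedance of one line section, in ohms."""
    return complex(r_per_km, x_per_km) * length_km

def branch_loss_kw(current_a, r_per_km, length_km):
    """Real power dissipated in the branch, 3·I²·R in kW, for a
    three-phase line carrying a balanced current I per phase."""
    return 3 * current_a ** 2 * (r_per_km * length_km) / 1000

z = branch_impedance(0.0922, 0.0477, 1.0)   # first row of Table 1
loss = branch_loss_kw(200.0, 0.0922, 1.0)   # 200 A assumed per phase
```

Summing such per-branch losses over all thirty-two branches is exactly what the power-flow solver reports as the system losses.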
All branches have resistance 0.3416 Ω/km and reactance 0.3651 Ω/km.

Bus i  Bus j  Length (km)  Active power (kW)  Reactive power (kvar)
 1      2      0.001
 2      3      0.02          15                 9.3
 3      4      0.01          20                12.4
 4      5      0.04          30                18.6
 5      6      0.01          20                12.4
 2      7      0.02          20                12.4
 7      8      0.01          15                 9.3
 8      9      0.04          15                 9.3
 3     11      0.015         10                 6.2
11     12      0.005         10                 6.2
12     13      0.005         10                 6.2
13     14      0.005         15                 9.3
14     15      0.015         15                 9.3
15     16      0.0075        15                 9.3
16     17      0.0075        15                 9.3
17     18      0.015         25                15.5
 4     19      0.015         15                 9.3
19     20      0.005         15                 9.3
20     21      0.005         15                 9.3
21     22      0.005         10                 6.2
22     23      0.005         10                 6.2
23     24      0.005         10                 6.2
24     25      0.005         15                 9.3
25     26      0.015         20                12.4
26     27      0.015         20                12.4
 5     28      0.015         30                18.6
28     29      0.015         15                 9.3
29     30      0.015         15                 9.3
30     31      0.015         15                 9.3
31     32      0.015         25                15.5
 6     33      0.015         15                 9.3
33     34      0.015         15                 9.3
34     35      0.005         15                 9.3
35     36      0.005         15                 9.3
36     37      0.005         15                 9.3
37     38      0.015         15                 9.3
38     39      0.015         25                15.5
 7     40      0.015         15                 9.3
40     41      0.015         15                 9.3
41     42      0.015         20                12.4
42     43      0.015         15                 9.3
43     44      0.015         20                12.4
 8     45      0.015         15                 9.3
45     46      0.015         15                 9.3
46     47      0.015         15                 9.3
47     48      0.015         10                 6.2
48     49      0.005         10                 6.2
49     50      0.005         10                 6.2
50     51      0.005         15                 9.3
 9     10      0.03         100                62
 9     52      0.005         10                 6.2
52     53      0.005         10                 6.2
53     54      0.005         15                 9.3
54     55      0.015         15                 9.3
55     56      0.015         20                12.4
56     57      0.015         20                12.4
57     58      0.015         40                24.8
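A quick consistency check on Table 2: every load in the table satisfies Q = 0.62·P, so summing the active-power column should reproduce the stated system totals of 1 MW and 0.62 Mvar. A sketch using the table's load values:

```python
# Consistency check of Table 2 load data against the stated totals
# (1 MW, 0.62 Mvar). Values transcribed from the table, grouped by feeder.
loads_kw = [
    15, 20, 30, 20,                        # buses 3-6
    20, 15, 15,                            # buses 7-9
    10, 10, 10, 15, 15, 15, 15, 25,        # buses 11-18
    15, 15, 15, 10, 10, 10, 15, 20, 20,    # buses 19-27
    30, 15, 15, 15, 25,                    # buses 28-32
    15, 15, 15, 15, 15, 15, 25,            # buses 33-39
    15, 15, 20, 15, 20,                    # buses 40-44
    15, 15, 15, 10, 10, 10, 15,            # buses 45-51
    100,                                   # bus 10
    10, 10, 15, 15, 20, 20, 40,            # buses 52-58
]

total_p = sum(loads_kw)                       # should be 1000 kW
total_q = sum(0.62 * p for p in loads_kw)     # should be 620 kvar
```

The check passes, which gives some confidence that the flattened table was reconstructed without transcription loss.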
The voltage limits are within ±5 %. The load of 1 MW and 0.62 Mvar is distributed over fifty-five buses, as presented in Table 2.

6. Analysis and results

6.1. Analysis of the test system
The IEEE 33-bus system was modelled in the OpenDSS program, and voltages and losses were calculated using the Newton method. A summary is given in Figure 6 and Table 3. The comparison shows that the results match those obtained by the methods in [24]-[26]. Table 4 lists solutions by the researchers in [27]-[30] for the optimal size and location of three distributed generators that reduce the losses of the IEEE 33-bus system. Comparing the algorithms on minimum voltage, it can be seen that all of them keep the voltages within ±5 %; on losses, the proposed algorithm achieved the minimum compared with the others. In terms of construction and coding, all the other algorithms require prior programming knowledge and complex code to implement, whereas the AutoAdd algorithm is built into the OpenDSS program and needs no coding. As for the operating time, the other algorithms need a high number of iterations to converge, especially the PSO algorithm, and therefore demand a high-specification computer, whereas AutoAdd can run on any computer and gives the results almost instantly. After adding three DGs with a power factor of 0.85, the results of the network analysis in Figure 7 and Table 5 show the improvement in voltages and the reduction in losses.

6.2. Analysis of the practical network

6.2.1. Grid-connected mode
Load flow was run on the presented network using the fixed-point method for load levels of 100 %, 90 %, 80 %, 70 %, 60 %, 50 %, and 40 %, corresponding to the hours 12 pm, 7 pm, 8 pm, 10 am, 9 am, 12 am, and 5 am, until the voltage fell within ±5 %; the generation size was then selected on that basis.
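The ±5 % screening applied at each load level reduces to a per-bus limit check. A minimal sketch; the sample voltages are illustrative (bus 39 at 0.873 pu mirrors the worst bus reported for the 100 % load case):

```python
# Per-bus check of the ±5 % voltage band used throughout the paper.
V_MIN, V_MAX = 0.95, 1.05  # per-unit limits

def violating_buses(voltages_pu):
    """Return the bus numbers whose per-unit voltage lies outside ±5 %."""
    return [bus for bus, v in voltages_pu.items()
            if not (V_MIN <= v <= V_MAX)]

# Illustrative values only, not the full result set of the paper.
sample = {1: 1.0, 5: 0.97, 39: 0.873}
bad = violating_buses(sample)  # [39]
```

As long as this list is non-empty for some load level, more DG capacity (or a better placement) is needed, which is the stopping rule described above.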
The results are shown in Table 6. After the analysis of the system, it was decided to add three DGs with a total size of 600 kW to minimize the losses and bring the voltages within ±5 %. The optimal place and size of the DGs were found by the AutoAdd algorithm, as listed in Table 7. After the addition of the DGs, the voltage improved at all buses of the proposed system for the four load levels, as presented in Figures 8 to 14. Moreover, the losses of the system decreased significantly, as shown in Figure 15, because the distributed generators are located near the load.

6.2.2. Isolated mode
This section discusses the possibility of islanding the microgrid during fault periods. As the total load of the network equals 1 MW at peak time and 500 kW at low-demand times, the total generation is raised to 1.2 MW by adding two standby units for this scenario to cover the load, as in Table 8. A time-series load flow is applied over 24 hours, and the cases of 100 %, 75 %, and 50 % load, at the hours 12 am, 10 am, and 12 pm, were taken

Table 3. Comparison of the results of AutoAdd with other algorithms.

Algorithm        Losses (kW)  Min voltage (pu)  DG locations   DG sizes (MW)
Proposed method    71.4        0.96839          (14, 24, 30)   (0.76, 1.07, 1.02)
ACSA [27]          74.26       0.9778           (14, 24, 30)   (0.7798, 1.125, 1.349)
FWA [28]           88.68       0.9680           (14, 18, 32)   (0.5897, 0.1895, 1.0146)
ACO-ABC [29]       71.4        0.9685           (14, 24, 30)   (0.7547, 1.0999, 1.0714)
PSO [30]           72.8        0.96868          (13, 24, 30)   (0.8, 1.09, 1.053)

Figure 6. Per-unit bus voltages of the IEEE 33-bus system.
Figure 7. Per-unit voltages before and after adding DGs with AutoAdd.

Table 4. Power flow analysis on the IEEE 33-bus system compared with other research.

Algorithm        Losses      Min voltage (pu)  Bus
Proposed method  202.6 kW    0.913             18
[24]             202.7 kW    0.9131            18
[25]             202.6 kW    0.913             18
[26]             202.6 kW    0.913             18

Table 5. Losses and minimum voltages before and after adding 3 DGs.
                 Losses (kW)   Min voltage (bus)
Without DG          202.6       0.913 (18)
With three DGs       12.29      0.99178 (18)

to evaluate different load and irradiance cases; the results are shown in Figure 16 and Figure 17. The minimum voltage in all cases is within ±5 %, and the system can operate successfully in the isolated mode. The losses are lower in the 100 % case than in the 50 % case, even though the load is higher, because the PV does not generate at night.

7. Discussion

This paper was conducted to increase the performance of microgrids by improving the voltage and reducing the losses while, at the same time, reducing the GHG effect by integrating RES. The results showed that DGs have a major impact on the voltage profiles: the DGs increased the voltage level in all the studied cases, in proportion to the capacity of the DG. It can also be noticed from the simulation results that the location of the

Table 6. Power flow results of the proposed system.

Load (%)  Total load (kW)  Losses (kW)  Min voltage (bus)
100        1000             74.4         0.873 (39)
 90         900             58.95        0.88796 (39)
 80         800             45.5         0.9017 (39)
 70         700             34.17        0.915 (39)
 60         600             24.59        0.92807 (39)
 50         500             16.7         0.94075 (39)
 40         400             10.52        0.95312 (39)

Table 7. Results of optimum size and location selection for the connected mode.

DG type        Size (MW)  Location  PF
Diesel engine  0.3         5        0.85
PV             0.2        10        1
Fuel cell      0.1        57        0.9
Total          0.6

Figure 8. Voltages of all buses at 100 % load before and after the addition.
Figure 9. Voltages of all buses at 90 % load before and after the addition.

Table 8. Results of optimum size and location selection for the isolated mode.

DG type           Size (MW)  Location  PF
Diesel engine A   0.3         5        0.85
Diesel engine B   0.5         7        0.85
PV                0.2        10        1
Fuel cell         0.1        57        0.9
Micro turbine     0.1        25        0.9
Total             1.2

Figure 10. Voltages of all buses at 80 % load before and after the addition.
Figure 11.
Voltages of all buses at 70 % load before and after the addition.
Figure 12. Voltages of all buses at 70 % load before and after the addition.

DG is important for the whole network, as shown in the results. As for the losses, the results show that the size of the DG is important: the bigger the size, the greater the reduction in losses. The AutoAdd algorithm is used to find the optimal size of the DGs; this algorithm is fast, accurate, and very easy to use, requiring no prior experience or training. The proposed algorithm was applied to the IEEE 33-bus system, and the results were compared against those of other research in Table 4. The results showed that the suggested algorithm is effective in finding the optimal size and location of DGs and helps achieve better results in terms of loss reduction and voltage improvement, so it was applied to the real system to install several types of DG, which improved the voltage and losses as shown in the results. The scenario of islanding the low-voltage microgrid is applied for the first time to an Iraqi case to solve the problem of power intermittency, and the results showed it was successful. Some parameters not considered in this work should be mentioned: the annual variation of the load, and the economic issues related to installing the DG units. The loads in the distribution network vary continually, which causes the network's losses and voltages to vary. On the other hand, a large 200 kW PV system and a 100 kW fuel cell have been installed; these are pollution-free, so they do not affect nature and run at low cost.

8. Conclusion

In this paper, the reduction of losses and the improvement of voltage have been discussed, and the AutoAdd algorithm has been introduced and used to find the optimal size and location of DG.
The IEEE 33-bus system was used to test the proposed algorithm, and the results of the work were compared against results from other studies; the comparison proved that the proposed algorithm is sound and helps achieve more efficient DG integration. The problem of intermittency of electrical power is solved for the Iraqi case considering the maximum load condition. Future work will use the practical microgrid of this study, after the addition of the DGs and the improvement of voltage and losses, to integrate electric vehicles and prepare a suitable electrical environment for them, aiming at zero pollution in the transportation sector while taking into account the installation cost of the DGs and charging stations and the variation of the loads.

Conflicts of interest
The authors declare that there are no conflicts of interest.

Figure 13. Voltages of all buses at 50 % load before and after the addition.
Figure 14. Voltages of all buses at 40 % load before and after the addition.
Figure 15. Losses of all buses before and after the addition for the 100 % case.
Figure 16. Voltages of all buses for 50 %, 75 %, and 100 % load.
Figure 17. Losses of all buses for 50 %, 75 %, and 100 % load.

References
[1] T. Addabbo, A. Fort, M. Mugnaini, L. Parri, S. Parrino, A. Pozzebon, V. Vignoli, A low-power IoT architecture for the monitoring of chemical emissions, Acta IMEKO 8(2) (2019), pp. 53-61. DOI: 10.21014/acta_imeko.v8i2.642
[2] Y. Zahraoui, M. R. Basir Khan, I. Alhamrouni, S. Mekhilef, M. Ahmed, Current status, scenario, and prospective of renewable energy in Algeria: a review, Energies 14(9) (2021). DOI: 10.3390/en14092354
[3] IEA (2021), World Energy Outlook 2021, IEA, Paris. Online [accessed 28 August 2022]: https://www.iea.org/reports/world-energy-outlook-2021
[4] A. S. Hassan, A. Firrincieli, C. Marmaras, L. M. Cipcigan, M. A.
Pastorelli, Integration of electric vehicles in a microgrid with distributed generation, 49th International Universities Power Engineering Conference (UPEC), 2014.
[5] N. W. A. Lidula, A. D. Rajapakse, Microgrids research: a review of experimental microgrids and test systems, Renewable and Sustainable Energy Reviews 15(1) (2011), pp. 186-202.
[6] Y. Yan, Y. Qian, H. Sharif, D. Tipper, A survey on smart grid communication infrastructures: motivations, requirements and challenges, IEEE Communications Surveys & Tutorials 15(1) (2013), pp. 5-20. DOI: 10.1109/SURV.2012.021312.00034
[7] S. C. Reddy, P. V. N. Prasad, A. J. Laxmi, Power quality and reliability improvement of distribution system by optimal number, location and size of DGs using particle swarm optimization, 2012 IEEE 7th International Conference on Industrial and Information Systems (ICIIS), 2012.
[8] V. Vita, T. Alimardan, L. Ekonomou, The impact of distributed generation in the distribution networks' voltage profile and energy losses, 2015 IEEE European Modelling Symposium (EMS), 2015.
[9] M. Husain Rehmani, M. Reisslein, A. Rachedi, M. Erol-Kantarci, M. Radenkovic, Integrating renewable energy resources into the smart grid: recent developments in information and communication technologies, IEEE Transactions on Industrial Informatics 14(7) (2018), pp. 2814-2825. DOI: 10.1109/TII.2018.2819169
[10] J. J. Jamian, M. W. Mustafa, H. Mokhlis, M. N. Abdullah, Comparative study on distributed generator sizing using three types of particle swarm optimization, Third Int. Conference on Intelligent Systems Modelling and Simulation, 2012.
[11] D. Mazzeo, G. Oliveti, E. Labonia, Estimation of wind speed probability density function using a mixture of two truncated normal distributions, Renewable Energy 115 (2018), pp. 1260-1280. DOI: 10.1016/j.renene.2017.09.043
[12] K. R. Devabalaji, K.
Ravi, Optimal size and siting of multiple DG and DSTATCOM in radial distribution system using bacterial foraging optimization algorithm, Ain Shams Engineering Journal 7(3) (2016), pp. 959-971. DOI: 10.1016/j.asej.2015.07.002
[13] K. S. Alimgeer, Z. Wadud, I. Khan, M. Usman, A. B. Qazi, F. A. Khan, An innovative optimization strategy for efficient energy management with day-ahead demand response signal and energy consumption forecasting in smart grid using artificial neural network, IEEE Access 8 (2020), pp. 84415-84433. DOI: 10.1109/ACCESS.2020.2989316
[14] A. Ramamoorthy, R. Ramachandran, Optimal siting and sizing of multiple DG units for the enhancement of voltage profile and loss minimization in transmission systems using nature inspired algorithms, The Scientific World Journal (2016), art. no. 1086579. DOI: 10.1155/2016/1086579
[15] A. Hasibuan, S. Masri, W. A. F. W. B. Othman, Effect of distributed generation installation on power loss using genetic algorithm method, IOP Conference Series: Materials Science and Engineering 308 (2018), art. no. 012034. DOI: 10.1088/1757-899X/308/1/012034
[16] A. Nieto, Power quality improvement in power grids with the integration of energy storage systems, International Journal of Engineering and Technical Research 5 (2016), pp. 438-443.
[17] A. Amos Ogunsina, M. Omolayo Petinrin, O. Olayemi Petinrin, E. Nelson Offornedo, J. Olawole Petinrin, G. Olusola Asaolu, Optimal distributed generation location and sizing for loss minimization and voltage profile optimization using ant colony algorithm, SN Applied Sciences 3(2) (2021), p. 248. DOI: 10.1007/s42452-021-04226-y
[18] V. S. Lopes, C. L. T. Borges, Impact of the combined integration of wind generation and small hydropower plants on the system reliability, IEEE Transactions on Sustainable Energy 6(3) (2016), pp. 1169-1177. DOI: 10.1109/TSTE.2014.2335895
[19] S. Habib, M. Kamran, U.
Rashid, Impact analysis of vehicle-to-grid technology and charging strategies of electric vehicles on distribution networks – a review, Journal of Power Sources 277 (2015), pp. 205-214. DOI: 10.1016/j.jpowsour.2014.12.020
[20] S. Singh, D. Shukla, S. P. Singh, Peak demand reduction in distribution network with smart grid-enabled CVR, 2016 IEEE Innovative Smart Grid Technologies Asia (ISGT-Asia), 2016.
[21] R. C. Dugan, OpenDSS manual, EPRI training material, pp. 1-184, 2019.
[22] M. Nasser, I. Ali, M. Alkhafaji, Optimal placement and size of distributed generators based on AutoAdd and PSO to improve voltage profile and minimize power losses, Engineering and Technology Journal 39(3A) (2021), pp. 453-464. DOI: 10.30684/etj.v39i3A.1781
[23] O. D. Montoya, W. Gil-González, C. Orozco-Henao, Vortex search and Chu-Beasley genetic algorithms for optimal location and sizing of distributed generators in distribution networks: a novel hybrid approach, Engineering Science and Technology, an International Journal 23(6) (2020), pp. 1351-1363. DOI: 10.1016/j.jestch.2020.08.002
[24] R. Rao, S. Narasimham, M. Ramalingaraju, Optimization of distribution network configuration for loss reduction using artificial bee colony algorithm, 2007.
[25] M. E. Soliman, A. Y. Abdelaziz, R. M. El-Hassani, Distribution power system reconfiguration using whale optimization algorithm, Int. Journal of Applied Power Engineering 9 (2020), pp. 48-57.
[26] A. Y. Abdelaziz, R. A. Osama, S. M. Elkhodary, Distribution systems reconfiguration using ant colony optimization and harmony search algorithms, Electric Power Components and Systems 41(5) (2013), pp. 537-554. DOI: 10.1080/15325008.2012.755232
[27] T. T. Nguyen, A. V. Truong, T. A. Phung, A novel method based on adaptive cuckoo search for optimal network reconfiguration and distributed generation allocation in distribution network, International Journal of Electrical Power & Energy Systems 78 (2016), pp. 801-815.
DOI: 10.1016/j.ijepes.2015.12.030
[28] A. Mohamed Imran, M. Kowsalya, D. P. Kothari, A novel integration technique for optimal network reconfiguration and distributed generation placement in power distribution networks, Int. Journal of Electrical Power & Energy Systems 63 (2014), pp. 461-472. DOI: 10.1016/j.ijepes.2014.06.011
[29] M. R. AlRashidi, M. F. AlHajri, Optimal planning of multiple distributed generation sources in distribution networks: a new approach, Energy Conversion and Management 52(11) (2011), pp. 3301-3308. DOI: 10.1016/j.enconman.2011.06.001
[30] M. Kumar, P. Nallagownden, I. Elamvazuthi, Optimal placement and sizing of distributed generators for voltage-dependent load model in radial distribution system, Renewable Energy Focus 19-20 (2017), pp. 23-37. DOI: 10.1016/j.ref.2017.05.003

Acta IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, pp. 73-79

Design of a non-invasive sensing system for
diagnosing gastric disorders

Rosario Morello1, Laura Fabbiano2, Paolo Oresta2, Claudio De Capua1
1 DIIES, University Mediterranea of Reggio Calabria, Italy
2 DMMM, Politecnico di Bari University, Italy

Section: Research paper
Keywords: gastric disorders; gastric slow wave; EGG; myoelectrical measurements
Citation: Rosario Morello, Laura Fabbiano, Paolo Oresta, Claudio De Capua, Design of a non-invasive sensing system for diagnosing gastric disorders, Acta IMEKO, vol. 10, no. 4, article 14, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-14
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received October 2, 2021; in final form October 24, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Rosario Morello, e-mail: rosario.morello@unirc.it

1. Introduction

The rapid advancement of information technologies now allows physicians to smartly follow and assist patients in real time, even remotely, through simple dedicated applications [1]-[3]. In the same perspective, the case of gastrointestinal pathologies is addressed here. Dyspepsia, stomach ulcer, gastritis, and esophageal reflux are some examples of gastrointestinal motility disorders. Such pathologies are widely spread among the population, and their symptoms can become strongly debilitating. Gastric disorders include several dysfunctions of the stomach's digestive activity. Gastroesophageal scintigraphy and endoscopy (gastroscopy) are at present two invasive techniques extensively used in practice to diagnose gastric disorders. During the digestive function, the stomach muscles contract rhythmically to allow the digestive activity to be performed. This activity is regulated by myoelectrical waves.
Abstract
Gastric disorders are widely spread among the population of any age. At present, the diagnosis is made using invasive systems that cause several side effects. This manuscript proposes an innovative non-invasive sensing system for diagnosing gastric dysfunctions. The electrogastrography (EGG) technique is used to record myoelectrical signals of the stomach's activity. Although the EGG technique has been known for a long time, several issues concerning signal processing and the definition of suitable diagnostic criteria are still unresolved, so EGG is to this day a trial practice. The authors want to overcome the current limitations of the technique and improve its relevance. To this purpose, a smart EGG sensing system has been designed to non-invasively diagnose gastric disorders. In detail, the system records the gastric slow waves by means of skin surface electrodes placed in the epigastric area; cutaneous myoelectrical signals are thus acquired from the body surface in proximity to the stomach, and the electrogastrographic record is then processed. According to the diagnostic model designed by the authors, the system estimates specific diagnostic parameters in the time and frequency domains. It uses the discrete wavelet transform to obtain power spectral density diagrams; the frequency and power of the EGG waveform and the dominant frequency components are then analysed. The defined diagnostic parameters are compared with the reference values of a normal EGG in order to estimate the presence of gastric pathologies through the analysis of arrhythmias (tachygastria, bradygastria, and irregular rhythm). The paper describes the design of the system and of the arrhythmia detection algorithm; prototype development and experimental data will be presented in future works. Preliminary results show an interesting relevance of the suggested technique, so it can be considered a promising non-invasive tool for diagnosing gastrointestinal motility disorders.

Such waves, in the presence of additional stimuli, induce the muscles to contract [4]-[8]. Electromyographic measurements of these gastric slow waves can provide important information on the stomach's activity [9]. Electrogastrography (EGG) is a technique, known for several years, based on recording stomach muscle contractions by means of skin electrodes [10], [11]. The technique has suffered from inappropriate data-processing algorithms and interpretation errors, showing poor reliability when used as a diagnostic method for gastrointestinal motility disorders. Nevertheless, recent medical trials have highlighted a clear correlation between abnormal gastric electrical activity and the onset of specific dysfunctions [12], so gastroenterologists have recently reconsidered it as a potential non-invasive screening technique. In addition, the American Gastroenterological Association (AGA) states the clinical relevance of EGG in demonstrating gastric myoelectric abnormalities in patients with unexplained nausea and vomiting or functional dyspepsia [13], [14]. It represents a promising and interesting alternative method for gastric screening, since it has no side effects and is painless [15]. Nevertheless, further advances in processing algorithms and in the design of more accurate measurement systems are required to improve the reliability and evidence of the method [16]. At present, there are no standardized diagnostic criteria, and the state of the art shows little attention to this issue. Therefore, several aspects must still be investigated, and additional studies are needed to assess the use of EGG as an alternative to the currently used invasive techniques [17]-[19].
as stated above, an egg system records the stomach myoelectrical activity by means of cutaneous leads placed over the gastric area. in this way, it is possible to estimate the patient's gastrointestinal condition by analysing the slow waves in the time and frequency domains. in the presence of gastric disorders, myoelectrical abnormalities can be revealed and characterized in the egg records, due to a decreased activity of the stomach muscles and nerves. in healthy individuals, the standard egg record is characterized by a regular electrical rhythm. in detail, it consists of periodic waveforms with a predominant frequency of 3 cycles per minute (cpm) at rest. during the digestive activity, the frequency and intensity of the gastric waves increase. in individuals suffering from gastrointestinal motility disorders, electrogastrographic measurements instead show an irregular rhythm. in addition, sometimes no post-meal increase of the frequency and intensity of the waveform is observed. these and further features must be analysed in order to define suitable diagnostic criteria for characterizing the occurrence of gastric motility disorders. another interesting application of the electrogastrographic technique concerns the study of patients affected by vomiting, unexplained nausea, improper digestion of food and gastroparesis. medical trials have been carried out to get important information on the mechanism that regulates the activity of stomach muscles and nerves in the presence of those disorders [19]-[21]. so, for example, it can be a helpful technique for understanding the origin of the unexplained contractions which cause vomiting in patients affected by anorexia [22], [23]. that would allow gastroenterologists to schedule new therapies so as to reduce the vomiting stimulus. therefore, more and more physicians show renewed interest in this technique. nevertheless, its full potential still has to be established by careful studies.
in this light, the authors have focused their research activity on such aspects in order to overcome limitations and gaps in the interpretation of egg waveforms. the authors proposed in [24] an innovative diagnostic model for characterizing gastric myoelectrical abnormalities due to disorders, and have gained long-standing experience in the recording of myoelectrical signals [25]-[27]. in the present manuscript, the authors describe the developments of the previously proposed model. in detail, a smart and automated egg sensing system has been designed, and its design is described in the following. by means of the embedded diagnostic criteria, the system is able to recognize an abnormal gastric activity. the methodical approach starts with the study of the electrogastrographic technique. standard egg records of healthy persons have been analysed in order to define suitable diagnostic reference parameters. such parameters contribute to defining and recognizing the onset of gastric disorders. then, diagnostic criteria have been defined to optimize the analysis of the egg waves in patients affected by gastric disorders. the diagnosis is based on a multifactorial analysis of the defined diagnostic parameters, which are compared with the respective reference values of a standard egg signal. the egg sensing system has been designed and developed according to the measurement system design model described in [28]. the system acquires and processes gastric myoelectrical waves in compliance with the diagnostic model presented in [24]; it then decides among five alternative diagnoses. in the next section, an overview of the electrogastrographic technique is reported, and the main gastric disorders and some applications of egg in medical practice are described. the third and fourth sections respectively analyse the phenomenon and describe the design of the smart egg sensing system and the embedded diagnostic algorithm.
next, some results are presented, and conclusions follow.

2. electrogastrography
the electrogastrographic technique has been known in the medical field for a long time. it has common features with the electrocardiogram, as both techniques are based on myoelectrical signal measurements. egg is a non-invasive technique based on recording the gastric myoelectrical activity. at present, it cannot be considered an effective diagnostic tool because of its lack of standardization: inaccurate instrumentation, interpretation errors and the lack of approved diagnostic criteria are some of the reasons. so, the authors have carefully examined the state of the art of the technique in order to understand its current use. the method has been used in medical practice to study patients affected by unexplained persistent or episodic symptoms related to gastric motility disorders. further studies have been carried out by analysing the gastric waveforms of patients with unexplained nausea and vomiting; they have shown interesting and promising results. the same analysis cannot be performed by means of invasive diagnostic tools, such as endoscopy, because of the artefacts introduced during the examination. in fact, endoscopy can cause further vomiting stimuli which overlap with the patient's nausea. other clinical trials highlight that functional dyspepsia and gastroparesis can be characterized by analysing the gastric myoelectrical activity. in such cases, arrhythmias of the egg waveform can be clearly observed. further studies have singled out the occurrence of abnormalities in the egg waves of patients with other specific gastric disorders such as stomach ulcer, gastritis, oesophageal reflux, early satiety and anorexia. experimental results have shown an interesting correlation between gastric myoelectrical impulses and stomach diseases. for this reason, the technique can be considered a promising and practical screening tool for the evaluation of several gastrointestinal motility disorders.
nevertheless, before considering egg as a reliable diagnostic test, several aspects still need to be explained and highlighted. the authors have therefore focused their attention on these issues. the final aim is to propose the design of a non-invasive sensing system with embedded diagnostic criteria. in order to use a methodical approach to the matter, we must understand the mechanism which regulates the gastric myoelectrical impulses and the contraction of the stomach muscles. therefore, the behaviour of the stomach during the gastric function (digestion) in healthy persons has been analysed. once the myoelectrical activity of the stomach has been investigated, it has been possible to characterize the standard egg waveform so as to correctly interpret the occurrence of possible abnormalities in the myoelectrical activity [29]-[31]. in detail, the contraction of the stomach muscles and the movement of its nerves are regulated by myoelectrical impulses [30], [31]. such gastric waves control and coordinate the stomach activity. periodic waves, within specific frequency and amplitude ranges, are usual and allow the stomach to digest food. electrogastrographic signals have a relatively low amplitude, about 200 − 5000 µV; consequently, the acquired signal must be amplified before the processing stage. the frequency range is 0.016 − 0.25 hz, equal to 1 − 15 cpm, see figure 1 for reference. at rest, slow waves depolarize the gastric smooth muscles without causing contraction. the amplitude of the egg waves increases with the ingestion of food and when digestion starts, due to the increased activity of the stomach muscles. during digestion, indeed, the contraction of the muscles is caused by additional depolarization, so the gastric slow waves control the fundamental frequency and the direction of contraction. this behaviour describes the regular mechanism of the gastric function.
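the cpm/hz correspondence used above (1 − 15 cpm corresponding to about 0.016 − 0.25 hz) is a plain division by 60; a minimal sketch, with function names of our own choosing:

```python
def cpm_to_hz(cpm: float) -> float:
    """convert cycles per minute to hertz (1 cpm = 1/60 hz)."""
    return cpm / 60.0

def hz_to_cpm(hz: float) -> float:
    """convert hertz to cycles per minute."""
    return hz * 60.0

# the egg band of interest, 1-15 cpm, expressed in hz
print(round(cpm_to_hz(1.0), 3))   # lower band edge in hz
print(round(cpm_to_hz(15.0), 3))  # upper band edge in hz
```

evaluating the two band edges reproduces the 0.016 − 0.25 hz range quoted in the text (the lower edge rounds to 0.017 hz at three decimals).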
differently, in the presence of gastrointestinal disorders, arrhythmias can be observed in the egg waveform due to an incorrect stomach activity. such abnormal myoelectrical activity causes changes in the fundamental frequency component and in its intensity. for instance, when a reduced contractile function of the stomach is observed, the egg waveform is characterized by lower frequency values of the fundamental component, known as bradygastria; it is due to a reduced number of contractions. conversely, higher frequency values of the fundamental component, or tachygastria, cause stomach atonicity. generally, the pathogenesis of the gastric myoelectrical arrhythmias is due to the delayed stomach emptying which occurs when the individual is affected by gastrointestinal motility disorders; consequently, it causes a reduced stomach activity.

3. standard egg waveform
clinicians suggest collecting egg recordings after overnight fasting and during food digestion, in order to analyse the gastric function at rest and during the digestive activity. the patient must be in a comfortable position to prevent movement artefacts and should remain motionless during the whole egg acquisition. a preliminary recording is performed with an empty stomach, lasting 15 to 60 minutes. subsequently, the patient must consume a caloric meal (300 kcal), and a further egg recording of 30 to 120 minutes is acquired. normally, physicians suggest a fasting recording of 30 minutes and a postprandial recording of 60 minutes. in this way, it is possible to evaluate the gastric response during meal digestion. a set of egg signals, recorded during the fasting and postprandial stages, has been analysed in the time and frequency domains by using the discrete wavelet transform. the frequency components and their amplitude (power) have been considered. by means of power/frequency spectral analysis, the postprandial and fasting records have been compared.
commonly, it is assumed that a normal egg waveform is characterized by an averaged dominant frequency of about 3 cpm. during digestion, both the frequency and the associated amplitude increase. rhythm abnormalities include bradygastria (lower dominant frequency), tachygastria (higher dominant frequency) and irregular rhythm (dysrhythmia). nevertheless, such averaged values are not fully reliable, because they can significantly change from individual to individual (physical constitution, age, general health status, etc.). consequently, a careful study of the literature and further analyses of the egg signals of healthy individuals have allowed us to characterize a reference model. two quantities must be considered: frequency and amplitude. in a standard egg waveform, the fasting dominant frequency of the gastric waves has to belong to the interval 2 − 4 cpm. in the postprandial recording, the dominant frequency must belong to the normal frequency range of 2 − 4 cpm for at least 75 % of the time; this percentage depends on the type of meal consumed. if the dominant frequency belongs to the previous interval for only 25 % of the egg recording time, then it can be considered an index of dysrhythmia; this occurs because of an altered gastric emptying. frequency values lower than or equal to 2 cpm are an index of bradygastria; frequency values higher than or equal to 4 cpm define tachygastria. in a regular recording segment, different zones with bradygastria and tachygastria may be characterized. dysrhythmia can be characterized if the relative abnormal frequency waveform lasts at least 5 minutes. the recognition of these patterns is simple, since the only parameter to be considered is the dominant frequency value. further relevant parameters in the time domain can be used to characterize an irregular gastric activity. for example, an abnormal egg record can be characterized by the presence of bradygastria and/or tachygastria regions over 30 % of the recording time.
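the frequency thresholds above (2 − 4 cpm normal, at most 2 cpm bradygastria, at least 4 cpm tachygastria) can be sketched as a small classifier; this is a minimal illustration of the stated ranges, not the authors' embedded implementation, and the function name is ours:

```python
def classify_rhythm(dominant_freq_cpm: float) -> str:
    """map a dominant frequency in cycles per minute (cpm) to a
    rhythm label, following the ranges given in the text:
    <= 2 cpm bradygastria, >= 4 cpm tachygastria, otherwise normal."""
    if dominant_freq_cpm <= 2.0:
        return "bradygastria"
    if dominant_freq_cpm >= 4.0:
        return "tachygastria"
    return "normal"

print(classify_rhythm(3.0))  # normal
print(classify_rhythm(1.5))  # bradygastria
print(classify_rhythm(5.0))  # tachygastria
```

note that, as in the text, a full dysrhythmia decision also needs the time-domain conditions (duration of the abnormal segment, percentage of recording time), which a frequency-only classifier cannot capture.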
as an alternative, the egg waveform can be considered irregular if the percentage of the power distribution in the bradygastria or tachygastria regions is greater than 20 %. with reference to the signal intensity, the absolute amplitude or power of the egg waveform can be estimated by means of the weighted summation of the gastric waves. differently, the percentage power distribution is obtained by summing the waveform power within each frequency band, dividing by the total signal power of the recording and multiplying the result by 100. typically, a power ratio between the postprandial and fasting signals lower than or equal to 1 may suggest a decreased gastric response to the meal; on the contrary, an increase in the myoelectrical activity of the stomach is expected during digestion. finally, nausea and early satiety are typically causes of gastric dysrhythmia, but this occurrence does not necessarily involve an altered gastric emptying rate.

4. the smart egg sensing system
in this section, the design of the egg sensing system is described. the system has been designed according to the measurement system design model in [28]. through an algorithm, the smart system is able to extract information from the egg signal according to the criteria reported in table 2. it is a smart and patient-adaptive system which can detect gastrointestinal motility disorders. the system has a microcontroller architecture in order to manage the data flow and the data processing.

figure 1. egg waveform.

three cutaneous ag/agcl electrodes are used to acquire and record the gastric myoelectrical signals [32]. the electrodes must be placed on the anterior abdominal wall over the stomach (epigastrium antrum). it is preferable that specialized medical staff perform the electrode placement. however, for further information, a brief description of the procedure is provided here.
since the stomach is located near the lower end of the rib cage, it is helpful to divide it into three zones in order to suitably place the electrodes: the fundus (upper region), the stomach body or middle, and the pylorus (end of the stomach), see figure 2. two electrodes must be placed under the ribs in proximity of the fundus and mid corpus of the stomach (along the antral axis); the third one, the ground reference, is placed at the end of the stomach (see the black circles in figure 3). this configuration allows the signal-to-noise ratio to be maximized. since the electrogastrographic signal has a relatively low amplitude, it is amplified before the processing stage by means of an analog devices ad524 amplifier. according to the frequency range (1 − 15 cpm), a band-pass filter with cut-off frequencies of 0.010 hz and 0.3 hz has been used to eliminate frequency components lower than 1 cpm and higher than 15 cpm. in this way, it is possible to remove the baseline drift and to exclude signals from other sources: possible myoelectrical interferences can be due to the heart, colon and small intestine. further interferences or artefacts are due to breathing, movements or electrical noise. these artefacts commonly have frequency components lower than 1 cpm (motion artefacts) and higher than 9 cpm (respiratory artefacts). such overlapping signals could cause an erroneous estimation of the signal amplitude and dominant frequency. therefore, the filter has been carefully designed to reject both the myoelectrical contributions of other organs and the artefacts and noise. the signal is sampled at a sampling frequency of 2 hz and is subsequently processed by a discrete wavelet transform [33], [34]. by means of spectral analysis, it is possible to estimate the power and amplitude of the signal frequency components [35], [36].
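as an illustration of the acquisition parameters above (2 hz sampling, 1 − 15 cpm band of interest), the dominant frequency of a recording can be estimated from the magnitude spectrum. the following stdlib-only sketch uses a direct dft restricted to the egg band on a synthetic 3 cpm slow wave; the paper itself uses a discrete wavelet transform for its power spectral analysis, so the dft here is only a simple stand-in, and all names are ours:

```python
import cmath
import math

FS = 2.0  # sampling frequency in hz, as used by the system

def dominant_frequency_cpm(samples: list, fs: float = FS) -> float:
    """return the dominant frequency (in cpm) inside the 1-15 cpm
    egg band, by picking the largest-magnitude dft bin in that band."""
    n = len(samples)
    k_lo = max(1, round(1.0 / 60.0 * n / fs))          # bin of 1 cpm
    k_hi = min(n // 2, round(15.0 / 60.0 * n / fs))    # bin of 15 cpm
    best_k, best_mag = k_lo, -1.0
    for k in range(k_lo, k_hi + 1):
        x_k = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                  for i, s in enumerate(samples))
        if abs(x_k) > best_mag:
            best_k, best_mag = k, abs(x_k)
    return best_k * fs * 60.0 / n  # bin index -> hz -> cpm

# synthetic 10-minute record sampled at 2 hz: a 3 cpm (0.05 hz) slow wave
t = [i / FS for i in range(int(10 * 60 * FS))]
wave = [math.sin(2 * math.pi * 0.05 * ti) for ti in t]
print(dominant_frequency_cpm(wave))  # -> 3.0
```

with the 2 hz sampling rate of the paper, a 10-minute record gives a frequency resolution of 2/1200 hz, i.e. 0.1 cpm, fine enough to separate the bradygastria, normal and tachygastria bands.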
the waveform analysis in the time and frequency domains allows the system to get information on the power trends as a function of frequency and/or time. in this way, it is possible to characterize the arrhythmias of the myoelectrical signal. specific memory devices are used to store information concerning the metrological characteristics of the system and the patient's clinical history. in detail, information on the measurement uncertainty and calibration curve is stored in a first memory device in order to estimate the reliability of the measurement results. in addition, a further writable and readable storage device stores private and medical data concerning the case history of the patient in order to improve the diagnosis reliability. figure 4 shows the flow diagram of the egg signal processing. the amplification and filtering blocks allow the system to perform a preliminary pre-processing of the input signal. the filtering stage rejects the noise and artefact signals overlapped with the egg waveform. two amplification stages are used to amplify the voltage levels. once the electrodes are properly placed, the system performs a fasting recording 30 minutes long and a postprandial recording lasting 60 minutes. subsequently, the acquired signals are processed according to the diagnostic model [24]. abnormalities in the egg record can be characterized by considering the power vs. frequency trend. gastrointestinal motility disorders cause, in fact, arrhythmias: frequency components above the normal range indicate tachygastria; frequency components below the normal range indicate bradygastria; if several frequency contributions arise, they indicate dysrhythmia; furthermore, the signal power may fail to increase during the postprandial recording. consequently, five patterns are considered: i) normal egg; ii) bradygastria; iii) tachygastria; iv) dysrhythmia; v) lack of postprandial power increase.
the analysis in section 3 has allowed us to define specific diagnostic parameters which are representative of the features of the gastric myoelectrical signal, see table 1. these parameters provide a complete description of the egg waveform in terms of spectral and power analysis. a basic requirement is that tdf, f_tdf and p_tdf have a time duration higher than 5 minutes.

figure 2. sections of the stomach.
figure 3. egg electrode placement.
figure 4. flow diagram of the egg signal path: sensing device, filtering, two-stage amplification, a/d converter, data processing, metrological status memory, patient case-history memory.

firstly, the embedded algorithm allows the system to estimate the dominant frequency (df), the recording time of the dominant frequency (tdf) and the associated amplitude and power distribution (pdf). in this way it is possible to verify the rhythm of the egg waveform. in detail, the system estimates the fasting and postprandial dominant frequencies (f_df, p_df), their recording times (f_tdf, p_tdf, f_t, p_t) and their amplitude and power distributions (f_pdf, p_pdf). subsequently, the ratio of the postprandial to the fasting power (rdf) is evaluated in order to assess the occurrence of a decreased gastric response to the meal. in order to verify the possible occurrence of dysrhythmias, the recording time and power distribution of the egg frequencies in/above/below the 2 − 4 cpm interval are computed (t3f, p_t3f, f_t3f, p3f). the recording times and power distributions of the tachygastria and bradygastria ranges (ttf, ptf, tbf, pbf) are subsequently estimated. finally, the percentages of the power distribution in the three frequency ranges (%p3f, %ptf, %pbf) can be obtained by estimating the weighted summation of the power contributions divided by the total power. these last parameters allow us to characterize the presence of tachygastria and bradygastria patterns.
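the percentage power distributions and the postprandial-to-fasting power ratio described above can be sketched as follows; this is a minimal illustration with hypothetical band powers in arbitrary linear units (the paper's exact weighting scheme is not detailed here, and all names are ours):

```python
def percent_power(band_powers: dict) -> dict:
    """percentage of the total power falling in each frequency band:
    band power divided by total power, times 100."""
    total = sum(band_powers.values())
    return {band: 100.0 * p / total for band, p in band_powers.items()}

def power_ratio(postprandial_power: float, fasting_power: float) -> float:
    """rdf: postprandial-to-fasting power ratio; a value <= 1 may
    suggest a decreased gastric response to the meal."""
    return postprandial_power / fasting_power

# hypothetical band powers (arbitrary linear units) of one recording
bands = {"bradygastria": 10.0, "normal_2_4_cpm": 80.0, "tachygastria": 10.0}
print(percent_power(bands))    # the normal band holds 80 % of the power
print(power_ratio(12.0, 8.0))  # 1.5 -> expected postprandial increase
```

since table 1 lists the power quantities in dbm, the percentages would have to be computed on the corresponding linear powers, not directly on the logarithmic values.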
once the previous diagnostic parameters have been estimated, the smart egg sensing system verifies the presence of possible irregular gastric activities. to this aim, each parameter is compared with the homologous reference value of a normal egg record, described in section 3. five alternative diagnoses (normal egg, bradygastria, tachygastria, dysrhythmia and lack of postprandial power increase) are available. specific conditions on time, frequency and power must be satisfied simultaneously in order to make a specific diagnosis. table 2 summarizes the embedded diagnosis criteria.

table 1. defined diagnostic parameters.
df (cpm): dominant frequency
tdf (s): recording time of dominant frequency
pdf (dbm): power distribution of dominant frequency
f_df (cpm): fasting dominant frequency
f_tdf (s): recording time of fasting dominant frequency
f_pdf (dbm): power distribution of fasting dominant frequency
f_t (s): fasting recording time
p_df (cpm): postprandial dominant frequency
p_tdf (s): recording time of postprandial dominant frequency
p_pdf (dbm): power distribution of postprandial dominant frequency
p_t (s): postprandial recording time
rdf: ratio of postprandial to fasting power of df
t3f (s): recording time of [2-4] cpm frequency range
p_t3f (s): recording time of postprandial [2-4] cpm frequency range
f_t3f (s): recording time of fasting [2-4] cpm frequency range
p3f (dbm): power distribution of [2-4] cpm frequency range
ttf (s): total recording time of tachygastria frequency
ptf (dbm): power distribution of tachygastria frequency
tbf (s): total recording time of bradygastria frequency
pbf (dbm): power distribution of bradygastria frequency
%p3f: percentage of power distribution of [2-4] cpm frequency range
%ptf: percentage of power distribution of tachygastria frequency
%pbf: percentage of power distribution of bradygastria frequency

table 2. diagnosis criteria.
each alternative diagnosis is defined by a time criterion, a frequency criterion (in cpm) and a power criterion:
normal: 100 · p_t3f / p_t > 75 and 100 · f_t3f / f_t > 75; 2 < df < 4 and 2 < f_df < 4 and 2 < p_df < 4; p_pdf > f_pdf, p3f > ptf, p3f > pbf and %p3f > 75 %
bradygastria: 100 · tbf / (f_t + p_t) > 30; df < 2 or f_df < 2 or p_df < 2; %pbf > 20 %
tachygastria: 100 · ttf / (f_t + p_t) > 30; df > 4 or f_df > 4 or p_df > 4; %ptf > 20 %
dysrhythmia: 100 · p_t3f / p_t < 25 and p_t − p_t3f > 300 s; variable df; pdf > p3f
lack of postprandial power increase: p_tdf > 300 s; rdf ≤ 1

5. discussion
the previous parameters and the diagnosis criteria have been characterized by analysing the egg records of healthy individuals [24]. standard egg signals have been considered to define the normal range of each parameter. the design of the egg sensing system embeds such diagnosis criteria, which have been validated by means of simulations. preliminary tests have been carried out by using standard egg records in order to verify the possible occurrence of false-positive diagnoses. the egg records have been generated in the laboratory by means of an arbitrary waveform generator. twenty cases have been considered. in all cases, the system has properly detected the absence of arrhythmias, in compliance with the expected behaviour. further tests have been performed to verify the system's capability to reject artefacts. motion and respiratory artefacts have been reproduced and added to a normal egg record; the noisy signal has been processed and the artefacts have been properly removed, recovering the initial egg signal. additional simulations have been performed in order to test the sensitivity of the system. egg records with gastric disorders have been generated in the matlab environment to verify the degree to which the embedded numerical algorithm responds to slight changes in the diagnostic parameters. each pattern occurrence has been reproduced in order to prove the effectiveness and accuracy of the diagnosis results.
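the table 2 rules above can be sketched as a small decision function; this is our own illustrative encoding of a subset of the criteria (parameters passed as a plain dict, percentage parameters prefixed pct_), not the authors' embedded implementation:

```python
def diagnose(p: dict) -> str:
    """apply a subset of the table 2 criteria to the estimated
    diagnostic parameters and return one of the five diagnoses.
    times in seconds, frequencies in cpm, percentages in percent."""
    # lack of postprandial power increase: p_tdf > 300 s and rdf <= 1
    if p["p_tdf"] > 300 and p["rdf"] <= 1:
        return "lack of postprandial power increase"
    total_t = p["f_t"] + p["p_t"]
    # bradygastria: > 30 % of total time below range, df < 2, %pbf > 20 %
    if 100 * p["tbf"] / total_t > 30 and p["df"] < 2 and p["pct_pbf"] > 20:
        return "bradygastria"
    # tachygastria: > 30 % of total time above range, df > 4, %ptf > 20 %
    if 100 * p["ttf"] / total_t > 30 and p["df"] > 4 and p["pct_ptf"] > 20:
        return "tachygastria"
    # normal: > 75 % of both recordings in the 2-4 cpm band, 2 < df < 4
    if (100 * p["p_t3f"] / p["p_t"] > 75 and
            100 * p["f_t3f"] / p["f_t"] > 75 and
            2 < p["df"] < 4 and p["pct_p3f"] > 75):
        return "normal"
    return "dysrhythmia"

healthy = {"p_tdf": 3000, "rdf": 1.6, "f_t": 1800, "p_t": 3600,
           "tbf": 100, "ttf": 120, "df": 3.0,
           "pct_pbf": 5, "pct_ptf": 6, "pct_p3f": 89,
           "p_t3f": 3300, "f_t3f": 1650}
print(diagnose(healthy))  # -> normal
```

the sketch checks only the dominant frequency df against each band, whereas table 2 also admits the fasting and postprandial dominant frequencies (f_df, p_df) as alternative frequency conditions.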
then, the signals have been generated by using an arbitrary waveform generator; therefore, the experimental results do not regard specific patients, but are the outcome of simulations. the system response has been observed. in detail, the egg waveforms have been generated starting from standard waves, and the single diagnostic parameters have been modified with progressive percentage deviations. in this way, it has been possible to characterize the sensitivity of the system. the presence of gastric arrhythmias has been detected in the presence of deviations above 7 % of the reference values in table 2. the egg system has shown a good capability to detect each pattern; the total sensitivity was above 94 %.

6. conclusions
in this paper, the electrogastrographic (egg) technique is proposed to record and process the myoelectrical signals of the stomach activity. several studies in the literature show an interesting relevance of electrogastrography for diagnosing gastric disorders. although the egg technique is well known, several issues are still unresolved, so egg is, to this day, an experimental practice. the authors propose the design of a smart egg sensing system in order to overcome the current limitations of the technique and improve its relevance and evidence. the system has been designed according to the ieee 1451 standard. it is able to acquire the egg signal by means of skin electrodes and to process it according to the embedded diagnostic model previously defined by the authors. the diagnostic parameters and their reference values have been characterized by analysing the egg records of healthy individuals. by using the diagnostic model, the smart system is able to assess the occurrence of abnormal myoelectrical activity of the stomach due to gastric pathologies. five alternative diagnoses have been considered: normal egg, bradygastria, tachygastria, dysrhythmia and lack of postprandial power increase. preliminary simulations have shown interesting results.
in detail, the proposed system has been tested by using normal egg records and simulated waveforms, so as to verify its sensitivity and selectivity. the experimental data have shown promising results. the proposed sensing system may be considered a non-invasive tool for diagnosing gastrointestinal motility disorders, as an alternative to invasive techniques such as gastroscopy. at the moment, the research activity is awaiting funding in order to develop the system and carry out experimentation on real case studies. medical trials have been scheduled and the results will be reported in future works.

references
[1] h. m. shamim, g. muhammad, a. alamri, smart healthcare monitoring: a voice pathology detection paradigm for smart cities, multimedia systems 25.5 (2019), pp. 565-575. doi: 10.1007/s00530-017-0561-x
[2] r. morello, c. de capua, l. fabbiano, g. vacca, image-based detection of kayser-fleischer ring in patient with wilson disease, 2013 ieee international symposium on medical measurements and applications (memea). doi: 10.1109/memea.2013.6549715
[3] g. muhammad, m. f. alhamid, x. long, computing and processing on the edge: smart pathology detection for connected healthcare, ieee network 33.6 (2019), pp. 44-49. doi: 10.1109/mnet.001.1900045
[4] j. j. baker, e. scheme, k. englehart, d. t. hutchinson, b. greger, continuous detection and decoding of dexterous finger flexions with implantable myoelectric sensors, ieee transactions on neural systems and rehabilitation engineering, vol. 18, no. 4, 2010, pp. 424-432. doi: 10.1109/tnsre.2010.2047590
[5] byung woo lee, chungkeun lee, jinkwon kim, jong-ho lee, comparison of conductive fabric electrode with electromyography to evaluate knee joint movement, ieee sensors journal, vol. 12, no. 2, 2012, pp. 410-411. doi: 10.1109/jsen.2011.2161076
[6] g. imperatori, p. cunzolo, d. cvetkov, d. barrettino, wireless surface electromyography probes with four high-speed channels, ieee sensors journal, vol. 13, no. 8, 2013, pp. 2954-2961.
doi: 10.1109/jsen.2013.2260145
[7] john w. arkwright, neil g. blenman, ian d. underhill, simon a. maunder, nick j. spencer, marcello costa, simon j. brookes, michal m. szczesniak, phil g. dinning, measurement of muscular activity associated with peristalsis in the human gut using fiber bragg grating arrays, ieee sensors journal, vol. 12, no. 1, 2012, pp. 113-117. doi: 10.1109/jsen.2011.2123883
[8] a. lay-ekuakille, p. vergallo, a. trabacca, m. de rinaldis, f. angelillo, f. conversano, s. casciaro, low-frequency detection in ecg signals and joint eeg-ergospirometric measurements for precautionary diagnosis, measurement, vol. 46, issue 1, 2013, pp. 97-107. doi: 10.1016/j.measurement.2012.05.024
[9] s. somarajan, n. muszynski, j. olson, a. comstock, a. russell, l. walker, s. acra, l. bradshaw, the effect of chronic nausea on gastric slow wave spatiotemporal dynamics in children, neurogastroenterology & motility 33.5 (2021): e14035. doi: 10.1111/nmo.14035
[10] h. p. parkman, w. l. hasler, j. l. barnett, e. y. eaker, electrogastrography: a document prepared by the gastric section of the american motility society clinical gi motility testing task force, neurogastroenterol motil, blackwell publishing ltd, 2003, pp. 89-102.
[11] b. pfaffenbach, r. adamek, k. kuhn, m. wegener, electrogastrography in healthy subjects, digestive diseases and sciences journal, springer, vol. 40, issue 7, 1995, pp. 1445-1450. doi: 10.1007/bf02285190
[12] c. varghese, d. a. carson, s. bhat, t. c. l. hayes, a. a. gharibans, c. n. andrews, g.
o'grady, clinical associations of functional dyspepsia with gastric dysrhythmia on electrogastrography: a comprehensive systematic review and meta-analysis, neurogastroenterology & motility (2021), e14151. doi: 10.1111/nmo.14151
[13] report of american gastroenterological association (aga), american gastroenterological association medical position statement: nausea and vomiting, gastroenterology, vol. 120, issue 1, 2001, pp. 261-263.
[14] a. ravelli, gastric motility and electrogastrography (egg), in: h. till, m. thomson, j. foker, g. holcomb iii, k. khan (eds), esophageal and gastric disorders in infancy and childhood, springer, berlin, heidelberg, isbn: 978-3-642-11201-0.
[15] g. gopu, r. neelaveni, k. porkumaran, investigation of digestive system disorders using electrogastrogram, proc. international ieee conference on computer and communication engineering, kuala lumpur, 13-15 may 2008, pp. 201-205. doi: 10.1109/iccce.2008.4580596
[16] f. y. chang, c. l. lu, s. d. lee, g. l. yu, an improved electrogastrographic system in measuring myoelectrical parameters, journal of gastroenterology and hepatology, vol. 13, issue 10, 1998, pp. 1027-1032. doi: 10.1111/j.1440-1746.1998.tb00565.x
[17] j. l. gonzalez-guillaumin, d. c. sadowski, o. yadid-pecht, k. v. i. s. kaler, m. p. mintchev, multichannel pressure, bolus transit, and ph esophageal catheter, ieee sensors journal, vol. 6, no. 3, 2006, pp. 796-803. doi: 10.1109/jsen.2006.874437
[18] a. cysewska-sobusiak, p. skrzywanek, a.
sowier, utilization of miniprobes in modern endoscopic ultrasonography, ieee sensors journal, vol.6, no.5, 2006, pp. 1323-1330. doi: 10.1109/jsen.2006.877985 [19] j. d. z. chen, non-invasive measurement of gastric myoelectrical activity and its analysis and applications, proc. 20th international conference of the ieee engineering in medicine and biology society, vol. 6, hong kong, november 1998, pp. 2802-2807. doi: 10.1109/iembs.1998.746065 [20] r. yoshida; k. takahashi; h. inoue; a. kobayashi, a study on diagnostic capability of simultaneous measurement of electrogastrography and heart rate variability for gastroesophageal reflux disease, proc. ieee sice annual conference (sice), akita, japan, august 2012, pp. 2157–2162. [21] zhang-yong li, chao-shi ren, shu zhao, hong sha, juan deng, gastric motility functional study based on electrical bioimpedance measurements and simultaneous electrogastrography, journal of zhejiang university science b, springer, vol. 12, issue 12, 2011, pp. 983-989. doi: 10.1631/jzus.b1000436 [22] d. a. carson, s. bhat, t. c. l. hayes, a. a. gharibans, c. n. andrews, g. o'grady, c. varghese, abnormalities on electrogastrography in nausea and vomiting syndromes: a systematic review, meta-analysis, and comparison to other gastric disorders. dig dis sci (2021). doi: 10.1007/s10620-021-07026-x [23] panyko, arpád, marián vician, martin dubovský, massive acute gastric dilatation in a patient with anorexia nervosa, journal of gastrointestinal surgery 25.3 (2021), pp. 856-858. doi: 10.1007/s11605-020-04715-2 [24] r. morello, c. de capua, f. lamonaca, diagnosis of gastric disorders by non-invasive myoelectrical measurements, proc. 2013 ieee international instrumentation and measurement technology conference (i2mtc 2013), minneapolis, mn, 6-9 may 2013, pp. 1324-1328. doi: 10.1109/i2mtc.2013.6555628 [25] c. de capua, a. meduri, r. morello, a remote doctor for homecare and medical diagnoses on cardiac patients by an adaptive ecg analysis, proc. 
ieee 4th international workshop on medical measurement and applications (memea 2009), cetraro, italy, may 2009, pp.31-36. doi: 10.1109/memea.2009.5167949 [26] c. de capua, a. meduri, r. morello, a smart ecg measurement system based on web service oriented architecture for telemedicine applications, ieee transactions on instrumentation and measurement, vol. 59, issue 10, 2010, pp. 2530-2538. doi: 10.1109/tim.2010.2057652 [27] c. de capua, a. battaglia, a. meduri, r. morello, a patientadaptive ecg measurement system for fault-tolerant diagnoses of heart abnormalities, proc. 24th ieee instrumentation and measurement technology conference (imtc 2007), warsaw, poland, 1-3 may 2007, pp. 1-5. doi: 10.1109/imtc.2007.379434 [28] r. morello, c. de capua, a measurement system design technique for improving performances and reliability of smart and fault-tolerant biomedical systems, lecture notes in electrical engineering, eds. a. lay-ekuakille, s. c. mukhopadhyay, vol. 75, 2010, pp. 207-217. [29] m. inoue, s. iwamura, m. yoshida, egg measurement under various situations, proc. 23rd international conference of the ieee engineering in medicine and biology society, vol.4, 2001, pp. 3356-3358. doi: 10.1109/iembs.2001.1019546 [30] b. o. familoni, t. l. abell, k. l. bowes, a model of gastric electrical activity in health and disease, ieee transactions on biomedical engineering, vol. 42, issue 7, 1995, pp. 647-657. doi: 10.1109/10.391163 [31] wei ding, shujia qin, lei miao, ning xi, hongyi li, yuechao wang, processing and analysis of bio-signals from human stomach, proc. ieee international conference on robotics and biomimetics (robio) , tianjin, china, december 2010, pp. 769– 772. doi: 10.1109/robio.2010.5723423 [32] j. garcia-casado, j. l. martinez-de-juan, j. l. ponce, noninvasive measurement and analysis of intestinal myoelectrical activity using surface electrodes, ieee transactions on biomedical engineering, vol. 52, no. 6, 2005, pp. 983-991. 
doi: 10.1109/tbme.2005.846730 [33] i. v. tchervensky, r. j. de sobral cintra, e. neshev, v. s. dimitrov, d. c. sadowski, m. p. mintchev, centre-specific multichannel electrogastrographic testing utilizing wavelet-based decomposition, physiological measurement (iop science), vol. 27, no. 7, 2006, pp. 569-584. doi: 10.1088/0967-3334/27/7/002 [34] r. j. sobral cintra, i. v. tchervensky, v. s. dimitrov, m. r. mintchev, optimal wavelets for electrogastrography, proc. 26th international conference of the ieee engineering in medicine and biology society, san francisco, ca, 1-5 september 2004, pp. 329-332. doi: 10.1109/iembs.2004.1403159 [35] s. casciaro, f. conversano, l. massoptier, r. franchini, r. casciaro, a. lay-ekuakille, a quantitative and automatic echographic method for real-time localization of endovascular devices, ieee transactions on ultrasonics, ferroelectrics, and frequency control, vol. 58, n.10, pp. 2107-17, 2011. doi: 10.1109/tuffc.2011.2060 [36] s. urooj, m. khan, a. ansari, a. lay-ekuakille, a. k. salhan, prediction of quantitative intrathoracic fluid volume to diagnose pulmonary edema using labview, computer methods in biomechanics and biomedical engineering, 2011, pp.1-6. 
A low-acceleration measurement using an anti-vibration table with low-frequency resonance

Acta IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, pp. 369-373

T. Shimoda 1, W. Kokuyama 2, H. Nozato 3
1 National Metrology Institute of Japan (NMIJ/AIST), Tsukuba, Japan, tomofumi.shimoda@aist.go.jp
2 National Metrology Institute of Japan (NMIJ/AIST), Tsukuba, Japan, wataru.kokuyama@aist.go.jp
3 National Metrology Institute of Japan (NMIJ/AIST), Tsukuba, Japan, hideaki.nozato@aist.go.jp

Abstract: This paper describes how NMIJ isolates interferometer optics from ground vibration for low-acceleration measurement by installing an anti-vibration table. The vibration isolation system is designed for an accelerometer calibration system, in order to reduce the vibration noise arising from the microtremor and from the reaction of the vibration exciter.
Mitigating the vibration of the optics enables the evaluation of accelerometers at small amplitudes, which is required in aerospace and infrastructure-monitoring applications. In this paper, the vibration transmissibility of the anti-vibration table is measured using a triaxial seismometer, and its benefit to the calibration system is discussed.

Keywords: laser interferometer, anti-vibration table, microtremor, ground noise, low-acceleration measurement

1. Introduction

Low-acceleration measurements are increasingly required in various industrial fields such as aerospace and infrastructure monitoring. The resolution of Earth observation by satellite imaging, one of the main aerospace applications, suffers from micro-vibrations [1]. In an on-orbit satellite, moving instruments such as mechanical gyroscopes or reaction wheels generate micro-vibration, which is typically smaller than 10^-2 m/s^2 [2]. In infrastructure monitoring, continuous measurement of the eigenfrequencies of structures using the microtremor, which is on the order of 10^-3 m/s^2 or less, has been proposed [4], [5]. The demand for such measurements is growing with the aging of much of the infrastructure.

As the basis for these applications, evaluation of accelerometers is essential to verify the reliability of the measurements. A calibration system using a laser interferometer and a vibration exciter is used to determine the response of an accelerometer [6]. In this system, the target accelerometer is vibrated by the exciter, and its displacement is precisely monitored by the interferometer. At low acceleration, the calibration result can suffer from the background noise of the interferometer, which originates from seismic vibration, the self-noise of the interferometer, and so on. Mitigation of such noise is necessary to meet the demands of low-acceleration measurement. For these purposes, we aim to reduce the noise from the microtremor, which typically appears below a few hundred Hz.
As a first step, we evaluate how to reduce the microtremor noise in the low-frequency calibration system at NMIJ [7]. An overview of the microtremor issue and a schematic of the noise reduction are presented in Section 2. In Section 3, experimental results using an anti-vibration table with a low resonant frequency are reported. Conclusions are drawn in Section 4.

2. Seismic noise in a calibration system

2.1. Overview of accelerometer calibration

Figure 1: Schematic of a low-frequency accelerometer calibration system.

Figure 1 shows the schematic of the low-frequency calibration system. The target accelerometer is fixed to a vibration exciter and vibrated at a given frequency and amplitude. At the same time, the displacement of the accelerometer is measured by a laser interferometer constructed on another table. The output signal of the accelerometer is then compared with the measured displacement to evaluate its sensitivity. In this process, the accelerometer responds to the absolute vibration x_a with respect to the inertial frame, while the laser interferometer measures the relative vibration (x_a - x_o) between the optical table and the reflecting surface on the exciter. These two quantities differ by the vibration of the optical table x_o, which originates from the reaction of the exciter and from the microtremor:

x_o = r H_o x_a + H_o x_g ,   (1)

where r denotes the fraction of the reaction, H_o the vibration transmissibility of the optical table, and x_g the microtremor. If the reaction is carefully suppressed, this discrepancy is usually unimportant, because the vibration amplitude of the exciter is sufficiently larger than that of the optical table induced by the microtremor: x_a >> H_o x_g.
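Equation (1) quantifies how much the table motion biases a calibration. A small numeric sketch (the function name and all values are illustrative, not measured ones from the paper):

```python
def relative_error(x_a, H_o, x_g, r=0.0):
    """Relative calibration error from optical-table motion, eq. (1):
    x_o = r*H_o*x_a + H_o*x_g, taken relative to the excitation x_a."""
    x_o = r * H_o * x_a + H_o * x_g
    return x_o / x_a

# A 10^-3 m/s^2 excitation with a comparable microtremor, fully suppressed
# reaction (r = 0) and an isolation factor H_o = 0.01:
print(relative_error(x_a=1e-3, H_o=0.01, x_g=1e-3))  # 0.01, i.e. a 1 % error
```

This illustrates why x_a >> H_o x_g must hold: with no isolation (H_o near 1) the same microtremor would corrupt the measurement completely.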
However, the effect of the microtremor becomes non-negligible in the low-acceleration measurements mentioned in Section 1. Figure 2 presents the amplitude spectral density (ASD) of the optical-table vibration x_o of the current system induced by the microtremor, measured with a broadband triaxial seismometer (Trillium Compact 120s) at NMIJ. The vibration is on the order of 10^-4 to 10^-6 m/s^2/sqrt(Hz).

Figure 2: ASD of the optical-table vibration in the x (green), y (orange), and z (blue) directions, measured at NMIJ, Japan. The self-noise of the seismometer is plotted with the dashed black line. The dashed red line shows 10^-5 m/s^2/sqrt(Hz) for comparison.

Figure 3 shows the ASDs of the interferometer signal, which measures (x_a - x_o), and of the seismometer signal, which measures x_o, when the vibration exciter is turned off. The sum of these two signals corresponds to the noise in determining x_a. The vibration of the optical table dominates the total noise below 10 Hz. Therefore, suppression of this vibration is one of the primary requirements for accelerometer calibration at small acceleration amplitudes.

Figure 3: ASD of the interferometer signal of the low-frequency calibration system (red), the circuit noise of the interferometer (grey), and the optical-table vibration (green). The dashed blue line shows the self-noise of the interferometer, estimated by subtracting the table vibration from the interferometer signal.

Additionally, noise of non-vibrational origin, plotted with the dashed blue line in Figure 3, limits the current performance above 10 Hz. The cyclic error of the interferometer, which originates from the non-linearity of its signal, is a suspected noise source between 10 Hz and 100 Hz, while the electrical circuit noise dominates above 100 Hz. Mitigation of these noise sources is also essential to improve the overall performance.
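The self-noise estimate described in the Figure 3 caption (table vibration subtracted from the interferometer signal) can be sketched as follows, assuming the two contributions are uncorrelated so their ASDs subtract in quadrature; the clamping of negative differences to zero is an assumption of this sketch, not a detail given in the paper:

```python
import math

def self_noise_asd(total_asd, vibration_asd):
    """Estimate the interferometer self-noise ASD by removing the (assumed
    uncorrelated) optical-table vibration from the measured interferometer
    ASD, frequency bin by frequency bin."""
    out = []
    for total, vib in zip(total_asd, vibration_asd):
        diff = total**2 - vib**2
        out.append(math.sqrt(diff) if diff > 0.0 else 0.0)
    return out

# Illustrative numbers only: below 10 Hz the table vibration dominates and the
# residual is small; above 10 Hz the self-noise dominates.
total = [1.0e-4, 2.0e-5]   # m/s^2/sqrt(Hz), interferometer signal
vib   = [9.9e-5, 5.0e-6]   # m/s^2/sqrt(Hz), table vibration
print(self_noise_asd(total, vib))
```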
To determine a 10^-3 m/s^2 amplitude with 1 % accuracy in 1 second, the ASD of the background noise needs to be below 10^-5 m/s^2/sqrt(Hz) at the oscillation frequency. In this paper we focus on the reduction of the vibration noise.

2.2. Low-frequency vibration isolation

A straightforward approach to low-acceleration measurement is to reduce the vibration transmissibility H_o. For a single mass-spring-damper model,

H_o(f) = f_0^2 / (f_0^2 + i f f_0 / Q - f^2) ,   (2)

which is characterized by the resonant frequency f_0 and the quality factor Q. As the equation shows, in an ideally simple system the vibration is suppressed above the resonant frequency in proportion to f^-2. The current table has f_0 of about 7 Hz and Q of about 10. In this case, the microtremor is not sufficiently isolated below 10 Hz, where the vibration noise is dominant. A resonant frequency below 1 Hz is desired to suppress the vibration excess seen in Figure 2 down to 1 Hz.

To achieve such a low resonant frequency, we are installing an anti-vibration system with resonant frequencies of 0.25 Hz in both the horizontal and vertical directions. The system consists of a spring-antispring mechanism, in which the restoring force of the suspension is partially cancelled by the anti-restoring force from gravity or from the elastic part. This enables the low resonant frequency in a relatively small size (~0.7 m); for comparison, a simple pendulum-type isolation system would require a length of 4 m to achieve 0.25 Hz. The optical table in Figure 1 is replaced with the low-frequency anti-vibration system shown in Figure 4.

Figure 4: Schematic diagram of the low-frequency anti-vibration system for the laser interferometer.

As equation (1) shows, suppression of the transmissibility also contributes to the reduction of the reaction of the vibration excitation transmitted through the ground.
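The two parameter sets quoted (the current table with f_0 of about 7 Hz and Q of about 10, and the new system, for which f_0 = 0.3 Hz and Q = 1 are the values fitted in Section 3.1) can be compared directly with equation (2). A minimal sketch:

```python
def H(f, f0, Q):
    """Transmissibility of a single mass-spring-damper stage, eq. (2):
    H_o(f) = f0^2 / (f0^2 + i*f*f0/Q - f^2)."""
    return f0**2 / complex(f0**2 - f**2, f * f0 / Q)

# At 3 Hz (the "few Hz" region where the microtremor noise matters):
print(abs(H(3, 7, 10)))    # > 1: the current table still transmits the microtremor
print(abs(H(3, 0.3, 1)))   # ~ 0.01: roughly the 100x suppression reported later
```

Above resonance the magnitude falls as f^-2, so moving f_0 from 7 Hz down to 0.3 Hz buys roughly (7/0.3)^2 of extra suppression in the band of interest.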
However, it should be noted that the response to external force disturbances such as sound or airflow can be enhanced below the resonant frequency, because the low-frequency system has low stiffness. An excess of low-frequency fluctuation induces alignment fluctuations, which affect the calibration result at higher frequencies. Therefore, environmental disturbances are also an important factor determining the performance.

Figure 5: Vibration measurement on the anti-vibration stage.

3. Evaluation of an anti-vibration system

3.1. Vibration transmissibility measurement

In order to evaluate the performance of the system, a seismometer was placed on the isolated stage, as shown in Figure 5. After the vibration measurement on the stage, the seismometer was moved to the table on which the anti-vibration system stands. The vibration spectra at these two points were compared to estimate the vibration transmissibility of the system.

The measured vibration spectra are presented in Figure 6. The vibration on the isolated stage was almost as expected from the vibration on the table (blue line) and the theoretical vibration transmissibility of equation (2), assuming a resonant frequency f_0 = 0.3 Hz and a quality factor Q = 1. The microtremor is successfully suppressed by a factor of ~100 around a few Hz, and the residual noise level is below 10^-5 m/s^2/sqrt(Hz) above 1 Hz except for the peak at 7 Hz, although the seismometer signal was not measured correctly above 20 Hz because of the data-logger noise.

Figure 6: Performance of the anti-vibration system. The blue and green lines show the spectrum on the table and on the isolated stage, respectively. The orange line is the expected spectrum on the stage. The black dashed and grey lines are the self-noise of the seismometer and of the data logger, respectively.

On the other hand, the vibration below 0.2 Hz was increased on the isolated stage, by about ten times compared with the table.
This excess may originate from external force disturbances, because the anti-vibration system has low stiffness to lower the resonant frequency, as mentioned in Section 2.2. A wind shield is planned to be installed around the anti-vibration system to protect it from airflow and sound. Other possible disturbances, such as tilt fluctuation of the stage or temperature fluctuation, should also be evaluated.

3.2. Expected improvement of the interferometer background noise

As explained in Section 2.1, the interferometer measures (x_a - x_o), and x_o becomes background noise in accelerometer calibration. The total background noise of the interferometer can therefore be estimated as the sum of the vibration spectrum x_o shown in Figure 6 and the interferometer self-noise spectrum shown in Figure 7. Figure 7 presents the estimated noise level of the low-frequency accelerometer calibration system with the anti-vibration system. The noise is expected to be suppressed by a few tens of times between 1 and 10 Hz, where the vibration of the interferometer stage is the dominant background noise. These frequencies are important for the monitoring of infrastructure, which typically has resonance frequencies around 1 Hz. In this frequency range, the noise level will be below 10^-5 m/s^2/sqrt(Hz). On the other hand, the self-noise of the interferometer dominates above 10 Hz, so the vibration isolation will not reduce the noise there. As mentioned in Section 2.1, suppression of the interferometer self-noise is required for further reduction of the background noise, especially above 10 Hz. Alternatively, it may be easier to replace the interferometer with a low-noise commercial product to improve the calibration capability.

4. Conclusion

An anti-vibration table is being installed in the low-frequency calibration system at NMIJ/AIST for low-acceleration measurement.
The table has a low resonant frequency of ~0.25 Hz to isolate the vibration at low frequencies. To evaluate its performance, the vibration transmissibility was measured using a seismometer. The vibration on the table was successfully isolated by a factor of 100 around a few Hz. As a result, the interferometer background noise is expected to be lower than 10^-5 m/s^2/sqrt(Hz) below 10 Hz, which enables the calibration system to determine an acceleration amplitude of 10^-3 m/s^2 with 1 % accuracy in a reasonable time. To extend the frequency range above 10 Hz, reduction of the interferometer noise other than the microtremor is necessary.

Figure 7: Expected improvement of the background noise of the calibration system by using the low-frequency anti-vibration system (thick blue). The contributions from the vibration of the interferometer stage (green) and from the self-noise of the interferometer (orange) are also presented. For comparison, the current noise level is plotted with the dashed blue line.

5. Acknowledgement

This work is partially based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan. The authors thank Tamio Ishigami, Koichiro Hattori, Akihiro Ota, Takashi Usuda, Hiromi Mitsumori and Yoshiteru Kusano (NMIJ) for useful discussions and cooperation.

6. References

[1] K. Komatsu, H. Uchida, "Microvibration in spacecraft", Mechanical Engineering Reviews, vol. 1, no. 2, 2014.
[2] M. Privat, "On ground and in orbit microvibration measurement comparison", Proc. of the 8th European Conference on Spacecraft Structures, Material and Mechanical Testing, 1998.
[3] D. Yu, G. Wang, Y. Zhao, "On-orbit measurement and analysis of the micro-vibration in a remote-sensing satellite", Adv. Astronaut. Sci. Technol., vol. 1, pp. 191-195, 2018.
[4] Y. Ikeda, S. Yoshitaka, S. Yasutsugu, "Damage detection of actual building structures through singular value decomposition of power spectral
density matrices of microtremor responses", AIJ Journal of Technology, vol. 16, no. 32, pp. 69-74, 2010 (in Japanese).
[5] Y. Jiang, Y. Gao, X. Wu, "The nature frequency identification of tunnel lining based on the microtremor method", Underground Space, vol. 1, no. 2, pp. 108-113, 2016.
[6] ISO 16063-11:1999, "Methods for the calibration of vibration and shock transducers. Part 11: Primary vibration calibration by laser interferometry".
[7] W. Kokuyama, T. Ishigami, H. Nozato, A. Ota, "Improvement of very low-frequency primary vibration calibration system at NMIJ/AIST", Proc. of the XXI IMEKO World Congress, Prague, Czech Republic, 2015.

Low-cost, high-resolution and no-manning distributed sensing system for the continuous monitoring of fruit growth in precision farming

Acta IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, pp. 1-11

Lorenzo Mistral Peppi 1, Matteo Zauli 1, Luigi Manfrini 2, Luca Corelli Grappadelli 2, Luca De Marchi 1, Pier Andrea Traverso 1
1 DEI Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi", University of Bologna, 40136 Bologna, Italy
2 DISTAL Department of Agricultural and Food Science, University of Bologna, 40127 Bologna, Italy

Section: Research paper
Keywords: smart farming technologies; smart agriculture; agricultural IoT; autonomous sensor node; LoRa
Citation: Lorenzo Mistral Peppi, Matteo Zauli, Luigi Manfrini, Luca Corelli Grappadelli, Luca De Marchi, Pier Andrea Traverso, Low-cost, high-resolution and no-manning distributed sensing system for the continuous monitoring of fruit growth in precision farming, Acta IMEKO, vol. 12, no.
2, Article 17, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-17.
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received July 11, 2022; in final form February 24, 2023; published June 2023.
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was funded by the "Italian Departments of Excellence" initiative sponsored by the Italian Ministry of University (MIUR).
Corresponding author: Lorenzo Mistral Peppi, e-mail: lorenzomistral.pepp2@unibo.it

1. Introduction

The rise of new technologies makes it possible to approach classical issues with modern, smart solutions. Agriculture is definitely benefiting from these innovations: rapid global development, population growth, climate change and a new awareness of food, its safety, and the impact its production has on the environment have led to terms such as "precision farming", "digital farming" or "agriculture 4.0" becoming more and more widespread and being considered a real added value to the agricultural product. The term precision agriculture (PA) encompasses the many disciplines and technologies employed: artificial intelligence, autonomous control of agricultural equipment, automatic decision-making systems, quality control, application of treatments by drones, and so on, all aimed at increasing productivity and environmental quality [1]. The key element, therefore, is the ability to acquire, transmit and process information in order to monitor crops, to make autonomous decisions through decision-support systems (DSS) (see, e.g., [2]) and/or to provide data about a particular situation so as to allow a decision that is as correct and as spatially and temporally customised as possible [1], [3].
Abstract: Accurate, continuous and reliable data gathering and recording about crop growth and state of health, by means of a network of autonomous sensor nodes requiring minimal management by the farmer, will be essential in future precision agriculture. In this paper, a low-cost multi-channel sensor-node architecture is proposed for the distributed monitoring of fruit growth throughout the entire ripening season. The prototype presented is equipped with five independent sensing elements, each of which can be attached to a sample fruit at the beginning of the season and is capable of estimating the fruit diameter from first formation up to harvest. The sensor-node is provided with a LoRa transceiver for wireless communication with the decision-making central unit, is energetically autonomous thanks to a dedicated energy harvester and a careful design of power consumption, and each measuring channel provides sub-mm, 9.0-ENOB effective resolution with a full-scale range of 12 cm. The calibration procedure of the sensor-node and its elements, which allows for the compensation of temperature dispersion, noise and non-linearities, is described in the paper. The prototype was tested in the field in a real application, in the framework of the research activity on next-generation precision farming performed at the experimental farm of the Department of Agricultural and Food Science of the University of Bologna, Cadriano, Italy.

The advent of IoT [4] networks has further increased the pervasiveness of in-field sensing thanks to the possibility, particularly through LoRaWAN networks, of covering large areas using extremely low-power, adequately reliable and low-cost hardware [5], [6].
In addition, smart devices free farmers for tasks in which human presence is essential, reducing the need for human intervention and therefore lowering overall farm operating costs. However, farmers often perceive new technologies more as a complication than an advantage, mainly because of their inexperience, the lack of interoperability between the various technologies, the complexity, the often very high costs, and the inability to handle such large amounts of data [7], [8]. Consequently, efforts must be made to make these technologies easy for the farmer to install, understand, successfully exploit and maintain.

The monitoring of fruit growth not only helps to estimate the yield [9], [10] but also provides additional information, such as the water-stress status of the plants [11], [12]. In addition, by comparing the data acquired in real time with predictive growth models, corrective actions can be taken in the orchard. The most common techniques for measuring fruit dimensions employ LVDT (linear variable displacement transducer) sensors, strain gauges or potentiometers [13], [14] which, despite being highly accurate, are limited by a usually very small measurable range. Therefore, in order to keep up with the measurement process, frequent re-positioning of the sensor on the fruit is required. This time- and labour-consuming activity has a cost, and therefore an impact on the farmer's income. There are alternative solutions that do not require relocation over time, such as those employing optical sensors directly on the fruit, as in [15], but they usually suffer from low accuracy and the inability to detect fruit shrinkage. More recently, systems for agricultural analysis have been developed using computer vision [16] and artificial-intelligence methodologies [17]. In particular, techniques to estimate the number of fruits and their size [18], [19], [20], [21], [22] have been introduced.
However, the accuracy achievable by these devices is still low compared with traditional techniques.

This work presents the design, implementation and experimental assessment of a low-cost multi-channel distributed sensing system aimed at the high-resolution, continuous estimation and recording of the diameter of fruits directly on the tree throughout the whole growth process. The system is non-invasive, does not damage or warp the fruits, is energetically autonomous and is largely insensitive to environmental conditions. Relocation of the sensing elements during the ripening season is not needed either, unlike with the small-range sensors used so far, thanks to a full-scale range greater than the size of the fruit (e.g., apple, orange) once it has reached its typical ripe dimension. In comparison with the preliminary work presented in [23], in which a first, single-channel basic prototype was shown to illustrate the concept, a complete multi-channel sensor-node solution is presented here, with the addition of an energy-harvesting section able to guarantee season-long autonomous operation and of a LoRa transceiver to allow data communication even at a far distance from the receiving gateway. The full calibration of the multi-channel system was refined and is described step by step in detail. Two in-field measurement campaigns are also presented, carried out in an apple orchard at the experimental farm of the Department of Agricultural and Food Science of Bologna, Cadriano, Italy.

The paper is organized as follows. In Section 2 the operating principle of the device is explained and the system architecture is shown. In Section 3 all the main components of the multi-channel sensor-node are described in detail, with particular emphasis on the architectural/design solutions exploited to maximize final reliability and accuracy. Section 4 describes the refined full calibration procedure of the prototype.
finally, in section 5 results of in-field tests, conducted on an apple orchard, are shown. 2. system architecture and operating principle unlike the conventional methods proposed in literature, where the growth of the fruit determines a stimulus directly detected by a sensitive element (potentiometer, lvdt, etc.), in this application the i-th sensing element (i.e., the front-end of the i-th measuring channel) is made of a structure of known dimensions. the structure is composed of two solid arms bounded together at one end with a bold (figure 1 for a fivechannel sensor-node). it is used to interface the object to be measured to the element itself. the plier is kept in place by means of a spring while a reference voltage-supplied (vref) potentiometer, which is placed within the fulcrum of the plier and is rigidly connected to one of the two arms of the clamp, converts the opening angle α into a voltage acquired by the analog-to-digital converter (adc) integrated into the microcontroller unit (mcu) governing the overall node. the voltage at the output of each sensing element is proportional to the opening angle of the clamp, since the partition ratio of the potentiometer is directly proportional to α, and the linear width d that represents the measurand is indirectly obtained from the a/d acquired voltage according to (1): 𝑑 = 2 𝑅 ⋅ cos ( π 2 − 𝛼(𝑉out) 2 ) . (1) the task of the mcu is to trigger readings, to perform all the in-site calibration and compensation processes on the acquired raw data and to manage storage and/or wireless transmission of data about the fruit growth evolution to a remote decisional unit. figure 1. architecture of the sensor-node, with details of one of the five sensing elements. acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 3 in addition, the mcu also manages the power supply timing of potentiometers, ensuring minimum power consumption of the entire sensor-node. 3. 
3. sensor-node prototype in this section, the main components that were exploited in the realization of a prototype for a complete five-sensing-element sensor-node are described, together with their characteristics and related design strategies. 3.1. abs pliers the plier in each sensing element consists of two arms, which are bolted together. a rotary potentiometer is located in a recess within the pivot, to sense the opening angle α between the two arms. the plier acts as an adapter that allows the full scale for the width d to be easily optimized during the design phase, while preserving the same angular full scale. more precisely, to vary the maximum measurable distance dfs, given the same maximum opening angle, it is only necessary to re-scale the effective length r of the plier arms. after preliminary tests with different materials, the plier was manufactured using acrylonitrile butadiene styrene (abs), through a 3d printing process. the use of abs makes it possible to obtain a structure that is robust but at the same time lightweight: in fact, the warping of fruits due to an excessively heavy plier, which would lead to a distorted estimated growth profile, must be avoided at all costs. in order to avoid any interference with the growth of the fruit, a "half horseshoe" shape (shown in figure 2) was chosen for the arms, while the mechanical elements that rest on the fruit are cup-shaped to increase the contact surface, their edges being coated with silicone to prevent slipping. on the side of the cups, slits are located to prevent the accumulation of moisture and the consequent possible development of fungal diseases. the dimensional parameters of the plier are necessarily customized according to the species of fruit, in order to allow the entire ripening process to be monitored, optimize the nominal resolution and maximize the overall accuracy of each channel.
the size of these prototype calipers is such as to allow for the monitoring of apple growth; therefore, they have been designed to offer a full-scale range of dfs = 12 cm, starting from the requirement of a maximum opening angle of αfs = 60 deg (see below), which resulted in an effective length r = 12 cm. 3.2. angular sensor as angular sensor, a single-turn potentiometer with an electrical angle of 60 deg was chosen. the potentiometer, connected as a voltage divider, is supplied by a reference voltage vref shared with the adc vref in order to minimize gain errors. in such a configuration the output voltage vout of the sensing element is proportional to the opening angle, and thus algebraically related to the measurand d according to (1). the phs11-1dbr10ke60 [24], from tt electronics, was the commercial component adopted. although tests conducted over a period of months have not shown any particular weathering issues for the potentiometer, in order to protect it from dust, dirt and moisture it was coated with a layer of water-resistant insulating grease. 3.3. mcu, adc and temperature sensor for this application a microcontroller with low power consumption and integrated a/d channels was required. at first, cmwx1zzabz-091 murata modules were selected. this device integrates, together with an stm32l072 mcu, a semtech sx1272 lora transceiver, freeing the designer from problems related to the management of rf circuits. however, component supply shortages related to the 2020-22 pandemic forced the use of an stmicroelectronics nucleo64 demo board [25] equipped with an stm32l152re mcu, whose power supply circuit was exploited and whose pin headers were used as the interface to the five potentiometers and the sd card for local data storage. the mcu can be easily integrated with a lora module (e.g., mbed sx1272mb2das) as well. the stm32l152re is a 32-bit microprocessor incorporating a 12-bit successive approximation adc, spi and i2c peripherals, and featuring low power modes [26].
as already mentioned, thanks to the presence of pmos devices acting as power switches, the same reference voltage used by the a/d stage was employed to supply each potentiometer: in this way it was possible to take advantage of the entire full-scale range of the adc and, at the same time, compensate for both short- and long-term dispersion effects of the reference voltage. by adopting 3.3 v as vref, the nominal quantization step of the adc is 806 µv, which is approximately equivalent to a nominal resolution of 30 µm for each sensing channel. the mcu makes internally available a temperature sensor, which was used for thermal compensation in the framework of the real-time overall calibration procedure (section 4) implemented for each channel. during the production of the mcu this sensor is calibrated, and the calibration coefficients are made available within read-only portions of the memory [26]. nonetheless, a comparison was made for performance assessment between this sensor and a reference one (namely, the hts221 from stmicroelectronics) characterised by an accuracy of 1.0 °c in the 0 °c to 60 °c range [27]. figure 2. 3d rendering of the plier designed for the realization of each sensing element. the maximum estimated deviation between the two sensors in the 24 °c to 72 °c range was 1.34 °c, which allowed the accuracy of the internal mcu sensor to be considered adequate for the thermal compensation cited above. 3.4. power supply the voltage provided by the battery and harvester (see next subsections) had to be adjusted down to 3.3 v, a level compatible with the operation of the mcu and peripherals. to this aim, the ld39050pu33r ldo from stmicroelectronics available in the nucleo board has been used, which allows for a stable, low-noise power supply.
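the nominal figures quoted above (806 µv quantization step, ≈30 µm resolution) can be cross-checked with a short sketch; the worst-case sensitivity of (1) with respect to vout is assumed to occur at α = 0.

```python
import math

# cross-check of the nominal figures quoted in the text: 12-bit ADC with
# Vref = 3.3 V, plier geometry R = 12 cm and alpha_fs = 60 deg.
V_REF = 3.3
ADC_BITS = 12
R = 0.12
ALPHA_FS = math.radians(60.0)

lsb = V_REF / 2 ** ADC_BITS          # nominal quantization step, ~806 uV

# worst-case sensitivity of d = 2R*cos(pi/2 - alpha/2) w.r.t. vout,
# with alpha = (vout/V_REF)*ALPHA_FS; it is maximal at alpha = 0:
sens_max = R * ALPHA_FS / V_REF      # m per volt
res = sens_max * lsb                 # nominal resolution, ~30 um
```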
this device is characterized by very good performance in terms of output voltage noise (30 µvrms over the 10 hz to 100 khz bandwidth) and thermal stability [28]. in order to switch off the supply to the potentiometers and to the sd card when not in use, pmos devices acting as switches are used, as described in subsection 3.7. 3.5. data storage and communication the lora transmitter uses an mbed sx1272mb2das [29] demo board, fitted with a semtech sx1272 [30] transceiver. all rf-related circuitry is relegated to this board, which is connected to the nucleo64 via a set of arduino-compatible connectors. the sx1272 transceiver is designed to operate in the 868 mhz frequency band, and it communicates with the microcontroller via an spi interface [30]. in addition, dio lines are used by the semtech loramac-node stack to interface with the transceiver. the power supply line of this board is connected directly to the nucleo board to exploit the power supply stage of the latter. as an alternative to sending data over the air, they can be saved locally by means of an sd card (figure 1). 3.6. energy harvester to ensure maintenance-free operation of the sensor-node during the entire fruit growth period, it is essential to rely on a low-cost source of energy that does not run out in a short interval of time. by considering consumption estimates and planned duty cycles, it is possible to determine the most suitable energy source to ensure high system autonomy. a single acquisition cycle, consisting of interrogating all five sensing elements every 600 s, processing and storing the data on the sd card (local storage is considered in the following discussion), requires a total of 1.63 j, such an estimation being performed by averaging data from different sd card manufacturers [31].
as an example, with a 500 mah battery the operating time of the prototype would be around 25 days, without taking into account self-discharge phenomena, while for many fruit species the growth and ripening times are much longer [9], [32]. therefore, such a local battery is not suitable for a virtually maintenance-free node. increasing the battery capacity is not practical either: at the beginning of the season all the batteries would have to be charged, and if the number of nodes is high many chargers would be needed, which could represent a significant cost in terms of price and required manpower, even if wireless power transfer were used, such as in [33]. for this reason, the adoption of a photovoltaic harvesting sub-system, with a small rechargeable back-up battery, was considered the optimal solution. this avoids the use of external chargers or non-rechargeable batteries, which are by their nature also a major source of pollution. as will be described in the following, when the photovoltaic panel is exposed to full sunlight it is possible to keep the back-up battery fully charged. however, in the case of indirect exposure to light, such as in greenhouse applications, recovery of the energy consumed by the sensor-node is not always guaranteed. for this reason, the battery voltage level is monitored and forwarded every time a data transmission takes place. for the prototype a steval-isv012v1 evaluation board from stmicroelectronics was used. included in the evaluation board are a 400 mw-peak photovoltaic panel, namely the szgd60604p from nbszgd, an spv1040 step-up converter with mppt (maximum power point tracking) and an l6924d charge controller for li-ion batteries, both from stmicroelectronics [34].
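the roughly 25-day autonomy estimate above can be reproduced with a short sketch; the 3.3 v nominal battery voltage used to convert mah into joules is an assumption of this sketch.

```python
# reproduction of the autonomy estimate in the text: 1.63 J per
# acquisition cycle, one cycle every 600 s, ideal 500 mAh battery;
# the 3.3 V nominal battery voltage is an assumption of this sketch.
E_CYCLE = 1.63                  # J per acquisition cycle
T_CYCLE = 600.0                 # s between cycles
BATTERY_MAH = 500.0
V_BAT = 3.3                     # assumed nominal battery voltage, V

e_batt = BATTERY_MAH * 1e-3 * 3600.0 * V_BAT    # stored energy, J
cycles_per_day = 24 * 3600 / T_CYCLE            # 144 cycles per day
days = e_batt / (cycles_per_day * E_CYCLE)      # ~25 days of autonomy
```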
the use of a step-up converter with an mppt algorithm makes it possible to continuously track the output voltage and current of the solar cell and therefore to maintain its maximum power point under changing lighting and load conditions, thus maximizing the harvested energy and increasing the overall efficiency of the stage. tests carried out in the open field, at the farm of the faculty of agriculture of the university of bologna (cadriano, bologna) between june and august 2021, have shown how this solution is able to ensure a sufficient power supply for the prototype when used outdoors: the harvester is able to keep the battery fully charged during the day, while during the night the discharge is minimal, as can be appreciated in figure 3. it is worth noting that the test was carried out in rows protected by rain-hail nets and with the photovoltaic panel oriented in the east direction. figure 3. battery voltage of the energy harvester during open field tests. it is worth noting that the battery is always fully charged, and even during bad weather days no significant battery discharge is noticeable. 3.7. power gating when a measurement is performed, the potentiometers are powered by a reference voltage. without power gating, the energy wasted during quiescent operation would be much higher than that needed to keep the mcu in stand-by: a current of 330 µa would flow continuously in each one of the 5 potentiometers, and the overall consumption would not be negligible. to implement the power gating function pmos devices are used: a low logic value at the gate pin allows the pmos to start conducting, powering the potentiometers. by using a pmos, it is not required to force a high logic value at the gate to keep the load disconnected: with this configuration the i/o interfaces of the mcu can be disabled, thus saving energy and leaving the gpio pins floating while the potentiometers remain unpowered.
this is due to the fact that, by inserting a resistor rg (figure 4) of appropriate value between gate and source, the device will always remain off, as vgs = vrg ≈ 0 v because igate ≈ 0 a. even a high value of rg keeps the device off and allows, when vgs < 0 v or when the pmos is conducting, a negligible amount of energy to be dissipated over rg. a key factor in the choice of the mos is the low rdson of the device: the lower its value, the lower the influence of this parameter on the final measurement. this aspect will be discussed in more detail in section 4.2. the commercial device chosen is the dmp2200udw pmos, from diodes incorporated. this device has two extremely low on-resistance [35] (spice simulations show an on-resistance of 207 mω at 27.5 °c at the actual operating point) pmos transistors placed within the same package. each mos is used to activate two potentiometers at a time: in this way, it is possible to power only those potentiometers whose output must be acquired, thus resulting in an overall lower power consumption. 4. real-time calibration procedure several sources of uncertainty, mainly related to the variations of the temperature in which the system will operate, have been identified in the uncalibrated prototype. in an open field or greenhouse, a temperature range of at least a few tens of degrees has to be considered. furthermore, this system must be able to operate in the most varied conditions, in direct sunlight and during all months of the year. therefore, temperature stability is essential and temperature dispersion must be carefully compensated. in addition, the real-time calibration procedure described in subsections 4.2, 4.3 and 4.4 and implemented in the prototype also compensates for short-term instability (e.g., noise) and non-linearities affecting each sensing channel (figure 5).
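the negligible influence of the rdson dispersion (quantified in subsection 4.2) can be checked numerically; the 10 kω potentiometer value is inferred from the phs11-1dbr10ke60 part number and should be treated as an assumption of this sketch.

```python
# numeric check of the pMOS switch contribution: each half of the
# dmp2200udw powers two potentiometers in parallel from Vref; the
# 10 kOhm potentiometer value is an assumption inferred from the
# phs11-1dbr10ke60 part number.
V_REF = 3.3
R_POT = 10e3                 # assumed potentiometer resistance, Ohm
LOAD = R_POT / 2.0           # two potentiometers in parallel
D_RDSON = 11.9e-3            # simulated Rds(on) dispersion, Ohm (15-50 degC)

dv = V_REF * D_RDSON / (LOAD + D_RDSON)   # worst-case supply error, ~7.9 uV
lsb = V_REF / 2 ** 12                     # ADC quantization step, ~806 uV
```

the supply error stays two orders of magnitude below the quantization step, which is why the text treats it as negligible.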
4.1. a/d acquisitions and data averaging for a given sensing channel, in order to obtain a single fruit size reading the adc is activated in continuous mode every 600 s to acquire m sequences of n samples each of the voltage vout at the output of the potentiometer in the channel. an estimator V̄R is then computed as the average of the m ∙ n samples V_{R,n}^{(m)}, where (m, n) denote the m-th sequence and the n-th sample in the sequence, respectively: V̄R = (1/(M · N)) · Σ_{m=1}^{M} Σ_{n=1}^{N} V_{R,n}^{(m)} . (2) the estimator in (2) could be used directly as the vout value in (1) to compensate for short-term instabilities (mainly noise) in the a/d conversion process. however, the procedure would not take into account thermal dispersion and non-linearities, thus additional correction steps were implemented, as discussed in the next subsections. a trade-off between accuracy in rejecting short-term instability and power consumption needs to be considered when choosing the values of n, m and the sampling period. indeed, the higher the number of samples and the longer the sampling time, the higher the adc consumption will be to obtain a single reading from the channel. to assess the quality of the conversion, the standard deviation of the estimator V̄R was evaluated while varying the parameter m and the sampling period (maintaining n = 250, which represents the lowest number of samples that can be read in a single-shot acquisition) in the range between 16 and 192 adc clock periods, with an adc clock frequency of 16 mhz. the best trade-off between power consumption and quality of the averaged reading V̄R was found for m = 10 and a sampling period of 96 adc clock periods. indeed, with this configuration the standard deviation of the estimator in (2) is equal to 0.02 lsb, thus negligible compared to the other sources of uncertainty in the channel. figure 4. schematic of the pmos switch stage. figure 5. stages of the overall real-time calibration procedure implemented for each channel in the prototype.
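the estimator of (2) amounts to a plain average over m ∙ n samples; a minimal sketch with synthetic gaussian noise standing in for the adc data:

```python
import random

# minimal sketch of the averaging estimator of (2): M sequences of
# N samples are averaged into a single reading; synthetic gaussian
# noise stands in for the ADC samples.
M, N = 10, 250

def average_reading(sequences):
    # V_R = (1/(M*N)) * sum_m sum_n V_{R,n}^{(m)}
    assert len(sequences) == M and all(len(s) == N for s in sequences)
    return sum(sum(s) for s in sequences) / (M * N)

rng = random.Random(0)
data = [[1.65 + rng.gauss(0.0, 0.005) for _ in range(N)] for _ in range(M)]
v_r = average_reading(data)   # close to the synthetic 1.65 V level
```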
4.2. compensation for thermal dispersion a climatic chamber was exploited to carry out an extensive experimental investigation on the dispersion of the prototype sensor-node channel response for temperature variations in the interval from 20 °c to 70 °c. during each test, only some parts of the sensor-node were exposed to the temperature cycles, while all the other blocks were kept in constant environmental conditions, in order to separately characterize the sources of uncertainty due to thermal variations. these tests included thermal cycles on the nucleo64 board (i.e., affecting mainly the adc and the reference voltage generator), the potentiometers, and the abs pliers. final tests were then performed on the entire multi-channel system. as far as the potentiometers are concerned, no significant thermal dispersion was observed, except for some very small variations probably caused by thermal inertia. the adc and reference voltage generator showed instead, as expected, a significant dispersion, even though the dispersion of the adc vref is inherently compensated, at least for the most part, by the adopted strategy of using it to also supply the potentiometers. for a given sensing channel, d̄R is defined as the estimator obtained by inserting V̄R as the adc-averaged reading for vout into (1). the dispersion of d̄R due to temperature variations affecting the adc, the reference voltage generator and, from a general standpoint, all the electronic hardware on the boards has been found through the tests to follow the model: d̄R = d̄R^(0) · (1 + Cth · ΔT) , (3) where Δt is the variation of the temperature estimated by the mcu internal sensor with respect to a reference t(0) = 25 °c, d̄R^(0) is the value of the estimator at t(0) and cth is an experimentally estimated temperature coefficient characterizing the channel.
thanks to the adoption of the already mentioned reference voltage supply strategy, the power gating method described in subsection 3.7, and an accurate optimization in the design of the entire system, cth has been found to be adequately independent of temperature and aperture for a given channel. regarding the power gating, more precisely, an effective solution was to use low-rdson external pmos devices (207 mω at 27.5 °c) as switches. the rdson variation of these pmos devices, estimated using spice simulation, is equal to 11.9 mω over a temperature range between 15 °c and 50 °c. this variation has negligible effects on the final measurement: in the worst case the maximum difference in supply voltage at the potentiometers is 7.9 µv, while the quantization step of the adc is 806 µv. the experimental extraction of the coefficient cth for each sensing channel was carried out by obtaining d̄R on a set of aperture values from a minimum up to the full scale, applying an entire temperature cycle for each aperture value. during a cycle, the clamp was kept mechanically blocked at the test aperture, so as not to involve the thermal expansion of the abs in the dispersion of d̄R (see next subsection). the data collected made it possible to write, by means of (3), an overdetermined system of linear equations, whose solution (least-squares method) provided the temperature coefficient for the channel under test. an example of the temperature cycle applied to the five sensing channels for the test aperture value d ≅ 9 cm is reported in figure 6 and table 1. a weak thermal inertia effect was recorded in some tests, due to the need to speed up the cycle: however, this effect did not cause any major issue during the calibration phase, since the resulting pattern is, in almost all cases, symmetrical to the interpolation line and since the points are almost superimposable on the line once thermal equilibrium is reached.
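for a single test aperture, the extraction of cth via least squares reduces to fitting a straight line to the readings collected over a temperature cycle; a self-contained sketch with synthetic data (the d0 and cth values below are made up for illustration):

```python
# sketch of the C_th extraction for a single test aperture: readings
# collected over a temperature cycle are fitted with the line
# d_R = d0 * (1 + C_th * dT) by ordinary least squares; d0 and C_th
# below are made-up values used only to exercise the fit.
def fit_cth(delta_t, d_r):
    n = len(delta_t)
    mx = sum(delta_t) / n
    my = sum(d_r) / n
    sxx = sum((x - mx) ** 2 for x in delta_t)
    sxy = sum((x - mx) * (y - my) for x, y in zip(delta_t, d_r))
    slope = sxy / sxx
    d0 = my - slope * mx       # reading at the reference temperature
    return d0, slope / d0      # C_th = slope / d0

dts = list(range(-5, 46, 5))                      # dT values, degC
readings = [0.09 * (1.0 - 5e-4 * dt) for dt in dts]
d0_est, cth_est = fit_cth(dts, readings)
```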
it is worth noting that in on-field applications the temperature slope is never so high, thus avoiding thermal inertia. once the estimate of the coefficient cth is available for every channel, (3) can be used to convert (according to the model and the assumptions discussed) the uncalibrated averaged reading d̄R obtained at temperature te (monitored by the mcu internal sensor) into the value that would be obtained at the reference t(0). this thermally-calibrated estimate is indicated as d̄R−T in the following. figure 6. temperature cycles imposed on the overall final prototype, equipped with pmos instead of the mic5365-3.3 ldo, for all five sensing elements. table 1. cth and r² values, from sensing element no. 1 up to sensing element no. 5. coefficients of the regression lines are computed by means of the least-squares method. channel | cth | r² 1 | -0.29 | 0.776 2 | -0.0362 | 0.641 3 | -0.55 | 0.913 4 | -0.51 | 0.925 5 | -0.061 | 0.754 4.3. thermal expansion of the pliers the material used to manufacture the pliers, namely abs, has a known coefficient of linear expansion λ. therefore it is possible to estimate, given a reference length r(0) at a reference temperature t(0), and knowing the temperature variation Δt from the latter, the variation of the effective length Δr of the clamp arm and, consequently, to rewrite (1) in the following form: d = d^(0) + 2 · ΔR · cos(π/2 − α(Vout)/2) . (4) the calibration of all electronic components with respect to thermal dispersion discussed in the previous subsection can thus be supplemented with the calibration for the thermal expansion of the pliers: it suffices to rewrite and estimate (4) in the form: d̂ = d̄R−T + 2 · ΔR · cos(π/2 − α(V̄R)/2) (5) to obtain an estimate d̂ fully compensated in temperature.
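the correction of (5) can be sketched as follows; the expansion coefficient used here is a typical literature value for abs, not a figure from the paper:

```python
import math

# sketch of the correction of (5); LAMBDA_ABS is a typical literature
# value for the linear expansion of ABS, not a figure from the paper.
LAMBDA_ABS = 9e-5     # 1/degC, assumed coefficient of linear expansion
R0 = 0.12             # arm length at the reference temperature, m

def compensated_diameter(d_r_t, alpha, delta_t):
    # d_hat = d_{R-T} + 2 * dR * cos(pi/2 - alpha/2), dR = lambda*R0*dT
    d_r = LAMBDA_ABS * R0 * delta_t
    return d_r_t + 2.0 * d_r * math.cos(math.pi / 2.0 - alpha / 2.0)
```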
4.4. non-linearity calibration up to now, issues associated with system non-linearities (adc non-linearity, potentiometer resistive taper non-linearity, geometric dispersion in the plier printing process, etc.) that may affect the d̂ estimator have not been addressed. in order to complete the real-time calibration process, it is only required to extract a single calibration curve across the entire measuring range, at constant temperature (t(0) = 25 °c in this specific example), using a precision reference caliper. by means of a cubic spline interpolation process, a set of coefficients is extracted from this curve and saved in the memory of the mcu. at each query on the diameter of the fruit these coefficients are recalled and the linearized output d̂L (i.e., the final reading of the channel) is computed from d̂ in real time. a special-purpose laboratory set-up was arranged to calibrate the opening of the pliers. all the pliers of a single node are installed on a structure constrained to the carriage of a cnc machine. as the carriage moves, it allows the pliers to be opened to user-defined dimensions, simulating the presence of a fruit of known size. a specially developed matlab script allows the entire opening range of the pliers to be swept at known spatial intervals, while at the same time the opening values detected in the absence of calibration are acquired. the point couples detected in this way are automatically stored in the flash memory of the microcontroller, thus completing the non-linearity calibration process. in this way each node can be programmed with the same firmware, without the need to modify the code according to the set of calibration coefficients needed for each set of pliers. 5. experimental results and in-field tests 5.1. static tests a set of static tests was performed on the prototype sensing channels at room temperature in order to estimate the effective resolution obtained.
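the link between the rms deviation, the effective quantization step and the enob reported below (70 µm, ≅250 µm and 9.0 bit, respectively) follows from the uniform-quantization relation u = dq/√12; a short sketch:

```python
import math

# relation between the static-test figures reported in the text: a
# standard uncertainty u maps to an equivalent quantization step
# dq = u * sqrt(12) and to an ENOB over the 12 cm full-scale range.
u = 70e-6        # standard uncertainty from the static tests, m
d_fs = 0.12      # full-scale range, m

dq = u * math.sqrt(12.0)       # effective quantization step, ~250 um
enob = math.log2(d_fs / dq)    # ~9 bits
```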
during the tests, estimates d̂L on sets of aperture points randomly distributed within the full range were collected and compared with the readings taken with a precision caliper adopted as reference. by means of simple algorithms the rms deviation was estimated, leading to a standard uncertainty corresponding to 70 µm. based on this value, the average effective quantisation step dq ≅ 250 µm was derived, and an enob = 9.0 bit was estimated for a full-scale range of 12 cm. 5.2. in-field test a first trial was conducted between june and august 2021, in an apple orchard (malus x domestica borkh., cv "fuji-fujiko") of the experimental farm of the department of agricultural and food science of the university of bologna (cadriano, 44.5543 n 11.41872 e, figure 7). the orchard, established in early 2019 in a deep silt-clay soil, was characterized by a row spacing of 2 m and a tree spacing of 3 m. the trees, grafted onto m9 rootstock and trained to a multi-axis system (10 axes/tree) with a n-s orientation, are watered by means of a drip irrigation system with a distance between emitters of 0.5 m and an emitter flow of 2.3 l h−1. commercial practices, such as manual fruit thinning to achieve optimal crop load, winter pruning to maintain the desired training system, mineral fertilization, pest and disease control, herbicide application below the trees and mowing of the inter-canopy cover crop, are employed to manage the orchard. within the orchard, 18 fruits, randomly selected along a whole row at the beginning of the season, were monitored cyclically and their growth measured by hand using gauges. the five pliers of the prototype were instead placed simultaneously on a different set of five fruits of two adjacent trees, located at about one third of the length of the orchard, in an inner row.
measurements d̂L by means of the prototype were performed automatically every 10 minutes, while manual readings were carried out depending on the weather conditions and the activities to be done in the field, without any pre-scheduled periodicity, according to a traditional approach. such a short time interval between two acquisitions was chosen because one of the purposes of the device is to evaluate the variation in fruit size between day and night. this time interval obviously can have an impact in terms of the amount of data transmitted or stored; however, it would be straightforward to vary the data acquisition rate according to different, specific needs. in-field results of the five sensing elements can be seen in figure 8. as can be noted in the graph, three sensing elements (i.e., #1, #2 and #4, left) out of five correctly monitored and recorded (apart from recoverable perturbations) the fruit growth throughout the ripening season without the need for any manning (apart from a single event, detailed below), while sensing elements #3 and #5 (right) suffered from unrecoverable issues. figure 7. position of the five pliers of the prototype within the orchard (red dot), and position of the 18 apples manually measured as reference (yellow box). these issues were not related to the performance of the prototype itself, but to the capability of the pliers to remain reliably fixed to the fruit without any shift in position: normal manual activities, such as pruning the trees or applying spray treatments, or particularly adverse weather conditions, can actually result in a shifting of the pliers. these two records will therefore be disregarded.
in the case of sensing element #1, instead, the spikes observed were caused by non-disruptive, recoverable manual activities around the fruit, and can be easily filtered out by software post-processing, while in sensing element #2 the single discontinuity observed around the 30th of july was caused by strong winds, which caused the sensor to detach from the fruit. in this case, a manual re-positioning was needed. such a macroscopic issue can be successfully dealt with by a fast visual inspection of the fruits under test, and the exceptionality of the event still allows it to be stated that the monitoring process required a practically negligible work overhead from the orchard staff during the whole ripening season. due to the difficulty of obtaining repeatable reference measurements on fruits still attached to the tree (the non-sphericity and non-rigidity of the apple make it practically impossible to get perfectly repeatable measurements by manual gauges), in order to perform a comparative analysis with the results from the prototype the absolute growth rate (agr), expressed in mm/day and computed on intervals of several days, was used as an averaged estimator for comparisons. the duration of the intervals over which the agr was estimated is not constant but depends on the weather conditions and on the other activities required in the field according to common practices. firstly, the average agr from the gauge measurements was computed for each individual fruit in the row (see in figure 9 the statistical distribution obtained) over the total time of the trial (from 21/06/2021 to 06/08/2021). similarly, the average agr involving all fruits was calculated on each time interval considered. the same was done, for the same time intervals, by averaging the values obtained with sensing elements #1, #2 and #4 (table 2). the values of table 2 are graphed in figure 10, where it is possible to appreciate that the trends of the data collected by the sensing elements and the hand-measured readings are very close. figure 8. data acquired by the five-sensing-element fully calibrated prototype during the in-field test in the apple orchard at the experimental farm of the department of agricultural and food sciences of the university of bologna between june and august 2021. on the left (a), successful records that show a correct growth rate and that are not characterized by unrecoverable spikes or disturbances are shown. on the right (b), records that were not useful for this study, due to unrecoverable disturbances (i.e., the clamps suffered from shifts on the fruit due to either weathering or manual activities), are shown. table 2. from left to right: average agr values computed, over three time intervals, considering prototype sensing elements 1-4, 1-2-4 and all 18 hand-measured fruits, respectively. time interval | agr 1-4 in mm/day | agr 1-2-4 in mm/day | agr gauges in mm/day 21/06/2021 to 08/07/2021 | 0.56 | 0.55 | 0.41 08/07/2021 to 23/07/2021 | 0.58 | 0.52 | 0.41 23/07/2021 to 06/08/2021 | 0.43 | 0.44 | 0.29 table 3. agr estimated over the whole duration of the test considering sensing elements 1-4, 1-2-4 and all hand-measured fruits using a gauge. time interval | agr in mm/day | deviation in % 21/06/21 to 06/08/21, pliers 1-4 | 0.53 | 36 21/06/21 to 06/08/21, pliers 1-2-4 | 0.51 | 31 21/06/21 to 06/08/21, gauges | 0.39 | (reference) table 3 reports the comparison between the agr estimated by the sensor-node prototype throughout the ripening season and the corresponding value from the reference manual readings. on average, the prototype value is slightly higher than the manually measured one.
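the agr estimator used throughout the comparisons is simply the diameter increase over an interval divided by its duration in days; a sketch with made-up diameters (only the dates match the first interval of table 2, the diameters do not come from the paper):

```python
from datetime import date

# sketch of the AGR estimator: diameter increase over an interval
# divided by its duration in days, expressed in mm/day. the diameters
# below are made up for illustration; only the dates match table 2.
def agr(d_start_mm, d_end_mm, day_start, day_end):
    return (d_end_mm - d_start_mm) / (day_end - day_start).days

growth = agr(55.0, 64.4, date(2021, 6, 21), date(2021, 7, 8))
```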
however, looking at the histogram in figure 9, it can be appreciated that the agr from the sensing elements, which were positioned on the fruits of two adjacent trees and are thus representative of a local sampling of the growth rate, is nonetheless within the statistics of the events detected on a larger scale in the orchard, i.e., an entire row. according to these results, and by considering the statistical distribution in figure 9, 4-5 equally-spaced sensor-nodes could suffice for an adequate sampling of fruit growth along an entire row with these characteristics. a further test was performed using 3 nodes, each consisting of 5 pliers, between august 2022 and october 2022. the orchard was the same as in the first test, and the sensor-nodes were randomly distributed over non-adjacent trees in the same row where the previous year's test was conducted. the sensor-nodes, whose pliers were shared between adjacent trees, were programmed with the same settings as those used in the previous year's test. thanks to the experience gained with the tests of the previous year, this campaign was performed with a new version of the pliers. structural changes in the shape were introduced in order to lighten it, improve its adhesion to the fruit and simplify the 3d printing process while minimising production tolerances (figure 11). the average of the agr factor estimated using these devices was compared to the average of the same factor computed on 22 hand-measured fruits, randomly selected, of the same orchard. of 15 total pliers, only 3 revealed problems with slipping on the fruit. the values measured by these pliers will therefore not be considered for the calculation of the agr. the remaining 12 channels all show approximately the same growth trend (figure 12). thanks to the new design of the pliers, their more accurate positioning on the fruits made it possible to decrease the number of required relocations, making the measurements less perturbed than in the previous test.
in fact, significant spikes are no longer present in the acquisitions (figure 12). for agr determination, manually acquired data were collected on different dates, selected according to weather conditions and field operations. therefore, two time intervals were obtained within which the agr was computed (table 4). these values were compared with the ones obtained from the data acquired with the multi-channel sensor-nodes. figure 9. agr distribution of all the 18 apples manually measured, computed over the total time of the trial. figure 10. comparison between the averaged agr computed with data collected by the prototype sensing elements and reference data collected with a manual caliper. figure 11. rendering of the new plier. it can be seen that the structure has been lightened and the cut-out for the potentiometer has been included in the right-hand body of the plier. the spring is not shown. figure 12. measured diameter of apples in the 2022 test. only working pliers are shown. in this case, it was not considered useful to perform an analysis of the statistical distribution of the agr between fruits, as this had already been done in the past experimental campaign, but rather to verify the validity of the measurement with a statistically representative number of samples representing the conditions of the entire orchard. results are shown in table 5. the time intervals considered for calculating the agr factor using hand gauges are different from those considered for calculating the same parameter with our devices. this, however, is not a source of error, since the growth pattern of an apple follows different phenological stages, each characterised by a typical growth profile. therefore, while remaining within each phenological phase, the agr value is approximately the same.
In particular, the periods 01/08/2022 - 30/08/2022, considered for the manual acquisition, and 18/08/2022 - 30/08/2022, used for the estimation of the AGR with our system, belong to the same phenological phase and can therefore be compared without appreciable error. The delay in the start of the comparison is caused by the need for the pliers to adapt to mechanical clearances: looking at Figure 12, the growth in the first few days appears extremely slow, which is not actually the case. In the two time periods the AGR values are very similar, experimentally confirming the good performance of the device.

Table 4. Average AGR values of hand-measured fruits computed, over two time intervals in the 2022 test, with a gauge.

  Time interval             Manual-acquisition AGR in mm/day
  01/08/2022 - 30/08/2022   0.34
  30/08/2022 - 03/10/2022   0.20

Table 5. Average AGR values of fruits, estimated with the sensor-node, over two time intervals during the 2022 test, and deviation from the AGR computed on hand-measured fruits.

  Time interval             Prototype-acquisition AGR in mm/day   Deviation in %
  18/08/2022 - 30/08/2022   0.33                                  -2
  30/08/2022 - 04/10/2022   0.17                                  -16

6. Conclusions

In this paper, a multi-channel sensor-node architecture and the related calibration procedure for continuous distributed monitoring of fruit growth were presented. The low-cost prototype implemented for validation purposes, realised by means of commercial boards and 3D-printed devices, nonetheless showed an adequately high effective resolution, with a full-scale range suitable for no-manning operation throughout the entire ripening season of apple-sized fruit species. By adopting a multi-step calibration procedure, it was possible to compensate each channel in real time for temperature dispersion, noise and non-linearity, which made the system rugged to environmental conditions and suitable for all-season operation. Thanks to the energy-harvesting sub-system and the possibility of real-time data transmission over the air, the sensor-node and its sensing elements can be positioned at the beginning of the season and operate, without any maintenance, until harvest time. The design of a custom board is planned, in order to integrate the harvesting stage, the MCU and the LoRa transceiver for performance and power-consumption optimisation.

References

[1] F. J. Pierce, P. Nowak, Aspects of precision agriculture, in Advances in Agronomy, D. L. Sparks (ed.), Academic Press, 1999, vol. 67, pp. 1-85. doi: 10.1016/S0065-2113(08)60513-1
[2] R. Khan, M. Zakarya, V. Balasubramanian, M. A. Jan, V. G. Menon, Smart sensing-enabled decision support system for water scheduling in orange orchard, IEEE Sensors Journal, vol. 21, no. 16, 2021, pp. 17492-17499. doi: 10.1109/JSEN.2020.3012511
[3] N. Zhang, M. Wang, N. Wang, Precision agriculture - a worldwide overview, Computers and Electronics in Agriculture, vol. 36, no. 2, 2002, pp. 113-132. doi: 10.1016/S0168-1699(02)00096-0
[4] B.-Y. Ooi, S. Shirmohammadi, The potential of IoT for instrumentation and measurement, IEEE Instrumentation & Measurement Magazine, vol. 23, no. 3, 2020, pp. 21-26. doi: 10.1109/MIM.2020.9082794
[5] D. Davcev, K. Mitreski, S. Trajkovic, V. Nikolovski, N. Koteli, IoT agriculture system based on LoRaWAN, 2018 14th IEEE Int. Workshop on Factory Communication Systems (WFCS), Imperia, Italy, 13-15 June 2018, pp. 1-4. doi: 10.1109/WFCS.2018.8402368
[6] M. J. Faber, K. M. van der Zwaag, W. G. V. dos Santos, H. R. d. O. Rocha, M. E. V. Segatto, J. A. L. Silva, A theoretical and experimental evaluation on the performance of LoRa technology, IEEE Sensors Journal, vol. 20, no. 16, 2020, pp. 9480-9489. doi: 10.1109/JSEN.2020.2987776
[7] M. Reichardt, C. Jürgens, Adoption and future perspective of precision farming in Germany: results of several surveys among different agricultural target groups, Precision Agriculture, vol. 10, no. 1, 2009, pp. 73-94. doi: 10.1007/s11119-008-9101-1
[8] M. Kernecker, A. Knierim, A. Wurbs, Common framework on innovation processes and farmers' interests. Online [accessed 2 June 2023] https://ec.europa.eu/research/participants/documents/downloadpublic?documentids=080166e5a96e5296&appid=ppgms
[9] A. Lakso, L. Corelli Grappadelli, J. Barnard, M. Goffinet, An expolinear model of the growth pattern of the apple fruit, Journal of Horticultural Science, vol. 70, no. 3, 1995, pp. 389-394. doi: 10.1080/14620316.1995.11515308
[10] H. Welte, Forecasting harvest fruit size during the growing season, Acta Horticulturae 276: II Int. Symposium on Computer Modelling in Fruit Research and Orchard Management, 1989, pp. 275-282. doi: 10.17660/ActaHortic.1990.276.32
[11] B. Morandi, F. Boselli, A. Boini, L. Manfrini, L. Corelli, The fruit as a potential indicator of plant water status in apple, Acta Horticulturae: VIII Int. Symposium on Irrigation of Horticultural Crops 1150, 2017, pp. 83-90. doi: 10.17660/ActaHortic.2017.1150.12
[12] A. Boini, L. Manfrini, G. Bortolotti, L. Corelli-Grappadelli, B. Morandi, Monitoring fruit daily growth indicates the onset of mild drought stress in apple, Scientia Horticulturae, vol. 256, 2019, p. 108520. doi: 10.1016/j.scienta.2019.05.047
[13] S. O. Link, M. E. Thiede, M. G. v. Bavel, An improved strain-gauge device for continuous field measurement of stem and fruit diameter, Journal of Experimental Botany, vol. 49, no. 326, 1998, pp. 1583-1587. doi: 10.1093/jexbot/49.326.1583
[14] B. Morandi, L. Manfrini, M. Zibordi, M. Noferini, G. Fiori, L. C. Grappadelli, A low-cost device for accurate and continuous measurements of fruit diameter, HortScience, vol. 42, no. 6, 2007, pp. 1380-1382. doi: 10.21273/HORTSCI.42.6.1380
[15] M. Thalheimer, A new optoelectronic sensor for monitoring fruit or stem radial growth, Computers and Electronics in Agriculture, vol. 123, 2016, pp. 149-153. doi: 10.1016/j.compag.2016.02.028
[16] C. Nandi, B. Tudu, C. Koley, Machine vision based techniques for automatic mango fruit sorting and grading based on maturity level and size, in: A. Mason, S. Mukhopadhyay, K. Jayasundera, N. Bhattacharyya (eds), Sensing Technology: Current Status and Future Trends II, Smart Sensors, Measurement and Instrumentation, vol. 8, 2014. doi: 10.1007/978-3-319-02315-1_2
[17] D. Shadrin, A. Menshchikov, A. Somov, G. Bornemann, J. Hauslage, M. Fedorov, Enabling precision agriculture through embedded sensing with artificial intelligence, IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 7, 2020, pp. 4103-4113. doi: 10.1109/TIM.2019.2947125
[18] D. Wang, C. Li, H. Song, H. Xiong, C. Liu, D. He, Deep learning approach for apple edge detection to remotely monitor apple growth in orchards, IEEE Access, vol. 8, 2020, pp. 26911-26925. doi: 10.1109/ACCESS.2020.2971524
[19] D. Stajnko, M. Lakota, M. Hočevar, Estimation of number and diameter of apple fruits in an orchard during the growing season by thermal imaging, Computers and Electronics in Agriculture, vol. 42, no. 1, 2004, pp. 31-42. doi: 10.1016/S0168-1699(03)00086-3
[20] Z. Wang, K. Walsh, B. Verma, On-tree mango fruit size estimation using RGB-D images, Sensors, vol. 17, no. 12, 2017, p. 2738. doi: 10.3390/s17122738
[21] K. Bresilla, G. D. Perulli, A. Boini, B. Morandi, L. Corelli Grappadelli, L. Manfrini, Single-shot convolution neural networks for real-time fruit detection within the tree, Frontiers in Plant Science, vol. 10, 2019, p. 611. doi: 10.3389/fpls.2019.00611
[22] F. Rossi, L. Manfrini, M. Venturi, L. Corelli Grappadelli, B. Morandi, Fruit transpiration drives interspecific variability in fruit growth strategies, Horticulture Research, vol. 9, 2022, pp. 1-10. doi: 10.1093/hr/uhac036
[23] L. M. Peppi, M. Zauli, L. Manfrini, L. C. Grappadelli, L. De Marchi, P. A. Traverso, Implementation and calibration of a low-cost sensor node for high-resolution, continuous and no-manning recording of fruit growth, 2021 IEEE Int. Instrumentation and Measurement Technology Conference (I2MTC), Glasgow, United Kingdom, 17-20 May 2021, pp. 1-6. doi: 10.1109/I2MTC50364.2021.9459851
[24] TT Electronics, Rotary position sensor PHS11 series. Online [accessed 2 June 2023] https://www.ttelectronics.com/ttelectronics/media/productfiles/datasheets/phs11.pdf
[25] STMicroelectronics, UM1724 user manual. Online [accessed 2 June 2023] https://www.st.com/resource/en/user_manual/um1724-stm32-nucleo64-boards-mb1136-stmicroelectronics.pdf
[26] STMicroelectronics, RM0038 reference manual. STM32L100xx, STM32L151xx, STM32L152xx and STM32L162xx advanced Arm-based 32-bit MCUs. Online [accessed 2 June 2023] https://www.st.com/resource/en/reference_manual/rm0038-stm32l100xx-stm32l151xx-stm32l152xx-and-stm32l162xx-advanced-armbased-32bit-mcus-stmicroelectronics.pdf
[27] STMicroelectronics, HTS221 datasheet. Online [accessed 2 June 2023] https://www.st.com/resource/en/datasheet/hts221.pdf
[28] STMicroelectronics, LD39050 500 mA low quiescent current and low noise voltage regulator. Online [accessed 2 June 2023] https://www.st.com/resource/en/datasheet/ld39050.pdf
[29] Arm Limited, SX1272 mbed shield. Online [accessed 2 June 2023] https://os.mbed.com/components/sx1272mb2xas/
[30] Semtech, SX1272 datasheet. Online [accessed 2 June 2023] https://www.semtech.com/products/wireless-rf/lora-core/sx1272#download-resources
[31] Gough's Tech Zone, MicroSD card power consumption & SPI performance. Online [accessed 2 June 2023] https://goughlui.com/2021/02/27/experiment-microsd-card-power-consumption-spi-performance/
[32] C. Pratt, Apple flower and fruit: morphology and anatomy, Horticultural Reviews, vol. 10, 2011, pp. 273-308. doi: 10.1002/9781118060834.ch8
[33] R. W. Porto, V. J. Brusamarello, I. Müller, F. L. C. Riano, F. R. de Sousa, Wireless power transfer for contactless instrumentation and measurement, IEEE Instrumentation & Measurement Magazine, vol. 20, no. 4, 2017, pp. 49-54. doi: 10.1109/MIM.2017.8006394
[34] STMicroelectronics, STEVAL-ISV012V1, data brief. Online [accessed 2 June 2023] https://www.st.com/resource/en/data_brief/steval-isv012v1.pdf
[35] Diodes Incorporated, DMP2200UDW dual P-channel enhancement mode MOSFET. Online [accessed 2 June 2023] https://www.diodes.com/assets/datasheets/dmp2200udw.pdf
MathMet measurement uncertainty training activity - overview of courses, software, and classroom examples

Acta IMEKO, ISSN: 2221-870X, June 2023, volume 12, number 2

Francesca R. Pennecchi (1), Peter M. Harris (2)
(1) Istituto Nazionale di Ricerca Metrologica (INRiM), Strada delle Cacce 91, 10135 Torino, Italy
(2) National Physical Laboratory (NPL), Hampton Road, Teddington TW11 0LW, UK

Section: Technical note
Keywords: measurement uncertainty (MU); training; MATHMET; overview; MU courses; classroom examples and software
Citation: Francesca R. Pennecchi, Peter M. Harris, MathMet measurement uncertainty training activity - overview of courses, software, and classroom examples, Acta IMEKO, vol. 12, no. 2, article 12, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-12
Section editor: Eric Benoit, Université Savoie Mont Blanc, France
Received July 1, 2022; in final form March 14, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: NPL's work was supported by the UK government's Department for Business, Energy, and Industrial Strategy (BEIS) as part of its National Measurement System (NMS) programme.
Corresponding author: Francesca Pennecchi, e-mail: f.pennecchi@inrim.it

1. Introduction

In October 2021, a two-year MATHMET [1] activity was launched [2] with the aim of developing new training material and establishing an active community for those involved in teaching measurement uncertainty. This "Measurement Uncertainty Training Activity" [3] is conducted by a consortium of MATHMET and non-MATHMET members who committed themselves, on a voluntary basis, to developing new training material on measurement uncertainty (MU) and to strengthening collaborations among experts and interested people at metrology institutes, universities, industry and within the accreditation and legal metrology communities. Concerning the development of new material for MU training, this will include an overview of existing courses, software and examples, which can guide trainees across the tools and materials already available at different levels and in different fields of application. It is also planned to prepare some short videos explaining the need for, and common difficulties in, evaluating MU. All material will be made publicly available on the dedicated webpage [3] of the MATHMET website and will be actively disseminated to a large set of practitioners in metrology, academia, and industry. In the present abstract, we will focus on the survey of existing courses and software for MU evaluation, together with the review of selected examples, suitable for MU training, which will be revisited in the form of proper classroom examples. Further new training material that will be developed from scratch by the MU training activity is presented separately at this joint symposium [4] and will not be detailed here.

2. Overview of existing courses and software

Starting from the plethora of courses on MU usually offered by the partners of the consortium, the first step was to undertake a review of such courses, in order to inform the wider audience about their availability and characteristics.
In this respect, MATHMET will serve as a reference point to make connections among trainers and trainees at a European level and beyond.

Abstract - A collaborative activity on "measurement uncertainty (MU) training" under the auspices of the European Metrology Network for Mathematics and Statistics (MATHMET) is underway. This abstract reports on the progress of surveys of existing training courses on MU, software for MU evaluation, and classroom examples to support the understanding of methods for MU evaluation. An appreciable number of training courses, software and examples have been identified and are currently under review. These tools and materials will be analysed and categorised according to their main features and characteristics. Special attention will be given to their adherence to the JCGM guidelines (i.e., JCGM 100:2008, JCGM 101:2008 and JCGM 102:2011). It is hoped that the knowledge assembled in this activity will help practitioners to make good choices about appropriate material to support their training needs, as well as help developers of training material to ensure good coverage of their training products and target them at user needs.

Courses will be categorised according to their main features to enable the audience to easily identify which course best fits their needs. Analogously, based on the availability of a variety of software (SW) performing MU evaluation, a critical overview of such tools is currently underway to analyse their characteristics, such as their status, the kind of methods they implement and their main operating conditions.

2.1. Existing courses on MU

So far, information on 41 training courses taught by 14 partners and 1 stakeholder of the activity consortium has been collected, for a total amount of more than 880 hours of lessons per year.
For each course, a specifically developed template was completed by a reference contact, who was free to provide the following details:
- general information (title of the course, its integration into a training framework or project, specific field of application, organising body, website advertising the course, duration, frequency, language(s), location, material provided to attendees, attendance fee, final examination, kind of certification, etc.)
- audience (target audience, specific constraints and prerequisites, average number of attendees)
- teacher(s) or technical contact(s)
- technical contents
- classroom examples

From a preliminary analysis of the features, it is worth noting that most of the courses are specifically dedicated to MU evaluation, but a good number have a broader scope (covering, for example, metrology in general or for a specific SI quantity). These courses were included in the review as they make a strong effort in teaching MU evaluation. Figure 1 gives an idea of their specific fields of application. The majority of courses (78 %) are given on a recurring basis, but some are e-learning courses available on demand. 15 % are offered in more than one language (see the language distribution in Figure 2). 50 % require some sort of final examination and 50 % have an enrolment fee. 15 % of courses are aimed at legal metrology, 37 % at NMIs, 46 % at calibration and testing laboratories and 24 % at academia (with overlapping categories).
Concerning the "technical contents", the contact person completing the template was required to describe the main topics of the course and the extent to which they comply with the prescriptions of the JCGM WG1 suite of documents [5], with a special focus on the teaching of the law of propagation of uncertainty (LPU) and the Monte Carlo method (MCM) for the propagation of distributions:
- review of mathematical tools (linear algebra, partial derivatives, linear regression, ...)
- review of probability concepts (random variables, distributions, ...)
- basic metrological concepts (measurand, measurement model, error, accuracy, precision, repeatability, reproducibility, ...)
- input standard uncertainties and covariances (GUM Type A and Type B)
- LPU (GUM first- or also higher-order Taylor series expansion, expanded uncertainty)
- LPU (JCGM 102, multivariate models)
- MCM for propagation of distributions (JCGM 101, univariate models)
- MCM for propagation of distributions (JCGM 102, multivariate models)
- validating the LPU against the MCM
- reporting the measurement result

As a result, 34 % of courses provide a review of mathematical concepts, 85 % of probabilistic topics and 95 % of metrological topics. Almost all discuss how to model and evaluate input standard uncertainties and covariances, as well as the application of the LPU to univariate models. Interestingly, though, only 20 % address the LPU for multivariate models (JCGM 102). Concerning the teaching of the MCM, 44 % of courses treat the MCM for univariate models (JCGM 101) but only 15 % for multivariate models (JCGM 102): see Figure 3 and Figure 4. Moreover, the training on the MCM is not homogeneous across the audience: it is almost never taught in courses for the legal metrology community, a third of the time to calibration and testing laboratory personnel, half of the time to NMI employees and most of the time in courses for academia.
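The two approaches named above can be contrasted in a few lines of code: the MCM of JCGM 101 propagates samples of the input distributions through the model, while the first-order LPU of JCGM 100 propagates only standard uncertainties. This is a minimal sketch; the measurement model, the input distributions and all numerical values are hypothetical, chosen only for illustration.

```python
import math
import random

random.seed(1)
M = 200_000  # number of Monte Carlo trials

# Hypothetical measurement model Y = X1 / X2 with
# X1 ~ N(10.0, 0.1^2) and X2 ~ N(2.0, 0.05^2)
ys = sorted(random.gauss(10.0, 0.1) / random.gauss(2.0, 0.05)
            for _ in range(M))

y_hat = sum(ys) / M                          # estimate of the measurand
u_mc = math.sqrt(sum((y - y_hat) ** 2 for y in ys) / (M - 1))

# 95 % probabilistically symmetric coverage interval
# taken from the simulated output distribution
y_lo, y_hi = ys[int(0.025 * M)], ys[int(0.975 * M)]

# First-order LPU (JCGM 100) for the same model, for comparison:
# u_y^2 = (u1 / x2)^2 + (x1 * u2 / x2^2)^2
u_lpu = math.sqrt((0.1 / 2.0) ** 2 + (10.0 * 0.05 / 2.0 ** 2) ** 2)
```

For this nearly linear model the two standard uncertainties agree closely; validating the LPU against the MCM in exactly this way is one of the topics listed above.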
As a general comment, it seems there is a gap in the treatment of multivariate models, both on the side of the LPU and, even more so, concerning the application of the MCM. This implies that little attention is given to training on the calculation of covariances among measurands that depend on common input quantities and are hence correlated. This seems in contrast with the fact that the main target audience (46 %) is calibration and testing laboratories, and calibration procedures often involve multivariate models. An encouraging result is that 75 % of the courses dealing with the LPU for multivariate models address the problem by also teaching the corresponding MCM.

Figure 1. Fields of application of the courses.

Figure 2. Languages used in the courses (the total number of occurrences of languages used in the courses, also considering the various combinations of languages, was 49).

In the questionnaire, it was also possible to specify which references are used and whether any software is applied or mentioned in the course. The references mainly reported are documents of the JCGM WG1 suite, ISO and OIML standards, EURAMET and ILAC guides, as well as documents by EA, UKAS, DIN, Eurachem/CITAC, etc. 68 % of the courses rely on, or at least mention, the use of some SW or programming language, like Excel, MATLAB, R, LabVIEW, Origin, the NIST Uncertainty Machine and the GUM Workbench. Among the technical topics treated on top of the standard ones (i.e., LPU and MCM), the following are also mentioned: Bayesian inference, conformity assessment, linear regression, and quality control. Comments concerning the "classroom examples" are left until Section 3.

2.2. Software for MU evaluation

A further survey was initiated by a subset of the activity partners, i.e., INRiM, NPL, LNE, IPQ, IMBiH, METAS and POLITO, with the aim of categorising available software related to MU and summarising the methods offered by such software to the end users. A list of software was agreed within the consortium, encompassing 50 SW items for MU evaluation, coming from several sources (mainly from a Wikipedia webpage [6]). 35 SW items were already analysed by the involved partners, by filling in an agreed list of characteristics/features. The software ranged from basic uncertainty calculators to quite complex, broad-scope software, and from user-friendly web applications to comprehensive collections of libraries and tools for uncertainty quantification. Some of those SW items are currently under analysis by the partners, considering the following characteristics:
- general information
- technical features
- adherence to JCGM 100:2008
- adherence to JCGM 101:2008
- adherence to JCGM 102:2011

For the "general information" and "technical features" items, information is reported on the licence, version, programming language, whether the SW is computer-based or a web application, its language(s), documentation, and evidence of verification and validation. Concerning the SW analysed so far, 74 % are cross-platform, 85 % are computer-based (15 % web applications), 54 % provide some evidence of validation, and all are available in an English version (some also in other languages). The distribution of the programming languages is shown in Figure 5.
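One of the JCGM 100 capabilities checked in the survey, the LPU with and without correlation among the input quantities, amounts to a single formula. The sketch below illustrates it for a generic two-input model; the sensitivity coefficients, standard uncertainties and correlation coefficient are hypothetical values, not taken from any of the surveyed SW.

```python
import math

def lpu_two_inputs(u1, u2, c1, c2, r12=0.0):
    """First-order law of propagation of uncertainty (JCGM 100) for a
    two-input model:
    u_y^2 = (c1 u1)^2 + (c2 u2)^2 + 2 c1 c2 r12 u1 u2."""
    var = (c1 * u1) ** 2 + (c2 * u2) ** 2 + 2.0 * c1 * c2 * r12 * u1 * u2
    return math.sqrt(var)

# Hypothetical standard uncertainties and sensitivity coefficients
u1, u2 = 0.03, 0.02
c1, c2 = 1.0, 2.5

u_y = lpu_two_inputs(u1, u2, c1, c2)            # uncorrelated inputs
u_y_corr = lpu_two_inputs(u1, u2, c1, c2, 0.8)  # correlated, r(x1, x2) = 0.8
U = 2.0 * u_y                                   # expanded uncertainty, k = 2
```

With a positive correlation and sensitivity coefficients of the same sign, the correlation term increases the combined standard uncertainty; neglecting it is exactly the kind of omission the adherence checks are meant to expose.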
In the questionnaire, moreover, it has to be stated whether the SW is able to handle correlated input quantities, non-linear models, more than one output quantity (most of the analysed SW have these features, i.e., 86 %, 89 % and 57 %, respectively), complex-valued quantities, implicit models, symbolic uncertainty evaluation, repeated input observations, and input imported from previous analyses (these features, instead, are generally less well covered by the SW). Information will also be given on the output results and their format.

The adherence of the implemented methods to the JCGM documents [5] is investigated in some detail. The aim is to assess the metrological relevance of each SW item and its level of compliance with recognised guidelines. Concerning JCGM 100:2008, the SW is checked against its ability to implement the LPU (without or with correlation among the input quantities, and implementing the first- or higher-order Taylor series approximation), to (analytically or numerically) calculate the sensitivity coefficients, to provide a summary of the standard uncertainty components, and to calculate the effective degrees of freedom and the expanded uncertainty at a prescribed coverage probability. In this respect, the majority of the SW implement the LPU based on the first-order Taylor approximation of the model (71 %) and provide sensitivity coefficients (55 %) and expanded uncertainties (54 %). The remaining capabilities are less frequently addressed. Concerning the JCGM 101:2008 and JCGM 102:2011 documents, the main features under investigation are the maximum numbers of Monte Carlo trials and of input quantities, the gallery of available (univariate or multivariate) input probability density functions, the application of the LPU to explicit or implicit (univariate or multivariate) measurement models, and the ability to provide a coverage interval for the output quantity at a prescribed coverage probability (also considering probabilistically symmetric and shortest coverage intervals), to perform an adaptive Monte Carlo procedure, and to validate the GUM uncertainty framework against the Monte Carlo method.

Figure 3. Courses teaching the MCM (JCGM 101).

Figure 4. Courses teaching the MCM (JCGM 102).

Figure 5. Programming languages.

As concerns the adherence to JCGM 101:2008, most of the analysed SW respect the document's prescriptions on how to assign the input probability density functions (60 %) and how to calculate an estimate and the associated uncertainty from the simulated output distribution (60 %). The other features are less well covered. As concerns the adherence to JCGM 102:2011, it is evident that a large gap exists: only 20 % of the SW implement the LPU for explicit multivariate models, and a meagre 6 % for implicit ones; 31 % of the SW apply a Monte Carlo procedure to calculate an estimate and the associated covariance matrix from the simulated multivariate distribution of the measurand.

In parallel with the above-described review of the available SW for MU evaluation, it is worth mentioning that MATHMET is developing a quality management system (QMS) for software, data and guidelines as one of the main outputs of the MATHMET-related joint network project. Relevant output is available in a dedicated publication [7] and uploaded to a MATHMET webpage dedicated to quality assurance tools [8].

3. Overview of classroom examples

Concerning the "classroom examples" offered in existing courses on MU, the contact persons were asked to provide information about some of the main examples treated and their characteristics, comprising:
- title
- short description
- application area (calibration, testing, conformity assessment, etc.)
- metrology area (mass, length, etc.)
- approach to MU evaluation (JCGM 100, JCGM 101, etc.)
- level of difficulty (simple, medium, difficult)
- whether supporting material exists or is to be developed

So far, 69 examples have been collected from 36 training courses, of which 46 identify "calibration" as the main application area. The examples are spread over 15 different metrology areas (including "not specified"), with the top two, "dimensional" (18/69) and "temperature" (13/69), accounting for almost one half of the examples. There is a focus on applying the LPU approach of JCGM 100:2008, either on its own (40/69) or in combination with other approaches (55/69) for comparison. Very few examples are classified as "difficult" (4/69), with most classified as "simple" (33/69); in this regard, though, the classification into the different levels of difficulty is likely to be quite subjective.

It is planned to review other sources of examples used for teaching the principles of MU and for demonstrating different methods for MU evaluation. One such source is the compendium [9] of examples that was the main output of the EMUE project [10]. The compendium presents 41 examples from six broad application areas:
- industry and society
- quality of life
- energy
- environment
- conformity assessment
- calibration, measurement, and testing

Figure 6, Figure 7 and Figure 8 present graphically a comparison of the examples taken from the two sources in terms of the categories of metrology area, approach (to MU evaluation) and level (of difficulty). The examples taken from the EMUE project are spread over 19 different metrology areas, with a more uniform distribution across them; the top two are "chemistry" (8/41) and "flow metrology" (6/41).
In terms of metrology area, the examples from the two sources appear complementary, perhaps reflecting that, in existing courses, examples from the "dimensional" and "temperature" areas are more accessible to a general audience and easier to teach, whereas the examples collected in the EMUE project reflect the wider interests of the partners involved in that project. There is again a focus on applying the LPU approach of JCGM 100:2008, either on its own or in combination with other approaches, but the examples from the EMUE project offer a wider range of approaches, including Bayesian, regression and "top-down" approaches to MU evaluation. Finally, a judgement about the level of difficulty of each example was made by one of the authors, regarding how a "non-expert" faced with the example in a training course might perceive it. The result was that all the examples were classified as "medium" to "difficult", so, in this regard, the examples from the two sources can again be considered complementary.

The data collected from these different sources serve as a basis for the identification of interesting cases to be further developed in the form of classroom examples. In general, the analysis of the results, and their comparison across the different sources, will support the identification of needs not covered by existing training courses, or of deficiencies in those courses, and will facilitate the exchange of knowledge between people teaching MU.

4. Conclusions

A collaborative activity on "measurement uncertainty training" under the auspices of the European Metrology Network for Mathematics and Statistics (MATHMET) is underway. This abstract reports on the progress of surveys of existing training courses on MU, software for MU evaluation, and examples to support the understanding of methods for MU evaluation. It appears that an appreciable amount of material is available: 41 training courses, 69 examples, and 50 items of software.
It is hoped that the knowledge assembled in this activity will help practitioners to make good choices about appropriate material to support their training needs, as well as help developers of training material to ensure good coverage of their training products and to target them at user needs. Current and future updated versions of the three surveys will be published on a dedicated webpage [11].

acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 5

Figure 6. Classification according to "metrology area" for the examples taken from existing training courses (top) and developed in the EMUE project (bottom).

Acknowledgement

The authors of this abstract wish to thank the colleagues of all the partners of the activity consortium for participating in the survey of courses, software and classroom examples. The consortium comprises [3]: PTB (coordination), CEM, GUM, IMBiH, IMS SAS, INRIM, IPQ, LNE, METAS, NPL, SMD, ACCREDIA Ente Italiano di Accreditamento, Deutsche Akademie für Metrologie (DAM), National Standards Authority of Ireland (NSAI), Politecnico di Torino, University of Konstanz.

Figure 7. As Figure 6 but using the classification of "approach (to MU evaluation)".

Figure 8. As Figure 6 but using the classification of "level (of difficulty)".

References

[1] European Metrology Network for Mathematics and Statistics home page. Online [Accessed 27 February 2023] https://www.euramet.org/european-metrology-networks/mathmet
[2] EMN MATHMET starts initiative on measurement uncertainty training, news story. Online [Accessed 27 February 2023] https://www.euramet.org/european-metrology-networks/mathmet/bugermenu/news
[3] MATHMET measurement uncertainty training activity home page. Online [Accessed 27 February 2023] https://www.euramet.org/goto/g1-e34bee
[4] K.
Klauenberg, Introduction to the MATHMET activity MU training, presented at the Joint IMEKO TC1-TC7-TC13-TC18 & MATHMET Symposium, Porto, Portugal, 31 August - 2 September 2022.
[5] JCGM publications: guides in metrology home page. Online [Accessed 27 February 2023] https://www.bipm.org/en/committees/jc/jcgm/publications
[6] List of uncertainty propagation software webpage. Online [Accessed 27 February 2023] https://en.wikipedia.org/wiki/list_of_uncertainty_propagation_software
[7] Keith Lines, Jean-Laurent Hippolyte, Indhu George, Peter Harris, A MATHMET quality management system for data, software, and guidelines, Acta IMEKO, vol. 11, no. 4 (2022). DOI: 10.21014/actaimeko.v11i4.1348
[8] Quality assurance tools webpage. Online [Accessed 27 February 2023] https://www.euramet.org/european-metrology-networks/mathmet/activities/quality-assurance-tools
[9] Good practice in evaluating measurement uncertainty, Compendium of examples, Adriaan M. H. van der Veen and Maurice G. Cox (editors), 27 July 2021.
Online [Accessed 27 February 2023] http://empir.npl.co.uk/emue/wp-content/uploads/sites/49/2021/07/compendium_m36.pdf
[10] Examples of measurement uncertainty evaluation project home page. Online [Accessed 27 February 2023] http://empir.npl.co.uk/emue/
[11] MATHMET measurement uncertainty training activity, for trainees measurement uncertainty training webpage. Online [Accessed 27 February 2023] https://www.euramet.org/european-metrology-networks/mathmet/activities/measurement-uncertainty-training-activity/for-trainees-measurement-uncertainty-training

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 185-193
acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 185

Doppler flow phantom failure detection by
combining empirical mode decomposition and independent component analysis with short time Fourier transform

Giorgia Fiori1, Fabio Fuiano1, Andrea Scorza1, Maurizio Schmid1, Silvia Conforto1, Salvatore A. Sciuto1
1 Department of Industrial, Electronic and Mechanical Engineering, Roma Tre University, Via della Vasca Navale 79, 00146 Rome, Italy

Section: Research Paper
Keywords: EMD; ICA; STFT; flow phantom failures; PW Doppler
Citation: Giorgia Fiori, Fabio Fuiano, Andrea Scorza, Maurizio Schmid, Silvia Conforto, Salvatore Andrea Sciuto, Doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time Fourier transform, Acta IMEKO, vol. 10, no. 4, article 29, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-29
Section Editors: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy
Received August 2, 2021; in final form December 4, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Giorgia Fiori, e-mail: giorgia.fiori@uniroma3.it

1. Introduction

Doppler flow phantoms are standard reference test devices usually employed in quality controls (QCs) for ultrasound (US) system performance evaluation [1]-[3]. They can simulate the main acoustic characteristics of biological tissues and reproduce repeatable flows whose regimes are similar to those in blood vessels [4]-[7]. To date, the lack of a generally accepted standard for B-mode and Doppler [8]-[12] has limited the use of US phantoms.
It should be noted that, even though such devices are widespread on the market, no shared standard yet focuses in detail on periodic and objective checks of their metrological and functional characteristics. This is surprising, since the existing commercial Doppler phantoms show several technical limitations [2], [3], [13], [14] affecting their reliability and traceability for Doppler QC testing. The main drawbacks of the most commonly used Doppler phantom model, the flow phantom, are the desiccation over time of the tissue mimicking material (TMM), the tendency of blood mimicking fluid (BMF) particles to form agglomerates and/or air bubbles, and the inconsistency of the phantom acoustic and pump mechanical properties over time [3]. Despite the awareness of such limitations, objective protocols and criteria for monitoring the degree of phantom defects are still lacking in the literature. In particular, some studies have focused on two different kinds of Doppler phantom stability, i.e., TMM and BMF stability [15]-[18]. The former refers to any physical modification in the TMM, while the latter indicates the presence of any solid and/or gaseous element in the BMF. More in detail, TMM stability can be compromised by (a) TMM fracture, e.g., due to desiccation over time, or (b) TMM erosion or BMF leakage, e.g., due to BMF action over time on the TMM in wall-less flow phantoms or to rupture of the tubing material. On the other hand, BMF stability highly depends on (a) any presence of air bubbles, particle

Abstract

Nowadays, objective protocols and criteria for the monitoring of phantom failures are still lacking in the literature, despite phantoms' technical limitations. In such a context, the present work aims at improving a previously proposed method for Doppler flow phantom failure detection. Such failures were classified as low frequency oscillations, high velocity pulses and velocity drifts.
The novel objective method, named EMODICA-STFT, is based on the combined application of the empirical mode decomposition (EMD), independent component analysis (ICA) and short time Fourier transform (STFT) techniques to pulsed wave (PW) Doppler spectrograms. After a first series of simulations and the determination of adaptive thresholds, phantom failures were detected on real PW spectrograms through the EMODICA-STFT method. Data were acquired from two flow phantom models set at five flow regimes, through a single ultrasound (US) diagnostic system equipped with a linear, a convex and a phased array probe, with two configuration settings. Despite the promising outcomes, further studies should be carried out on a greater number of Doppler phantoms and US systems, including an in-depth investigation of the proposed method's uncertainty.

agglomerates or TMM debris, and (b) unwanted variations of flow velocity regimes. Nevertheless, stability assessment is carried out through visual qualitative evaluation [15], [16], or without a specific and rigorous protocol [17], [18]. In the existing literature, other studies investigate and detect the failures that could compromise the stability of devices used in both the biomedical and mechanical fields. However, some issues should be taken into account: in [19], for example, it has been pointed out that a specific standard for mechanical heart pump testing procedures was still missing; investigations into such issues were limited to the early evaluation of failures using an analysis technique along with device testing before surgical implantation. In [20], centrifugal pump failures have been reviewed, highlighting the lack of an integrated system able to monitor all the major pump failures.
In such a context, the present study focuses on the improvement and testing of a previously developed short time Fourier transform-based image analysis method [21] for the automatic detection of the main Doppler flow instabilities that may arise. The proposed improved method applies the empirical mode decomposition (EMD) and independent component analysis (ICA) techniques combined with the short time Fourier transform (STFT), and is therefore named EMODICA-STFT; it automatically evaluates the phantom failures through pulsed wave (PW) Doppler spectrograms. EMD is a single-channel technique [22], [23], first introduced in [24], for decomposing a signal in time. It is widely used in combination with ICA for the effective processing of electrophysiological signals [25], [26]. An interesting feature of this combination is the possibility of successfully extracting both oscillatory and spike-like sources [22]. STFT is a time-frequency spectral analysis technique widely used in several scientific fields, such as structural mechanics, aeronautics, and biomedical engineering. It has been applied in structural health monitoring to detect damage in existing structures [27], to classify and predict delamination in smart composite laminates [28], to reveal corrosion and fatigue cracks in aircraft structures [29], and to analyse physiological signal characteristics and determine relevant parameters [30]-[33]. The goal of the present work is the implementation and testing of the EMODICA-STFT method: it processes PW Doppler spectrograms collected from two different Doppler flow phantoms through a single intermediate-technology-level US system equipped with three array probes (linear, convex, and phased array) at their central Doppler frequency. In Section 2, a brief overview of the techniques adopted in the proposed method is provided, and their combined application to three simulation cases is described.
In Section 3, the experimental setup used in this study and the application of the EMODICA-STFT method to PW spectrograms are discussed. In Section 4, results are presented and discussed on the basis of intra- and inter-phantom differences in the detected failures. Finally, in the concluding section, the major achievements and future developments of the research are reported.

2. EMODICA-STFT method application to phantom failures detection

BMF instability sources can be identified as any presence of air bubbles, particle agglomerates or TMM debris, and unwanted variations of flow velocity regimes. Consequently, the present study focuses on their detection in PW spectrograms, with particular reference to the following phantom failures:
1. low frequency oscillations, caused by any pump or hydraulic dampener inability to deliver a constant flow velocity when a continuous flow regime is set;
2. high velocity pulses, caused by any particle agglomerates or TMM debris in the phantom flow;
3. flow velocity drifts, due to the unwanted onset of pump acceleration (e.g., deriving from a failure in the control system).

2.1. EMD, ICA and STFT techniques

Empirical mode decomposition [22], [24] is a signal-processing tool that, through an iterative process, decomposes a signal x(t) into a finite set of intrinsic mode functions (IMFs) and a residual R_M(t), as follows:

$x(t) = \sum_{i=1}^{M} IMF_i(t) + R_M(t)$ ,  (1)

where IMF_i(t) is the i-th oscillatory mode. Each IMF has the following properties [22]: (a) it contains one frequency only, referred to as the instantaneous frequency, (b) its frequency is different from those of all the other functions, (c) it has zero mean value, and (d) it is an oscillatory function. One of the main advantages of EMD, compared to other decomposition techniques [22], is that it does not require any a-priori knowledge of the signal to be decomposed.
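The paper's decomposition was run in MATLAB; as a rough illustration of the sifting idea behind equation (1), the following is a minimal numpy/scipy sketch. It is not the authors' implementation: the stopping rules are deliberately simplified (a fixed number of sifting iterations, and sifting stops when too few extrema remain), and boundary handling is left to the spline's extrapolation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sift=10):
    """Extract one IMF by repeatedly removing the mean envelope (simplified sifting)."""
    h = x.copy()
    for _ in range(n_sift):
        max_idx = argrelextrema(h, np.greater)[0]
        min_idx = argrelextrema(h, np.less)[0]
        if len(max_idx) < 4 or len(min_idx) < 4:
            return None  # too few extrema: what is left is a residual trend
        upper = CubicSpline(t[max_idx], h[max_idx])(t)  # upper envelope
        lower = CubicSpline(t[min_idx], h[min_idx])(t)  # lower envelope
        h = h - (upper + lower) / 2.0                   # subtract local mean
    return h

def emd(x, t, max_imfs=8):
    """Decompose x into IMFs plus a residual, mirroring equation (1)."""
    imfs, residual = [], x.copy()
    for _ in range(max_imfs):
        imf = sift_imf(residual, t)
        if imf is None:
            break
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

# Two-tone test signal plus a slow trend (hypothetical data, not the paper's v_p(t))
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * t
imfs, res = emd(x, t)
# Equation (1): the IMFs and the residual reconstruct the signal exactly
assert np.allclose(np.sum(imfs, axis=0) + res, x)
```

By construction the residual is what remains after subtracting the extracted modes, so the reconstruction identity in (1) holds exactly whatever stopping rule is used.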
In turn, independent component analysis is a blind source separation technique [34] applied to a set of recorded signals y_i(t), whose aim is the extraction of unknown sources s_i(t), named independent components (ICs), under the assumption of statistical independence. In particular, y_i(t) can be expressed as a linear combination of s_i(t), as follows:

$\vec{y}(t) = [A] \cdot \vec{s}(t)$ ,  (2)

where [A] is an unknown matrix, called the mixing matrix. In the present study, a computationally improved ICA method was applied, namely FastICA [35], implemented in the MATLAB environment as a software package [36]. Finally, the short time Fourier transform is a set of Fourier transforms performed on a signal subdivided into overlapped or non-overlapped temporal segments through a window (e.g., rectangular, Hanning) translating in time. The Fourier transform is applied under the hypothesis of pseudo-stationarity of the temporal segments [37], which is achieved through the choice of a short translating window. The STFT expression for a generic discrete signal x(n) is the following:

$STFT(n, \omega) = \sum_{h=-\infty}^{+\infty} x(n+h)\, w_N(h)\, \mathrm{e}^{-\mathrm{j}\omega h}$ ,  (3)

where w_N(n) is the translating window. The corresponding normalized, real-valued, non-negative spectrogram S_x(n, ω) can be computed through the following expression:

$S_x(n, \omega) = \frac{2}{N \cdot CF} \cdot |STFT(n, \omega)|^2$ ,  (4)

where N is the sample window length and CF is a correction factor varying according to the chosen window amplitude [21]. Therefore, (4) has the advantage of taking the applied window type into account.

2.2. EMODICA-STFT simulated application

This subsection describes the EMODICA-STFT method proposed to assess the phantom failures through the combination of the previously described techniques, as shown in the block diagram (Figure 1).
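The normalized spectrogram of equations (3)-(4) can be sketched in a few lines of numpy. The sketch below uses the Table 1 settings (f_s = 100 Hz, Hanning window, N = 100, 60-sample overlap, 50-sample zero padding, CF = 2); the test tone is hypothetical and only checks that the peak lands on the right frequency bin.

```python
import numpy as np

def normalized_spectrogram(x, fs=100, N=100, overlap=60, zero_pad=50, CF=2):
    """Normalized spectrogram per equations (3)-(4), Hanning window per equation (5)."""
    w = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / N))  # Hanning window
    hop = N - overlap                                     # 40 samples -> 0.4 s step
    nfft = N + zero_pad
    frames = []
    for start in range(0, len(x) - N + 1, hop):
        seg = x[start:start + N] * w                      # windowed segment
        X = np.fft.rfft(seg, n=nfft)                      # STFT, equation (3)
        frames.append(2.0 / (N * CF) * np.abs(X) ** 2)    # normalization, equation (4)
    S = np.array(frames).T                                # rows: frequency bins
    f = np.fft.rfftfreq(nfft, d=1.0 / fs)                 # resolution fs/nfft = 0.67 Hz
    return f, S

# 10 Hz test tone sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
f, S = normalized_spectrogram(x, fs=fs)
peak_freq = f[np.argmax(S.mean(axis=1))]
assert abs(peak_freq - 10.0) < 1e-9
```

Note the time step hop/f_s = 0.4 s and frequency step f_s/(N + N_zero-pad) ≈ 0.67 Hz, matching the Δt and Δf quoted in Table 1.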
The following steps were applied to a real maximum flow velocity signal v_p(t) lasting ≈ 60 s, derived from PW spectrograms according to the procedure described in Section 3.2. It was chosen among the available data because it is a representative case study: the signal shows both high velocity pulses and low frequency oscillations. A simulated velocity drift v_dr(t) of 0.06 cm·s⁻² was added to v_p(t) as an increasing trend in time, obtaining a velocity signal v_p,tot(t) (Figure 2); then the steps described in the following, and represented in Figure 1, were applied.

First step. EMD is applied to v_p,tot(t): the IMFs and the residual R(t) are retrieved on the basis of (1).

Second step. FastICA is applied to the IMFs in order to compute the ICs and the mixing matrix [A]. At this point, the mean frequency f_mean of each independent component is obtained, and a frequency threshold th_f = 0.5 Hz is selected to discriminate between high- and low-frequency-content ICs. Two groups of ICs are thus obtained, namely IC_low and IC_high. These are multiplied by the mixing matrix [A] according to (2) to back-reconstruct the corresponding oscillatory modes IMF_low and IMF_high. Finally, the modes of each group are summed together to reconstruct two signals, v_p,low(t) and v_p,high(t), derived from v_p,tot(t): the first has frequency content lower than th_f (Figure 3a), while the second has frequency content higher than th_f (Figure 3b).

Third step. The STFT is applied, according to (3) and (4), to v_p,low(t) and v_p,high(t), with the settings reported in Table 1. As in [21], the chosen spectral window is the Hanning window, whose expression is the following:

$w_N(n) = \frac{1}{2}\left(1 - \cos\frac{2 \pi n}{N}\right)$ .  (5)

After the STFT application, two mesh plots are obtained.
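The grouping in the second step can be sketched as follows. The paper does not spell out how f_mean is computed, so the spectral centroid used here is an assumption; the two test components are synthetic stand-ins for ICs, not outputs of FastICA.

```python
import numpy as np

def mean_frequency(c, fs):
    """Spectral centroid of a component, used here as a stand-in for f_mean."""
    P = np.abs(np.fft.rfft(c)) ** 2
    f = np.fft.rfftfreq(len(c), d=1.0 / fs)
    return np.sum(f * P) / np.sum(P)

def split_components(components, fs, th_f=0.5):
    """Split components into low/high groups against th_f = 0.5 Hz."""
    low = [c for c in components if mean_frequency(c, fs) < th_f]
    high = [c for c in components if mean_frequency(c, fs) >= th_f]
    return low, high

fs = 100
t = np.arange(0, 60, 1 / fs)           # about 60 s, as for v_p(t)
slow = np.sin(2 * np.pi * 0.2 * t)     # 0.2 Hz: below th_f
fast = np.sin(2 * np.pi * 5.0 * t)     # 5 Hz: above th_f
low, high = split_components([slow, fast], fs)
assert len(low) == 1 and len(high) == 1
```

In the actual method each group is then projected back through the mixing matrix [A] and summed to give v_p,low(t) and v_p,high(t).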
In this way, it is possible to carry out the failure detection by distinguishing the contributions of the low frequency oscillations and of the high velocity pulses in the spectrograms of v_p,low(t) and v_p,high(t), respectively.

Figure 1. Block diagram of the proposed EMODICA-STFT method for flow phantom failures detection.
Figure 2. Real maximum flow velocity signal with low frequency oscillations and high velocity pulses, with a 0.06 cm·s⁻² velocity drift added.
Figure 3. (a) Reconstructed signal v_p,low(t) with low frequency content superimposed on v_p,tot(t) after the offset removal; (b) reconstructed signal v_p,high(t) with high frequency content superimposed on v_p,tot(t) after the offset removal.

Low frequency oscillations appear in the mesh plot of the v_p,low(t) normalized spectrogram as frequency pulses (Figure 4a, b). Therefore, an oscillation is detected when S_x shows a pulse limited both in time (according to the oscillation period) and in frequency (≈ 0 Hz), whose amplitude is higher than the threshold th_LF, automatically determined as follows:

$th_{LF} = \frac{\sigma_v^2}{\Delta f} \cdot G$ ,  (6)

where σ_v is the flow velocity standard deviation, which depends on the phantom model, Δf is the STFT frequency resolution, and G is a safety factor, chosen equal to 10 in this study. In turn, high velocity pulses appear in the normalized spectrogram of v_p,high(t) as a window covering almost all the frequency components in the mesh plot (Figure 4c, d). Therefore, a pulse is detected when the average amplitude of the frequency components between 5 and 30 Hz, for a single temporal instant, is higher than the threshold th_HF.
The latter can be automatically determined, considering the sampling frequency f_s of v_p(t), as follows:

$th_{HF} = \frac{\sigma_v^2}{\Delta f} \cdot G \cdot \frac{F_{fr}}{f_s/2}$ ,  (7)

where F_fr is a factor accounting for the extent of the frequency range in which the failure occurs in the normalized spectrogram. In this case, where a frequency range between 5 and 30 Hz was considered, F_fr = 25 Hz. The choice to reduce the frequency range in which the detection is carried out was necessary to compensate for non-ideal pulses [21].

Fourth step. The fourth step of the EMODICA-STFT method is the detection of velocity drifts from the EMD residual R(t). After applying the least squares method to R(t) (Figure 5), any velocity drift can be evaluated through the angular coefficient m of the straight line that best approximates the residual trend. In particular, a velocity drift is detected when |m| is higher than the threshold th_dr, which can be automatically determined as:

$th_{dr} = \frac{\sigma_v}{t_{PW}} \cdot G$ ,  (8)

where t_PW is the velocity signal duration, while the safety factor G for the velocity drift detection was chosen equal to 2. The advantage of (8) relies on its dependence on the phantom flow velocity standard deviation σ_v, so that the velocity drift perception is not affected by the subjectivity of the human eye. The retrieved angular coefficient of R(t), shown in Figure 5, is lower than the simulated velocity drift added to v_p(t). This is likely due to the combination of v_dr(t) with the pre-existing trend of the real velocity signal under analysis.

Table 1. STFT parameters setting.
Parameter                    Symbol        Value
Sampling frequency (Hz)      f_s           100
Spectral window              -             Hanning
Window length (samples)      N             100
Overlap (samples)            N_overlap     60
Zero-padding (samples)       N_zero-pad    50
Correction factor            CF            2
Temporal resolution (s)      Δt            0.4
Frequency resolution (Hz)    Δf            ≈ 0.7

Figure 4.
Low frequency content signal v_p,low(t) represented through (a) a mesh plot with the detected frequency peaks (th_LF = 60 cm²·s⁻²·Hz⁻¹) and (b) its temporal evolution together with the signal v_p,tot(t); high frequency content signal v_p,high(t) represented through (c) a mesh plot with the detected frequency windows (th_HF = 30 cm²·s⁻²·Hz⁻¹) and (d) its temporal evolution together with the signal v_p,tot(t).

Finally, this simulation was repeated in two further cases to test the proposed method for different starting conditions: (a) v_p(t) with an additional velocity drift of 0.03 cm·s⁻² and (b) v_p(t) with an additional velocity drift of 0.09 cm·s⁻². The EMODICA-STFT method identified the same low frequency oscillations and high velocity pulses retrieved for the first simulation, while the obtained velocity drift angular coefficients were (164.1 ± 0.4)·10⁻⁴ cm·s⁻² (R² = 0.97) and (684.9 ± 0.7)·10⁻⁴ cm·s⁻² (R² = 0.99) for cases (a) and (b), respectively.

3. Materials and experimental data

In the present study, the EMODICA-STFT method for phantom failures assessment, implemented in MATLAB, is proposed as an improvement of the procedure previously described in [21]. This is achieved through the experimental setup described in the following section.

3.1. Experimental setup

Data were collected by using two Doppler flow phantoms, whose main characteristics are reported in Table 2. The first phantom under test (PUT) is the Gammex Optimizer® 1425A, a self-contained device [38] able to provide constant or pulsatile flow in the 1.7-12.5 ml·s⁻¹ range through an electric flow controller. The second PUT is the CIRS Model 069, an easy-to-assemble simulator [39] able to provide an average flow velocity between 5 and 68 cm·s⁻¹ through the action of a peristaltic pump providing a pulsatile flow; the latter can be converted into a constant flow through the connection of a dampener.
The acquisition started after 2 hours of phantom warm-up. In order to test stability, five constant flow rate values (low LF, low-medium LMF, medium MF, medium-high MHF and high HF) were set, as shown in Table 3. Since the two Doppler phantoms have different characteristics, the flow rates were set differently so as to guarantee the same mean velocity regimes (v̄_Gammex = v̄_CIRS), as detailed in [21]. A single US system equipped with three US probes (linear, convex, and phased array) was used to acquire six PW spectrograms lasting ≈ 10 s for each flow regime. Data were collected with two different PW Doppler settings, namely Set A and Set B, reported in Table 4, in order to compare the stability performance of the two test objects under different setting conditions. The sample volume length was kept fixed for both phantoms and settings, whereas the sample volume depth was changed according to the phantom model attenuation and kept consistent between Set A and Set B. The insonification angle was varied according both to the probe positioning on the scanning surface and to the different tube slopes of the two phantoms.

Figure 5. Least squares method applied to the EMD residual in the case of a 0.06 cm·s⁻² velocity drift added to the real maximum velocity signal v_p(t).

Table 2. Main characteristics of the two Doppler flow phantoms [38], [39].
Parameter                             Gammex Optimizer® 1425A      CIRS Model 069
Tissue mimicking material             water-based mimicking gel    Zerdine tissue mimicking gel
Attenuation                           0.50 dB·cm⁻¹·MHz⁻¹           0.70 dB·cm⁻¹·MHz⁻¹
TMM sound speed                       1540 ± 10 m·s⁻¹              1540 ± 10 m·s⁻¹
Tube inner diameter                   5.0 mm                       4.8 mm
Flow velocity standard deviation (*)  2 cm·s⁻¹                     3 cm·s⁻¹
Tube slope                            40°                          70°
Dimensions                            40.7 × 22.9 × 35.6 cm        20 × 12.5 × 27.5 cm
(*) The flow velocity standard deviations were estimated from the specifications reported in the phantom datasheets.

Table 3. Doppler phantom flow rate and mean flow velocity settings.
Flow phantom             Flow regime       Flow rate Q (ml·s⁻¹)   Mean flow velocity v̄ (cm·s⁻¹)
Gammex Optimizer 1425A   low LF            2.6                    13.2
                         low-medium LMF    3.7                    18.8
                         medium MF         4.8                    24.4
                         medium-high MHF   5.9                    30.0
                         high HF           7.0                    35.7
CIRS Model 069           low LF            2.4                    13.3
                         low-medium LMF    3.4                    18.8
                         medium MF         4.4                    24.3
                         medium-high MHF   5.4                    29.8
                         high HF           6.4                    35.4

Table 4. PW Doppler main configuration settings.
Parameter                          Set A                             Set B
Doppler frequency (MHz)            L = 5.21; C = 2.50; P = 2.50 (both settings)
Wall filter                        minimum                           medium
Spectrum resolution                minimum                           L = medium; C, P = minimum
Sample volume length (cm)          3.0 (both settings and phantoms)
Sample volume depth (mm)           Gammex = 48; CIRS = 40 (both settings)
Insonification angle (°)           Gammex = 52; CIRS: L, C = 70, P = 55 (both settings)
PW spectrogram duration (s)        ≈ 10
PW spectrogram total duration (s)  ≈ 60
L = linear, C = convex, P = phased array probe.

3.2. EMODICA-STFT on PW spectrograms

Each PW spectrogram was processed for the detection of the maximum velocity waveform, as in [21]. The pixel coordinates px_max associated with the maximum velocity values were detected through a gray level adaptive threshold th_max, automatically determined as 10% of the maximum gray level value [40]. The px_max coordinates were then associated with the corresponding flow velocity values v_max for each temporal instant, taking into consideration the maximum value displayed on the PW velocity scale. Then, the six v_max signals obtained for the same flow regime were juxtaposed. The EMODICA-STFT method was applied according to the block diagram in Figure 1 and with the STFT settings reported in Table 1. The thresholds applied for the detection of failures differ between the two Doppler phantoms because of their different flow velocity standard deviations (Table 2).
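With the Table 1 and Table 2 values, equations (6)-(8) reproduce the thresholds quoted for the two phantoms, and the fourth-step drift check reduces to comparing a least-squares slope against th_dr. A minimal sketch, assuming Δf = f_s/(N + N_zero-pad) from the Table 1 settings:

```python
import numpy as np

def thresholds(sigma_v, delta_f=100 / 150, fs=100, F_fr=25.0, t_pw=60.0):
    """Adaptive thresholds of equations (6)-(8); G = 10 for (6)-(7), G = 2 for (8)."""
    th_lf = sigma_v ** 2 / delta_f * 10                    # equation (6)
    th_hf = sigma_v ** 2 / delta_f * 10 * F_fr / (fs / 2)  # equation (7)
    th_dr = sigma_v / t_pw * 2                             # equation (8)
    return th_lf, th_hf, th_dr

th_lf_g, th_hf_g, th_dr_g = thresholds(sigma_v=2.0)  # Gammex 1425A, sigma_v = 2 cm/s
th_lf_c, th_hf_c, th_dr_c = thresholds(sigma_v=3.0)  # CIRS Model 069, sigma_v = 3 cm/s
assert round(th_lf_g) == 60 and round(th_hf_g) == 30  # 60 and 30 cm^2 s^-2 Hz^-1
assert round(th_lf_c) == 135 and th_hf_c == 67.5      # quoted as 135 and 67
assert abs(th_dr_g - 7e-2) < 1e-2 and abs(th_dr_c - 10e-2) < 1e-9

# Fourth step: drift detection from the least-squares slope of the EMD residual
# (synthetic residual with a 0.09 cm/s^2 drift, for illustration only)
t = np.arange(0, 60, 0.01)
residual = 0.09 * t + 0.5
m = np.polyfit(t, residual, 1)[0]
assert abs(m) > th_dr_g  # 0.09 > th_dr = 0.067: drift detected
```

The quoted CIRS value of 67 cm²·s⁻²·Hz⁻¹ appears to be 67.5 rounded; the slight mismatch between 2/60·2 ≈ 0.067 and the quoted 7·10⁻² likewise looks like rounding.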
According to (6)-(8), the computed threshold values were th_LF = 60 cm²·s⁻²·Hz⁻¹, th_HF = 30 cm²·s⁻²·Hz⁻¹ and th_dr = 7·10⁻² cm·s⁻² for the Gammex 1425A, and th_LF = 135 cm²·s⁻²·Hz⁻¹, th_HF = 67 cm²·s⁻²·Hz⁻¹ and th_dr = 10·10⁻² cm·s⁻² for the CIRS Model 069. It is worth noting that, due to the threshold dependency on the flow standard deviation, the higher threshold values retrieved for the CIRS phantom are a first indicator of its lower performance as compared to the Gammex phantom.

4. Results and discussion

The number of phantom failures detected for the two test objects according to the US probe, the flow regime (LF, LMF, MF, MHF and HF) and the PW Doppler settings (Set A and Set B) is reported in Table 5 and Table 6. According to [41], the standard deviation of each count can be computed as the square root of the counted value. The EMODICA-STFT method did not detect any velocity drift failure on the two phantoms, because the angular coefficients |m| of the straight lines, obtained by applying the least squares method to all the EMD residuals, were always lower than the determined th_dr. As regards the Gammex 1425A phantom, it should be noted that, independently of the US probe considered, the number of low frequency oscillations is globally limited, except for the LMF flow regime in Set A and Set B. As shown in Figure 6, a sinusoidal trend is clearly visible, suggesting that the phantom electric flow controller is no longer able to guarantee a constant flow regime of 3.7 ml·s⁻¹. On the other hand, both the convex and phased array probes show a higher number of high velocity pulses than the linear array probe, suggesting a probe-dependent sensitivity to BMF particle agglomerates. Furthermore, independently of the probe, the low flow regime LF shows the highest number of high velocity pulses. This may be due to the fact that the flow velocity is too low to dissolve the particle agglomerates.

Table 5.
Number of detected failures according to the US probe, the flow regime and the PW Doppler settings for the Gammex 1425A.
US probe  Flow regime  Setting  LF oscillation (th_LF = 60 cm²·s⁻²·Hz⁻¹)  HV pulse (th_HF = 30 cm²·s⁻²·Hz⁻¹)  Velocity drift (th_dr = 7·10⁻² cm·s⁻²)  m (cm·s⁻²)   R²
Linear    LF    Set A   6 ± 2    7 ± 3    −   -0.9·10⁻²   0.99
Linear    LF    Set B   −        3 ± 2    −   -1.0·10⁻²   0.94
Linear    LMF   Set A   49 ± 7   −        −   -0.9·10⁻²   0.99
Linear    LMF   Set B   45 ± 7   −        −   2.1·10⁻⁴    0.99
Linear    MF    Set A   −        −        −   -3.7·10⁻³   0.99
Linear    MF    Set B   −        −        −   -2.3·10⁻³   0.98
Linear    MHF   Set A   −        −        −   4.0·10⁻³    0.99
Linear    MHF   Set B   −        8 ± 3    −   -3.8·10⁻³   0.89
Linear    HF    Set A   −        −        −   -0.9·10⁻²   0.98
Linear    HF    Set B   −        1 ± 1    −   2.8·10⁻³    0.99
Convex    LF    Set A   2 ± 1    44 ± 7   −   4.0·10⁻²    0.98
Convex    LF    Set B   −        14 ± 4   −   -4.3·10⁻⁴   0.98
Convex    LMF   Set A   47 ± 7   2 ± 1    −   -4.9·10⁻³   0.99
Convex    LMF   Set B   44 ± 7   2 ± 1    −   -1.2·10⁻²   0.99
Convex    MF    Set A   −        9 ± 3    −   -1.6·10⁻³   0.99
Convex    MF    Set B   −        1 ± 1    −   -2.4·10⁻³   0.99
Convex    MHF   Set A   −        6 ± 2    −   -3.3·10⁻³   0.94
Convex    MHF   Set B   −        3 ± 2    −   -0.9·10⁻²   0.99
Convex    HF    Set A   2 ± 1    2 ± 1    −   -0.6·10⁻²   0.98
Convex    HF    Set B   −        1 ± 1    −   1.9·10⁻⁵    0.99
Phased    LF    Set A   −        25 ± 5   −   -0.8·10⁻³   0.99
Phased    LF    Set B   −        4 ± 2    −   3.1·10⁻³    0.92
Phased    LMF   Set A   44 ± 7   −        −   -1.9·10⁻²   0.98
Phased    LMF   Set B   47 ± 7   4 ± 2    −   1.2·10⁻³    0.99
Phased    MF    Set A   −        3 ± 2    −   -1.4·10⁻²   0.94
Phased    MF    Set B   −        8 ± 3    −   1.5·10⁻³    0.96
Phased    MHF   Set A   −        9 ± 3    −   1.0·10⁻³    0.96
Phased    MHF   Set B   −        10 ± 3   −   4.0·10⁻³    0.98
Phased    HF    Set A   −        6 ± 2    −   -3.3·10⁻³   0.99
Phased    HF    Set B   −        5 ± 2    −   -3.3·10⁻³   0.99

As regards the CIRS Model 069 simulator, no low frequency oscillation was detected through the phased array probe. With both the linear and convex array probes, a high number of oscillations was detected at the medium-high flow regime MHF (Figure 7). This could be due to a dampener failure at such a flow regime. Similarly to the Gammex 1425A, a higher number of high velocity pulses was detected for both the convex and phased array probes.
comparing the outcomes retrieved for the two doppler phantoms, the gammex 1425a shows the lowest number of low-frequency oscillations (excluding the particular case of the lmf regime) when compared with both the linear and convex array probes of the cirs model 069, while no oscillations were detected for the phased array probe. conversely, the gammex 1425a globally shows the highest number of high velocity pulses compared with the cirs model 069 for both the linear and convex array probes, while this trend seems to be reversed for the phased array probe.

table 6. number of detected failures according to the us probe, the flow regime and the pw doppler settings for the cirs model 069 (counts given as value ± standard deviation; "−" means no failure detected).

us probe | flow regime | pw setting | low freq. oscillation (th_lf = 135 cm²·s⁻²·hz⁻¹) | high velocity pulse (th_hf = 67 cm²·s⁻²·hz⁻¹) | velocity drift (th_dr = 10·10⁻² cm·s⁻²) | angular coefficient m (cm·s⁻²) | r²
linear | lf | set a | 4 ± 2 | − | − | 3.7·10⁻² | 0.99
linear | lf | set b | 2 ± 1 | − | − | −0.8·10⁻² | 0.99
linear | lmf | set a | − | − | − | 3.6·10⁻³ | 0.95
linear | lmf | set b | − | − | − | −1.8·10⁻⁴ | 0.99
linear | mf | set a | − | − | − | −1.9·10⁻³ | 0.97
linear | mf | set b | − | − | − | −1.0·10⁻³ | 0.99
linear | mhf | set a | 2 ± 1 | − | − | 0.8·10⁻² | 0.94
linear | mhf | set b | 11 ± 3 | 1 ± 1 | − | 2.0·10⁻³ | 0.99
linear | hf | set a | − | − | − | 3.8·10⁻⁴ | 0.99
linear | hf | set b | 2 ± 1 | − | − | 2.3·10⁻² | 0.99
convex | lf | set a | − | 5 ± 3 | − | 2.2·10⁻² | 0.99
convex | lf | set b | − | 7 ± 3 | − | −4.6·10⁻³ | 0.99
convex | lmf | set a | − | 9 ± 3 | − | −2.0·10⁻² | 0.95
convex | lmf | set b | − | 6 ± 3 | − | 1.0·10⁻² | 0.93
convex | mf | set a | 1 ± 1 | 17 ± 4 | − | 2.2·10⁻² | 0.95
convex | mf | set b | − | 11 ± 3 | − | 4.3·10⁻³ | 0.96
convex | mhf | set a | − | 1 ± 1 | − | −1.9·10⁻² | 0.99
convex | mhf | set b | 18 ± 4 | 2 ± 1 | − | 1.9·10⁻² | 0.99
convex | hf | set a | 5 ± 2 | 2 ± 1 | − | −2.6·10⁻² | 0.99
convex | hf | set b | 1 ± 1 | 1 ± 1 | − | 0.6·10⁻² | 0.99
phased | lf | set a | − | 8 ± 3 | − | 1.1·10⁻² | 0.98
phased | lf | set b | − | 3 ± 2 | − | −0.9·10⁻² | 0.98
phased | lmf | set a | − | 17 ± 4 | − | 1.4·10⁻² | 0.99
phased | lmf | set b | − | 11 ± 3 | − | −0.9·10⁻² | 0.99
phased | mf | set a | − | 14 ± 4 | − | 3.6·10⁻³ | 0.99
phased | mf | set b | − | 13 ± 4 | − | −1.8·10⁻³ | 0.94
phased | mhf | set a | − | 8 ± 3 | − | 0.9·10⁻² | 0.99
phased | mhf | set b | − | 12 ± 3 | − | 0.5·10⁻³ | 0.97
phased | hf | set a | − | 10 ± 3 | − | 0.5·10⁻² | 0.99
phased | hf | set b | − | 5 ± 2 | − | 1.4·10⁻³ | 0.99

figure 6. example of the sinusoidal trend in the lmf regime for the gammex 1425a acquired with the linear array probe in set a. figure 7.
example of the low frequency oscillations in the mhf regime for the cirs model 069 acquired with the linear array probe in set b.

since the same number of put failures can have a different relevance depending on the intended use of the ultrasound system to be checked through the put (e.g., in echocardiography, obstetrics and gynecology, pediatrics, etc.), the emodica-stft method may be applied with different thresholds to ad hoc us systems and probe models over time. therefore, the same put may be suitable for testing a restricted number of us scanners only. this could be an advantage for the technical assessment of the above medical devices, as well as for put maintenance.

5. conclusions

doppler phantoms are standard reference test devices that, nowadays, are not yet covered by a shared standard focusing on the objective evaluation of their performance and failures. in particular, phantom stability assessment is currently carried out through visual and subjective evaluations, or without a rigorous protocol. therefore, in the present study a novel method, named emodica-stft and based on the combined application of the emd, ica and stft techniques, is proposed and tested to automatically determine, through the processing of pw doppler spectrograms, the number of phantom failures. the main flow phantom failures were classified as low frequency oscillations, high velocity pulses and velocity drifts. data were collected from two flow phantoms by a single diagnostic us system equipped with three probe models. tests were carried out in two different us configuration settings and five flow regimes set on the test objects. after a series of simulations, adaptive thresholds for the detection of each failure were determined, depending on the standard deviation of the put flow velocity.
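the velocity-drift criterion described above (no drift as long as |m| stays below th_dr) reduces to a least-squares line fit. a minimal sketch, with synthetic traces standing in for the emd residue of the maximum flow velocity signal:

```python
import numpy as np

def has_velocity_drift(residual, t, th_dr):
    """fit residual ~ m*t + q by least squares and flag a drift failure
    when |m| exceeds th_dr (cm/s^2); residual stands in for the emd
    residue of the maximum flow velocity trace."""
    m, q = np.polyfit(t, residual, 1)
    return bool(abs(m) > th_dr), float(m)

t = np.linspace(0.0, 60.0, 1200)            # 60 s acquisition, synthetic
stable = 60.0 + 1.0e-3 * t                  # slope well below threshold
drifting = 60.0 - 0.2 * t                   # clear downward drift

th_dr = 7e-2                                # gammex threshold, cm/s^2
flag_stable, m_stable = has_velocity_drift(stable, t, th_dr)
flag_drift, m_drift = has_velocity_drift(drifting, t, th_dr)
```

with the gammex threshold of 7·10⁻² cm·s⁻², the first trace passes while the second is flagged; the slopes reported in tables 5 and 6 are exactly the |m| values this kind of fit produces.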
consequently, the emodica-stft method was applied to the maximum flow velocity signals automatically extracted from the pw doppler spectrograms. finally, the number of detected failures was determined for both doppler phantoms. on the basis of these promising outcomes, further studies should be carried out (a) on a higher number of doppler phantoms, (b) on a larger number of us diagnostic systems and (c) including an in-depth investigation of the uncertainty of the proposed method.

acknowledgement

the authors wish to thank hitachi healthcare and jan galo of the clinical engineering service at i.r.c.c.s. children hospital bambino gesù for hardware supply and technical assistance in data collection.

references

[1] international electrotechnical commission, iec 61685:2002-04, ultrasonics – flow measurement systems – flow test object, 2002.
[2] j. e. browne, a review of ultrasound quality assurance protocols and test devices, phys. med. 30 (2014) pp. 742-751. doi: 10.1016/j.ejmp.2014.08.003
[3] s. m. shalbi, a. a. oglat, b. albarbar, f. elkut, m. a. qaeed, a. a. arra, a brief review for common doppler ultrasound flow phantoms, j. med. ultrasound 28 (2020) pp. 138-142. doi: 10.4103/jmu.jmu_96_19
[4] k. k. dakok, m. z. matjafri, n. suardi, a. a. oglat, s. e. nabasu, a review of carotid artery phantoms for doppler ultrasound applications, j. med. ultrasound 29 (2021) pp. 157-166. doi: 10.4103/jmu.jmu_164_20
[5] a. a. oglat, m. z. matjafri, n. suardi, m. a. abdelrahman, m. a. oqlat, a. a. oqlat, a new scatter particle and mixture fluid for preparing blood mimicking fluid for wall-less flow phantom, j. med. ultrasound 26 (2018) pp. 134-142. doi: 10.4103/jmu.jmu_7_18
[6] m. alshipli, m. a. sayah, a. a. oglat, compatibility and validation of a recent developed artificial blood through the vascular phantom using doppler ultrasound color- and motion-mode techniques, j. med. ultrasound 28 (2020) pp. 219-224. doi: 10.4103/jmu.jmu_116_19
[7] a. a. oglat, m. z. matjafri, n.
suardi, m. a. oqlat, a. a. oqlat, m. a. abdelrahman, o. f. farhat, m. s. ahmad, b. n. alkhateb, s. j. gemanam, s. m. shalbi, r. abdalrheem, m. shipli, m. marashdeh, characterization and construction of a robust and elastic wall-less flow phantom for high pressure flow rate using doppler ultrasound applications, natural and engineering sciences 3 (2018) pp. 359-377. doi: 10.28978/nesciences.468972
[8] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on the adaptive snr threshold method for depth of penetration measurements in diagnostic ultrasounds, appl. sci. 10 (2020). doi: 10.3390/app10186533
[9] a. scorza, g. lupi, s. a. sciuto, f. bini, f. marinozzi, a novel approach to a phantom based method for maximum depth of penetration measurement in diagnostic ultrasound: a preliminary study, 2015 ieee international symposium on medical measurements and applications (memea), turin, italy, 7 – 9 may 2015. doi: 10.1109/memea.2015.7145230
[10] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko 10 (2021) pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[11] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a novel sensitivity index from the flow velocity variation in quality control for pw doppler: a preliminary study, proc. of 2021 ieee international symposium on medical measurements and applications (memea), neuchâtel, switzerland, 23 – 25 june 2021. doi: 10.1109/memea52024.2021.9478686
[12] g. fiori, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a preliminary study on the average maximum velocity sensitivity index from flow velocity variation in quality control for color doppler, measurement: sensors 18 (2021). doi: 10.1016/j.measen.2021.100245
[13] s. cournane, a. j. fagan, j. e.
browne, an audit of a hospital-based doppler ultrasound quality control protocol using a commercial string doppler phantom, phys. med. 30 (2014) pp. 380-384. doi: 10.1016/j.ejmp.2013.10.001
[14] ipem report no 102, quality assurance of ultrasound imaging systems, 2010, isbn 978-1-903613-43-6.
[15] k. v. ramnarine, t. anderson, p. r. hoskins, construction and geometric stability of physiological flow rate wall-less stenosis phantoms, ultrasound med. biol. 27 (2001) pp. 245-250. doi: 10.1016/s0301-5629(00)00304-5
[16] a. malone, d. chari, s. cournane, i. naydenova, a. fagan, j. browne, investigation of the assessment of low degree (< 50%) renal artery stenosis based on velocity flow profile analysis using doppler ultrasound: an in-vitro study, phys. med. 65 (2019) pp. 209-218. doi: 10.1016/j.ejmp.2019.08.016
[17] j. v. grice, d. r. pickens, r. r. price, technical note: a new phantom design for routine testing of doppler ultrasound, med. phys. 43 (2016) pp. 4431-4434. doi: 10.1118/1.4954205
[18] m. y. park, s. e. jung, j. y. byun, j. h. kim, g. e. joo, effect of beam-flow angle on velocity measurements in modern doppler ultrasound systems, ajr am. j. roentgenol. 198 (2012) pp. 1139-1143.
doi: 10.2214/ajr.11.7475
[19] s. m. patel, p. e. allaire, h. g. wood, a. l. throckmorton, c. g. tribble, d. b. olsen, methods of failure and reliability assessment for mechanical heart pumps, artif. organs 29 (2005) pp. 15-25. doi: 10.1111/j.1525-1594.2004.29006.x
[20] k. k. mckee, g. forbes, i. mazhar, r. entwistle, i. howard, a review of major centrifugal pump failure modes with application to the water supply and sewerage industries, proc. of the icoms asset management conference, gold coast, qld, australia, 16 may 2011. online [accessed 4 december 2021] https://espace.curtin.edu.au/bitstream/handle/20.500.11937/28560/159989_38214_icoms%20-%20paper%2032%20-%20k%20mckee.pdf?sequence=2&isallowed=y
[21] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, doppler flow phantom stability assessment through stft technique in medical pw doppler: a preliminary study, proc. of 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), rome, italy, 7 – 9 june 2021. doi: 10.1109/metroind4.0iot51437.2021.9488513
[22] b. mijović, m. de vos, i. gligorijević, j.
taelman, s. van huffel, source separation from single-channel recordings by combining empirical-mode decomposition and independent component analysis, ieee trans. biomed. eng. 57 (2010) pp. 2188-2196. doi: 10.1109/tbme.2010.2051440
[23] z. wu, n. e. huang, ensemble empirical mode decomposition: a noise-assisted data analysis method, adv. adaptive data anal. 1 (2009) pp. 1-41. doi: 10.1142/s1793536909000047
[24] n. e. huang, s. zheng, s. r. long, m. c. wu, h. h. shih, q. zheng, n.-c. yen, c. c. tung, h. h. liu, the empirical mode decomposition and the hilbert spectrum for nonlinear and nonstationary time series analysis, proc. r. soc. lond. a 454 (1998) pp. 903-995. doi: 10.1098/rspa.1998.0193
[25] g. r. naik, s. e. selvan, h. t. nguyen, single-channel emg classification with ensemble-empirical-mode-decomposition-based ica for diagnosing neuromuscular disorders, ieee trans. neural syst. rehabil. eng. 24 (2016) pp. 734-743. doi: 10.1109/tnsre.2015.2454503
[26] g. wang, c. teng, k. li, z. zhang, x. yan, the removal of eog artifacts from eeg signals using independent component analysis and multivariate empirical mode decomposition, ieee j. biomed. health inform. 20 (2016) pp. 1301-1308. doi: 10.1109/jbhi.2015.2450196
[27] h. r. ahmadi, n. mahdavi, m. bayat, a novel damage identification method based on short time fourier transform and a new efficient index, structures 33 (2021) pp. 3605-3614. doi: 10.1016/j.istruc.2021.06.081
[28] a. khan, d.-k. ko, s. c. lim, h. s. kim, structural vibration-based classification and prediction of delamination in smart composite laminates using deep learning neural network, composites part b: engineering 161 (2019) pp. 586-594. doi: 10.1016/j.compositesb.2018.12.118
[29] m. le, j. kim, s. kim, j. lee, b-scan ultrasonic testing of rivets in multilayer structures based on short-time fourier transform analysis, measurement 128 (2018) pp. 495-503. doi: 10.1016/j.measurement.2018.06.049
[30] d. cordes, m. f. kaleem, z. yang, x. zhuang, t.
curran, k. r. sreenivasan, v. r. mishra, r. nandy, r. r. walsh, energy-period profiles of brain networks in group fmri resting-state data: a comparison of empirical mode decomposition with the short-time fourier transform and the discrete wavelet transform, front. neurosci. 15 (2021). doi: 10.3389/fnins.2021.663403
[31] v. gupta, m. mittal, qrs complex detection using stft, chaos analysis, and pca in standard and real-time ecg databases, j. inst. eng. india ser. b 100 (2019) pp. 489-497. doi: 10.1007/s40031-019-00398-9
[32] a.-c. tsai, j.-j. luh, t.-t. lin, a novel stft-ranking feature of multi-channel emg for motion pattern recognition, expert systems with applications 42 (2015) pp. 3327-3341. doi: 10.1016/j.eswa.2014.11.044
[33] a. hyvärinen, p. ramkumar, l. parkkonen, r. hari, independent component analysis of short-time fourier transforms for spontaneous eeg/meg analysis, neuroimage 49 (2010) pp. 257-271. doi: 10.1016/j.neuroimage.2009.08.028
[34] a. hyvärinen, j. karhunen, e. oja, what is independent component analysis, in: independent component analysis. a. hyvärinen, j. karhunen, e. oja (editors). wiley-interscience, new york, ny, usa, 2001, isbn 0-471-22131-7, pp. 147-163.
[35] a. hyvärinen, e. oja, a fast fixed-point algorithm for independent component analysis, neural computation 9 (1997) pp. 1483-1492. doi: 10.1162/neco.1997.9.7.1483
[36] http://research.ics.aalto.fi/ica/software.shtml online [accessed 4 december 2021]
[37] b. boashash, heuristic formulation of time-frequency distributions, in: time-frequency signal analysis and processing. b. boashash (editor). academic press, 2015, isbn 978-0-12-398499-9, pp. 65-102.
[38] gammex, optimazer 1425a: ultrasound image analyzer for doppler and gray scale scanners. online [accessed 4 december 2021] https://cspmedical.com/content/102-1086_doppler_user_manual.pdf
[39] cirs tissue simulation & phantom technology, doppler ultrasound flow simulator – model 069.
online [accessed 4 december 2021] http://www.3000buy.com/uploads/soft/180118/cirs069.pdf
[40] f. marinozzi, f. bini, a. d’orazio, a. scorza, performance tests of sonographic instruments for the measure of flow speed, 2008 ieee international workshop on imaging systems and techniques (ist), chania, greece, 10 – 12 september 2008. doi: 10.1109/ist.2008.4659939
[41] j. r. taylor, an introduction to error analysis: the study of uncertainties in physical measurements, university science books, sausalito, ca, usa, 1997, isbn 0-935702-75-x, pp. 245-260.

acta imeko
december 2013, volume 2, number 2, 67 – 72
www.imeko.org

development of nanoparticle sizing system using fluorescence polarization

terutake hayashi, masaki michihata, yasuhiro takaya, kok foong lee
osaka university, 2-1 yamadaoka, suita, osaka 565-0871, japan

section: research paper
keywords: nanoparticle; fluorescence polarization; brownian motion; rotational diffusion coefficient; particle sizing
citation: terutake hayashi, masaki michihata, yasuhiro takaya, kok foong lee, development of nanoparticle sizing system using fluorescence polarization, acta imeko, vol. 2, no. 2, article 12, december 2013, identifier: imeko-acta-02 (2013)-02-12
editor: paolo carbone, university of perugia
received: april 15, 2013; in final form: november 17, 2013; published: december 2013
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported in part by a grant-in-aid for scientific research from the ministry of education, culture, sports, science and technology of japan (grant-in-aid for exploratory research 21686015)
corresponding author: terutake hayashi, e-mail: hayashi@mech.eng.osaka-u.ac.jp

1. introduction

the integration of nanoparticles into devices produces unique electronic, photonic, and catalytic properties. it offers the prospect of both new fundamental scientific advances and useful technological applications. many of the fundamental properties of materials (e.g., optical, electrical, and mechanical) can be expressed as functions of the size, shape, composition, and structural order [1, 2]. it is important to evaluate nanoparticle sizes and maintain a constant state (size distribution, average size, shape uniformity).
the size distribution of monodispersed nanoparticles in a solvent can be measured using dynamic light scattering [3, 4]. however, the scattering method is inappropriate for detecting the sizes of particles in a mixture that includes both small monodispersed particles and large particles such as agglomerates: since the signal intensity depends on the sixth power of the particle diameter, the scattering signal from monodispersed particles is too small to detect compared with the signal from their agglomerates. the size distribution of nanoparticles with a wide size distribution can be measured using transmission electron microscopy [5], but this requires a dried sample, and it is difficult to evaluate both the average diameter and the size distribution of nanoparticles because of the instability of their diameters in a solvent. to measure the sizes of nanoparticles with a wide size distribution in a solvent, we developed an optical microscopy system that enables fluorescence polarization (fp) measurement and optical observation. the fp method can trace the rotational diffusion constant of brownian motion in a fluorescent molecule; when a fluorophore is used to label nanoparticles, the rotational diffusion coefficient corresponds to the size of the nanoparticles. the system makes it possible to evaluate nanoparticle sizes over a wide range because the fluorescence signal intensity is independent of changes in the nanoparticle sizes. in this paper, we describe a fundamental experiment to verify the feasibility of using this method for different sizes of nanoparticles.

2. measurement of rotational diffusion coefficient and particle sizing
2.1. measurement of rotational diffusion coefficient

we have developed an fp method [6, 7, 8] to measure nanoparticle sizes by measuring the rotational diffusion coefficient.
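the sixth-power scaling mentioned in the introduction can be made concrete with a two-line calculation (the 10 nm and 100 nm sizes are chosen purely for illustration):

```python
# rayleigh-regime scattered intensity scales as d**6, so one 100 nm
# agglomerate scatters as much light as a million 10 nm monomers
d_mono, d_agg = 10.0, 100.0        # nm, illustrative sizes
intensity_ratio = (d_agg / d_mono) ** 6
print(intensity_ratio)             # 1000000.0
```

this is why the monomer signal is swamped in dynamic light scattering, whereas the fluorescence signal used here does not grow with particle size.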
when linearly polarized light is irradiated on a nanoparticle labeled with a fluorophore, the fluorescence signal initially maintains the polarization state of the irradiation light. as shown in figure 1, small particles have a high degree of rotation due to brownian motion, and the polarization plane of the emitted light varies greatly from that of the excitation light; in contrast, larger particles have a low degree of rotation, so light is emitted in a polarized plane similar to that of the excitation light. the fluorescence signal thus depolarizes as time passes. the fluorescence anisotropy depends on the angle of the particle rotation: depolarization is rapid for particles with small volumes, whereas the polarization state is maintained longer in particles with large volumes. the fluorescence anisotropy r(t) is related to the rotational diffusion coefficient d_r of a nanoparticle, and the particle size can be calculated from d_r on the basis of an analysis using the debye–stokes–einstein (dse) relationship [8, 9] when the particle shape is approximated to that of a sphere. here r(t) is defined by

$$r(t) = \frac{i_{\parallel} - i_{\perp}}{i_{\parallel} + 2\,i_{\perp}} \qquad (1)$$

where the first and second subscripts refer to the orientation of the excitation and that of the emission, respectively.
i∥ and i⊥ are the horizontally polarized (p-polarized) and vertically polarized (s-polarized) components of the fluorescence, respectively, when horizontally polarized light is irradiated. we can also express r(t) as a function of time t as

$$r(t) = (r_0 - r_\infty)\,e^{-t/\theta} + r_\infty, \qquad i(t) = \frac{1}{\tau}\,e^{-t/\tau} \qquad (2)$$

where i(t) is the instantaneous emission intensity normalized to a unit-integrated value. the parameter r_∞ reflects the extent of the rotational reorientation of the fluorophore. in this case, the steady-state anisotropy r̄ is obtained by integrating the intensity-weighted r(t), as shown in eq. (3). the steady-state anisotropy r̄ of a fluorophore undergoing isotropic rotational diffusion is related to the fluorescence lifetime τ and rotational correlation time θ [4]:

$$\bar{r} = \int_0^\infty r(t)\,i(t)\,dt = r_\infty + \frac{r_0 - r_\infty}{1 + \sigma} \qquad (3)$$

where r_0 is a limiting value in the absence of rotation, given by the relative orientation of the absorption and emission transition moments, and σ is the ratio τ/θ. measurements of the anisotropy decay can reveal a multiplicity of rotational correlation times, reflecting heterogeneity in the size, shape, and internal motions of the fluorophore–nanoparticle conjugate. the rotational correlation times are determined in either the time domain or the frequency domain; the rotational diffusion coefficient can be precisely calculated from the fluorescence anisotropy in the frequency domain. when an excitation pulse is used as a forcing function, eq. (3) is transformed into a decay process in which the fluorescence lifetime τ is no longer linked to the correlation time. the rotational correlation times are determined by measuring three parameters: Δφ, y_ac, and y_dc. to define them, we measure the sinusoidal waveforms of both i∥ and i⊥ in the frequency domain: a series of i∥ and i⊥ values is measured while the relative phase between the excitation light and the detector gain is adjusted, with the excitation light and detector gain modulated using the same radial frequency.
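equations (1) and (3) translate directly into code. a minimal sketch with illustrative values (r_0 = 0.4 and the 4.1 ns alexa fluor 488 lifetime quoted in section 4.1; the 0.17 ns correlation time is a placeholder consistent with a ~1 nm fluorophore in water, not a measured value):

```python
def anisotropy(i_par, i_perp):
    """eq. (1): r = (i_par - i_perp) / (i_par + 2*i_perp)."""
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

def steady_state_anisotropy(r0, r_inf, tau, theta):
    """eq. (3): r = r_inf + (r0 - r_inf)/(1 + sigma), with sigma = tau/theta."""
    return r_inf + (r0 - r_inf) / (1.0 + tau / theta)

# fast rotation (theta << tau) strongly depolarizes the steady-state emission
r_bar = steady_state_anisotropy(r0=0.4, r_inf=0.0, tau=4.1, theta=0.17)
```

because the lifetime (4.1 ns) far exceeds the correlation time (0.17 ns), the steady-state anisotropy collapses to a few percent of r_0, which is exactly why small fluorophores depolarize the emission so effectively.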
figure 1. concept of fluorescence polarization method.

after the sinusoidal signals of i∥ and i⊥ are acquired, each data set is processed to yield the frequency-dependent amplitudes and the phase shift between the excitation and emission light. the resulting polarized emission components are modulated at the same frequency but phase shifted, with a phase decay Δφ relative to each other [10]. i∥ac and i⊥ac are the amplitudes of i∥ and i⊥ in the frequency domain, respectively, and i∥dc and i⊥dc are the average values of i∥ and i⊥, respectively. these signals are characterized by the ratios

$$y_{ac} = i_{\parallel ac}/i_{\perp ac}, \qquad y_{dc} = i_{\parallel dc}/i_{\perp dc}. \qquad (4)$$

furthermore, the fluorescence lifetime of the emission light can be measured in the frequency domain: the lifetime τ is determined from a series of data sets for the excitation light intensity i(t) and the total light intensity of the emission light. the parameters Δφ, y_ac, and y_dc are then related to the parameters r̄, r_0, r_∞, and ωτ by the closed-form expressions of eqs. (5)-(7), while the steady-state anisotropy follows directly from y_dc as

$$\bar{r} = \frac{y_{dc} - 1}{y_{dc} + 2}. \qquad (8)$$

the rotational correlation time θ is then given, as a function of the measured Δφ, y_ac, and y_dc, by the closed-form expression of eq. (9). to define the rotational correlation time θ, the fluorescence lifetime τ is required. the fluorescence lifetime τ is determined by the phase decay Δφ and the modulation m of the fluorescence, which is excited using light whose intensity i(t) varies sinusoidally with time [10]:
$$i(t) = a + b\,\sin\omega t \qquad (10)$$

where ω is the angular velocity of the modulated excitation light [10]. as a consequence of the finite duration of the excited state, the modulated fluorescence emission is delayed in phase by an angle Δφ relative to the excitation; in addition, the modulation of the fluorescence decreases. the intensity of the fluorescence is given by

$$i'(t) = a' + b'\,\sin(\omega t - \Delta\varphi). \qquad (11)$$

we acquire the phase decay Δφ between the sinusoidal curve of the excitation light and the emitted fluorescence, without the emission polarizer, in the frequency domain by

$$\tan\Delta\varphi = \omega\tau. \qquad (12)$$

2.2. particle sizing using sphere model

for brownian particles reorienting in a liquid, the debye model defines an exponential decay for the single-particle orientation time correlation function c(t) with rotational correlation time θ:

$$c(t) = \exp(-t/\theta), \qquad (13)$$

$$\theta = \frac{1}{6\,d_r}, \qquad (14)$$

where d_r is the rotational diffusion coefficient. d_r is coupled with the shear viscosity η at temperature t using the dse relationship [8, 9]:

$$d_r = \frac{k_b\,t}{\eta\,v_h} \qquad (15)$$

where k_b is the boltzmann constant. the hydrodynamic volume v_h is related to the particle volume v through a factor that depends on the shape of the reorienting particle and the boundary conditions. the particle volume v can be calculated by measuring the rotational diffusion coefficient d_r once the relation between v_h and v is formulated; for example, the formula for a sphere, v_h = 6v, is applied for the stick boundary conditions. the dse relationship is known to be effective at the molecular length scales of low-viscosity liquids. consequently, the hydrodynamic volume v_h of a fluorophore can be determined using Δφ, y_ac, y_dc, ω, and τ on the basis of eqs. (12), (14), and (15). when the sphere approximation is applied to a fluorophore–nanoparticle conjugate, the diameter can be calculated from v_h.
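under the sphere approximation (v_h = 6v), eqs. (14) and (15) combine to θ = πηd³/(6 k_b t), which can be inverted to size the particle from a measured correlation time. a minimal sketch, using water at 293 k as in section 4.1:

```python
import numpy as np

KB = 1.380649e-23                     # boltzmann constant, j/k

def theta_sphere(d, eta, temp):
    """theta = pi * eta * d**3 / (6 * kb * t), stick boundary, sphere."""
    return np.pi * eta * d ** 3 / (6.0 * KB * temp)

def diameter_from_theta(theta, eta, temp):
    """invert the sphere relation to size the particle from theta."""
    return (6.0 * KB * temp * theta / (np.pi * eta)) ** (1.0 / 3.0)

eta, temp = 1.0e-3, 293.0             # water viscosity (pa*s), temperature (k)
theta = theta_sphere(1.1e-9, eta, temp)   # correlation time of a 1.1 nm sphere
d = diameter_from_theta(theta, eta, temp) # round trip recovers 1.1 nm
```

the cube-root dependence means a modest error in θ translates into a three-times-smaller relative error in the recovered diameter.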
this reflects the excited-state lifetime and intrinsic rotational diffusion properties of the fluorophore, as well as the modulation frequency.

3. experimental setup

we developed a rotational diffusion coefficient measurement system using an fp method, as shown in figure 2. an ar+ laser (wavelength: 488 nm) is the polarized light source, and it is coupled to an acousto-optic modulator (aom). before coupling to the aom, the polarization direction is oriented using a half-wave plate (1/2 wp) to improve the diffraction efficiency of the aom. high-speed light amplitude modulation (up to 80 mhz) can be achieved in the unit, which consists of the aom, a lens (l), and an iris. the polarization direction of the input signal is then oriented by a half-wave plate (1/2 wp), just as with the polarization direction of i∥. the light is incident on the sample via a half mirror (hm) and an objective lens, and the sample is excited using linearly polarized light. the emission is relayed through the beam displacer, which divides the fluorescence polarization signals oriented parallel (i∥) and perpendicular (i⊥) to the excitation beam polarizer. the fluorescence signals are finally relayed to the image intensifier. the image of the orthogonal components of the fluorescence signal is enhanced on the image intensifier and then relayed to a ccd. images of both the horizontally (i∥) and vertically (i⊥) polarized components are analyzed to acquire the fluorescence anisotropy. the modulation of the gain of the image intensifier corresponds to the modulation of the input signal amplitude, and the phase decay Δφ is the phase difference between the incident light and the gain of the image intensifier. acquisition proceeds with a series of phase shifts to apply a first-order photobleaching compensation. a data series is processed to yield the frequency-dependent amplitudes, along with the phase shift between the excitation light and the emitted light in the two orthogonal polarization directions.
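the frequency-domain processing described above amounts to fitting a sinusoid to each polarized channel to extract its dc level, ac amplitude, and phase. a minimal sketch with a synthetic detector trace (the 60 mhz modulation frequency is taken from section 4.1; the signal values and the 0.3 rad phase are illustrative):

```python
import numpy as np

def fit_sinusoid(t, y, omega):
    """linear least-squares fit of y = dc + a*sin(wt) + b*cos(wt);
    returns (dc level, ac amplitude, phase in rad)."""
    design = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    dc, a, b = np.linalg.lstsq(design, y, rcond=None)[0]
    return float(dc), float(np.hypot(a, b)), float(np.arctan2(b, a))

omega = 2.0 * np.pi * 60e6                 # 60 mhz modulation, as in sec. 4.1
t = np.linspace(0.0, 5.0 / 60e6, 500)      # five modulation periods
y = 10.0 + 2.0 * np.sin(omega * t + 0.3)   # synthetic channel, 0.3 rad phase decay

dc, ac, phi = fit_sinusoid(t, y, omega)
tau = np.tan(phi) / omega                  # eq. (12): tan(dphi) = omega * tau
```

fitting both channels this way yields i∥ac, i⊥ac, i∥dc, and i⊥dc, from which y_ac and y_dc of eq. (4) follow as simple ratios.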
the polarized emission components are modulated at the same frequency but phase shifted relative to each other.

4. experimental results
4.1. rotational correlation time for fluorophore

to measure the particle sizes, precise measurement of the rotational diffusion coefficient, which corresponds to the rotational correlation time, is required, as shown in eq. (14). we performed fundamental experiments to verify the feasibility of precise measurement of the rotational correlation time for the fluorophore [11] without labeling gold nanoparticles.

figure 2. experimental setup (ar+ laser, half-wave plates, aom, half mirror, objective lens, temperature-controlled sample, emission filter, tube lens, beam displacer, image intensifier, relay lens, and ccd, driven by phase-locked function generators).
figure 3. ccd images of fluorescence signals.

in the experiment, a beam displacer is used to divide the fluorescence light into the parallel and perpendicular components. as shown in figure 3, both components of the signal are observed in the ccd camera after the fluorescence light passes through the beam displacer. the upper component is polarized in the horizontal (h) direction, and the lower component is polarized in the vertical (v) direction. to check the efficiency of both sides, two measurements are required. first, the polarization angle of the excitation light is adjusted parallel to the horizontal direction and an image is recorded, as shown in figure 3(a); the intensities of the horizontal and vertical signals are denoted by i_hh and i_hv, respectively. next, the polarization angle of the excitation light is adjusted parallel to the vertical direction and another image is recorded; the intensities of the horizontal and vertical signals against the orthogonal direction of polarization are denoted by i_vh and i_vv, respectively, as shown in figure 3(b).
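with the four intensities defined above, the channel-efficiency correction can be sketched using the common l-format g-factor convention g = i_hv/i_hh (this standard convention plays the same role as the g factor of eq. (16), though it may differ in detail from the exact definition used in the text; the intensities below are synthetic, chosen so the v channel is 20 % more efficient than the h channel):

```python
def g_factor(i_hv, i_hh):
    """l-format channel calibration: under h excitation both emission
    components sample the same true signal, so g = i_hv / i_hh."""
    return i_hv / i_hh

def corrected_anisotropy(i_vv, i_vh, g):
    """anisotropy with the perpendicular channel rescaled by g."""
    return (i_vv - g * i_vh) / (i_vv + 2.0 * g * i_vh)

# synthetic intensities: true parallel/perpendicular signals 3 and 1
# (true r = 0.4), with the v channel 20 % more efficient than the h one
i_vv, i_vh = 3.6, 1.0
i_hv, i_hh = 2.4, 2.0
g = g_factor(i_hv, i_hh)                 # recovers the 1.2 efficiency ratio
r = corrected_anisotropy(i_vv, i_vh, g)  # recovers the true anisotropy 0.4
```

without the correction (g = 1), the same data would give a biased anisotropy, which is why the two-orientation calibration measurement is taken first.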
fifteen images are taken for each case. depending on the direction of polarization, the efficiency of the light passing through may differ. therefore, a calibration factor g is needed to measure i∥ and i⊥ from the two signal components. we calculated the g factor for ivh; thus, when excitation light in the v direction is used, ivv is used directly in our calculation, and ivh times g is used for the h direction of i⊥. the g factor for ivh is obtained as follows:

g = (ivv ihv) / (ivh ihh) . (16)

the modulation frequency of the aom was set to 60 mhz for the following experiment. the ccd exposure time was 400 ms, and the microchannel plate voltage was 760 v. the modulated fluorescence signals obtained by varying the phase φ are shown in figure 4. φ, yac, and ydc are determined from the fitting curves of both fluorescence signals. first, the fluorescence anisotropy as a function of the time t is evaluated using the developed system. figure 5 shows the linear variation in the rotational correlation time versus the viscosity of the solution. three solutions were prepared for measuring the rotational correlation time of a fluorophore by mixing water with glycerin at 30 wt%, 50 wt%, and 60 wt%. the resulting viscosities were 2.5 mpa·s, 6.0 mpa·s, and 10.8 mpa·s at 293 k, respectively. we also used water, which has a viscosity of 1.0 mpa·s at 293 k, as a solution. the fluorophore was alexa fluor 488 (invitrogen corp.), which is the same size as fluorescein. the fluorescence lifetime τ varies, with values of 4.1 ns in water, 3.8 ns in 30 wt% glycerin, 3.6 ns in 50 wt% glycerin, and 3.1 ns in 60 wt% glycerin. if we apply the sphere approximation for the fluorophore, the theoretical value of the rotational correlation time, which depends on the particle size, can be calculated from eqs. (14) and (15) as follows:

τ = π η d³ / (6 kb t) . (17)

the theoretical value is plotted according to the solution temperature t (293 k) and nanoparticle diameter d (1.1 nm). according to eq.
(17), the rotational correlation time of the fluorophore agrees well with the value for a nanoparticle with a diameter of 1.1 nm under the non-slip boundary condition. this value is close to that of fluorescein, whose size is estimated to be 1.0 nm [12], [13]. the size difference is considered to be an effect of hydration of the fluorophore in the solution. from the above results, the rotational correlation time can be precisely measured using the developed system.

figure 4. parallel and perpendicular polarized components of the modulated fluorescence signal.

figure 5. rotational correlation time of the fluorophore versus viscosity, with the theoretical value for a 1.1 nm particle.

(equation (18): closed-form expression for the rotational correlation time in terms of the measured parameters φ, yac and ydc. equation (19): the corresponding expression for the rotational diffusion coefficient dr, the reciprocal of the rotational correlation time.)

4.2. rotational diffusion coefficient for gold nanoparticle with fluorescent dna probe

to evaluate the diameter of a gold nanoparticle, the rotational diffusion coefficient of the nanoparticle with a fluorescent dna probe is evaluated using the developed system. gold nanoparticles with an average diameter of 8.2 nm were prepared as the standard sample for measurement of the rotational diffusion coefficient. we evaluated the rotational diffusion coefficient of the gold nanoparticles directly with the fluorescent dna probe. the fluorescent probe was connected to the gold nanoparticles via double-strand dna consisting of adenine with a length of 23 bases.
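the stokes-einstein-debye relation of eq. (17), and its inversion for particle sizing, can be checked numerically. this is a minimal sketch with the boltzmann constant hard-coded; function names are ours:

```python
import math

KB = 1.380649e-23  # boltzmann constant, J/K

def rotational_correlation_time(d, eta, t):
    """eq. (17): tau = pi * eta * d**3 / (6 * kb * t) for a sphere of
    diameter d (m) in a solvent of viscosity eta (Pa s) at temperature t (K)."""
    return math.pi * eta * d ** 3 / (6.0 * KB * t)

def diameter_from_slope(slope):
    """invert the equivalent linear relation dr = (kb / (pi * d**3)) * (t / eta):
    given the fitted slope of dr versus t/eta, recover the particle diameter d."""
    return (KB / (math.pi * slope)) ** (1.0 / 3.0)

# fluorophore-sized particle (d = 1.1 nm) in water at 293 K -> tau ~ 0.17 ns
tau_water = rotational_correlation_time(1.1e-9, 1.0e-3, 293.0)
# 60 wt% glycerin (10.8 mPa s): tau scales linearly with viscosity
tau_glyc = rotational_correlation_time(1.1e-9, 10.8e-3, 293.0)
# round trip: slope of dr versus t/eta for a synthetic 8.2 nm gold particle
d_est = diameter_from_slope(KB / (math.pi * (8.2e-9) ** 3))
```

the linear scaling of tau with viscosity reproduces the behaviour plotted in figure 5, and the slope inversion is the sizing idea applied to the gold nanoparticles.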
the length of the double-strand dna is estimated to be 9.6 nm, as shown in figure 6. the dna acts as a spacer to avoid quenching of the fluorophore [14]-[16], which is located near the gold nanoparticle. the 23-base double-strand dna is stiff enough to avoid bending [17], [18] and to maintain the distance between the gold nanoparticle and fluorophore. figure 7 shows the rotational diffusion coefficient of the fluorophore and of the gold nanoparticle with the fluorescent dna probes versus the solution parameter t/η. the rotational correlation time τ is calculated using eq. (18) and the data for the parameters φ, yac, and ydc. the rotational diffusion coefficient can be calculated using the reciprocal of τ, as shown in eq. (19). table 1 shows the measurement parameters used to evaluate the rotational diffusion coefficient. as shown in figure 7, the rotational diffusion coefficient, which indicates the speed of rotational motion, increases linearly with t/η. this relation agrees with eq. (17), which shows a linear relation of the rotational correlation time to η/t. the slope of the graph for the rotational diffusion coefficient of the fluorescent dna probe is four times higher than that of the gold nanoparticle (average diameter, 8.2 nm) with the fluorescent dna probe. a large difference in the rotational diffusion coefficient thus appears between the fluorescent dna probe and the gold nanoparticle with the probe. this enables us to estimate the size of a gold nanoparticle quantitatively using the slope of the rotational diffusion coefficient against t/η. to evaluate the resolution of the particle sizing obtained using the proposed method, further experiments for various diameters of gold nanoparticles are needed. we are now preparing samples of gold nanoparticles with average diameters of 5 nm, 10 nm, 15 nm, and 20 nm.

5. conclusions

we developed a nanoparticle sizing system using an fp method.
this system can precisely determine the rotational correlation time of nanoparticles with a fluorophore. the results indicate that we can determine the size of a nanoparticle using the dse relation when the particle shape approximates a sphere. we also investigated the rotational correlation time of a fluorophore with gold nanoparticles that were smaller than 10 nm. this indicates that the measurement results for the rotational correlation time of a fluorophore-labeled gold particle can be used to estimate the size of gold nanoparticles smaller than 10 nm.

acknowledgment

this research was supported in part by a grant-in-aid for scientific research from the ministry of education, culture, sports, science and technology of japan (grant-in-aid for exploratory research 21686015).

figure 6. distance between gold nanoparticle and fluorescent probe.

figure 7. rotational diffusion coefficients for fluorophore and gold nanoparticle with fluorescent dna probe.

table 1. measurement parameters.

temperature [k]  viscosity [mpa·s]  probe τ [ns]  probe yac  probe+gold τ [ns]  probe+gold yac
293              1.002              2.0           1.225      1.0                1.457
298              0.890                            1.199                         1.435
303              0.797                            1.172                         1.427
308              0.719                            1.151                         1.402
313              0.653                            1.135                         1.396

references

[1] t. gao, q. li, t. wang, "sonochemical synthesis, optical properties, and electrical properties of core/shell-type zno nanorod/cds nanoparticle composites," chem. mater., vol. 17, pp. 887-892, 2005.
[2] r. g. freeman, k. c. grabar, k. j. allison, r. m. bright, j. a. davis, a. p. guthrie, m. b. hommer, m. a. jackson, p. c. smith, d. g. walter, m. j. natan, "self-assembled metal colloid monolayers: an approach to sers substrates," science, vol. 267, no. 5204, pp. 1629-1632, 1995.
[3] s. sun, c. b. murray, d. weller, l. folks, a. moser, "monodisperse fept nanoparticles and ferromagnetic fept nanocrystal superlattices," science, vol. 287, no. 5460, pp. 1989-1992, 2000.
[4] r.
pecora, "dynamic light scattering measurement of nanometer particles in liquids," j. nanopart. res., vol. 2, no. 2, pp. 123-131, 2000.
[5] l. c. gontard, d. ozkaya, r. e. dunin-borkowski, "a simple algorithm for measuring particle size distributions on an uneven background from tem images," ultramicroscopy, vol. 111, no. 2, pp. 101-106, 2011.
[6] k. kinosita jr., s. kawato, a. ikegami, "a theory of fluorescence polarization decay in membranes," biophys. j., vol. 20, no. 3, pp. 289-305, 1977.
[7] b. s. fujimoto, j. m. schurr, "an analysis of steady-state fluorescence polarization anisotropy measurements on dyes intercalated in dna," j. phys. chem., vol. 91, no. 7, pp. 1947-1951, 1987.
[8] f. stickel, e. w. fischer, r. richert, "dynamics of glass-forming liquids. ii. detailed comparison of dielectric relaxation, dc-conductivity, and viscosity data," j. chem. phys., vol. 104, no. 5, pp. 2043-2055, 1996.
[9] p. p. jose, d. chakrabarti, b. bagchi, "complete breakdown of the debye model of rotational relaxation near the isotropic-nematic phase boundary: effects of intermolecular correlations in orientational dynamics," phys. rev. e, vol. 73, 031705, 2006.
[10] r. f. steiner, "fluorescence anisotropy: theory and applications," top. fluoresc. spectrosc., vol. 2, pp. 1-52, 1991.
[11] m. b. mustafa, d. l. tipton, m. d. barkley, p. s. russo, "dye diffusion in isotropic and liquid crystalline aqueous (hydroxypropyl) cellulose," macromolecules, vol. 26, no. 2, pp. 370-378, 1993.
[12] a. h. a. clayton, q. s. hanley, d. j. arndt-jovin, v. subramaniam, t. m. jovin, "dynamic fluorescence anisotropy imaging microscopy in the frequency domain (rflim)," biophys. j., vol. 83, pp. 1631-1649, 2002.
[13] r. d. spencer, g. weber, "measurement of subnanosecond fluorescence lifetimes with a cross-correlation phase fluorometer," ann. n. y. acad. sci., vol. 158, no. 1, pp. 361-376, 1969.
[14] c. s. yun, a. javier, t. jennings, m. fisher, s. hira, s. peterson, b. hopkins, n. o. reich, g. f. strouse, "nanometal surface energy transfer in optical rulers, breaking the fret barrier," j. am. chem. soc., vol. 127, no. 9, pp. 3115-3119, 2005.
[15] j. seelig, k. leslie, a. renn, s. kühn, v. jacobsen, m. van de corput, c. wyman, v. sandoghdar, "nanoparticle-induced fluorescence lifetime modification as nanoscopic ruler: demonstration at the single molecule level," nano lett., vol. 7, no. 3, pp. 685-689, 2007.
[16] s. mayilo, m. a. kloster, m. wunderlich, a. lutich, t. a. klar, a. nichtl, k. kürzinger, f. d. stefani, j. feldmann, "long-range fluorescence quenching by gold nanoparticles in a sandwich immunoassay for cardiac troponin t," nano lett., vol. 9, no. 12, pp. 4558-4563, 2009.
[17] d. porschke, "persistence length and bending dynamics of dna from electrooptical measurements at high salt concentrations," biophys. chem., vol. 40, no. 2, pp. 169-179, 1991.
[18] g. s. manning, "the persistence length of dna is reached from the persistence length of its null isomer through an internal electrostatic stretching force," biophys. j., vol. 91, no. 10, pp. 3607-3616, 2006.
low-cost real-time motion capturing system using inertial measurement units

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, pp. 1-9

simona salicone1, simone corbellini2, harsha vardhana jetti1, sina ronaghi3

1 department of electronics, information and bioengineering (deib), politecnico di milano, via giuseppe ponzio 34, 20133, milano, italy
2 department of electronics and telecommunication (det), politecnico di torino, corso duca degli abruzzi 24, 10129, torino, italy
3 department of energy (deng), politecnico di milano, via lambruschini 4a, 20156, milano, italy

section: research paper

keywords: motion-capture; bluetooth low energy; inertial measurement units; digital signal processing; embedded systems

citation: simona salicone, simone corbellini, harsha vardhana jetti, sina ronaghi, low-cost real-time motion capturing system using inertial measurement units, acta imeko, vol. 11, no. 3, article 17, september 2022, identifier: imeko-acta-11 (2022)-03-17

section editor: francesco lamonaca, university of calabria, italy

received may 17, 2022; in final form september 23, 2022; published september 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: simona salicone, e-mail: simona.salicone@polimi.it

1. introduction

human motion reconstruction, also known as motion capture (mo-cap), plays an important role in medical rehabilitation, sports training, and entertainment [1]-[3].
in the field of healthcare, physical therapists use such systems to record patient movements, visualize them in real time, and finally compare the recorded movements throughout the treatment cycle to evaluate efficacy. in entertainment, mo-caps are used to reconstruct human movements for 3-dimensional (3d) game development and for animated scenes in the movie industry. in sports training, mo-caps are beneficial for reconstructing the players' movements for the evaluation of each player individually and as a team, as well as for monitoring each player for automatic estimation and prediction of possible injuries, such as muscle damage caused by player collisions during a period of professional activity [4]. mo-cap solutions use either non-optical sensor-based methods or optical computer-vision methods to reconstruct physical human movements. solutions based on computer-vision methods can be employed only in a controlled environment, and they suffer from inconsistencies related to environmental conditions such as ambient light (colour and illumination problems), object proximity (occlusion), and motion detection in cluttered scenes [5], [6]. alternatively, sensor-based mo-cap methods are usually implemented in the form of wearable devices that use inertial measurement units (imus) based on microelectromechanical systems (mems). in principle, mems-based imus are immune to visible environmental conditions, but they may suffer from orientation drifts over time. the development of inertial-based mo-cap solutions, for both movement tracking and human activity recognition (har), has been a topic of research for many years, mainly due to the constant increase in the availability of new mems devices.
abstract

human movement modeling, also referred to as motion-capture, is a rapidly expanding field of interest for medical rehabilitation, sports training, and entertainment. motion-capture devices are used to provide a virtual 3-dimensional reconstruction of human physical activities employing either optical or inertial sensors. utilizing inertial measurement units and digital signal processing techniques offers a better alternative in terms of portability and immunity to visual perturbations when compared to conventional optical solutions. in this paper, a cable-free, low-cost motion-capture solution based on inertial measurement units with a novel approach to calibration is proposed. the goal of the proposed solution is to bring motion capture to the fields that, because of cost problems, have not taken enough benefit of such technology (e.g., fitness training centers). according to this goal, the necessary requirement for the proposed system is to be low-cost; therefore, all the considerations and all the solutions provided in this work follow this main requirement.

nowadays, many similar commercial mo-cap solutions are available on the market; such products, depending on the specific targeted application, can differ in the maximum number of sensors, in the placement position of the sensors on the human body, and in the user interface and data representation. as an example, the mtw awinda [7] tracker by xsens is a complete wireless human motion tracker able to accurately monitor the joint angles and is targeted at many applications, spanning from rehabilitation to injury prevention in sport and to human-machine interaction in robotics. the system can manage up to 20 sensors with a maximum data rate of 60 hz. the movit system g1 [8] by captiks is another available solution for both indoor and outdoor motion capture and analysis.
this system can acquire up to 16 sensors and is able to provide raw orientation measurements and even animation and video files. the acquisition rate can reach 100-200 hz to track fast movements. for movements with higher dynamics, the wearable 3d motion capture [9] by noraxon can reach a measurement output rate of 400 hz. this system is equipped with 16 sensors suitable for all types of movements, including high-velocity and high-impact conditions. despite the availability of many solutions on the market, these products are usually targeted at the most demanding applications, and their cost remains too high for their use in many other fields. the cost of such commercial mo-cap solutions can range from a few thousand dollars to tens of thousands of dollars. these prices are unfortunately out of budget in many applications. therefore, in this paper, we focus on the development of a low-cost mo-cap solution, which is full-body, multi-sensor and cable-free. hence, to avoid using costly measurement elements, a novel approach for calibration using software-level data analysis solely based on gyroscope measurements is proposed. the system is intended to be used for physical activities that require movements with low angular velocity for a short period of time (e.g., medical rehabilitation, physiotherapy and movement analysis in aged people). a set of experiments is also performed for a preliminary validation and metrological characterisation of the proposed solution.

2. developed system

this section explains the architecture of the proposed solution. the hardware and software specifications of the solution are highlighted, and the proposed method for gyroscope calibration is explained. finally, a set of experiments is defined to demonstrate the functionality of the system and to assess the effectiveness of the employed method in reducing drift errors during the attitude representation process.

2.1.
system architecture

figure 1 briefly shows some possible architectures of mo-cap solutions; in particular, the red path indicates the method selected for our solution. four key decisions were made in choosing the implementation method, and they are explained below. the reason for choosing a non-optical method is to provide portability and immunity to visual environmental perturbations. in fact, for applications that require the activity to be performed in an open area, it is inherently difficult to control environmental parameters such as lighting. mems-based imus are usually a combination of a low-cost accelerometer, a gyroscope and, in some circumstances, a magnetometer. in a common navigation system, all three available inertial sensors (i.e., accelerometers, magnetometers and gyroscopes) are employed for both the preliminary calibration and the navigation itself. in particular, during the navigation, the accelerometer and the magnetometer are necessary to obtain a geo-referenced frame of coordinates and to compensate for the drifts inevitably arising from the integration of the gyroscope signals. however, the use of the magnetometer may decrease the accuracy in indoor environments due to the usual presence of local magnetic-field perturbations. on the other hand, the accelerometer may increase the measurement noise, since the body acceleration is superimposed on the detected gravity vector. fortunately, in most human movement measurement applications, a geo-referenced frame is not necessary, and even the measurement of angle changes over short time intervals (e.g. a few minutes) with respect to a starting position is usually sufficient (e.g. the patient is asked to start the movement from a reference position). for these reasons, the authors propose a robust measurement system solely based on the use of gyroscopes and a simple calibration procedure suitable for achieving reasonably low drift and good angle accuracy over intervals of a few minutes.

2.2.
sensor placement

considering a wearable motion-capture product, the factors to be considered are accurately indicated by marin et al. [10]. the placement factor is crucial to avoid any friction and displacement of the sensors during the various activities. even displacements during movements with a low angular velocity can abruptly decrease the accuracy of the representation. therefore, the outermost places right above the joints and the flat areas of the body are preferably used. considering the application of medical rehabilitation, it is necessary to track the position of the bones that contribute to locomotion and basic physical activities. the proposed solution uses 15 wearable sensors for full-body motion capture. these sensors are attached to the front and back side of the human body in the order illustrated in figure 2, where the red dots refer to the sensors and the orange lines indicate the correct positioning of the sensors. additionally, the sensor numbers, names and the measured parameters are included in table 1. furthermore, as shown in figure 3, the fixed-fabric method is used for the attachment factor. in particular, 15 fabric elastic bands have been hand-made, with different circumferences according to the body parts where the wearable devices are intended to be installed. furthermore, on each elastic band, a pocket with a size of 5 cm × 5 cm has been sewn to contain the sensing element, whose dimensions are 3 cm × 3 cm × 1 cm.

figure 1. range of possible implementation methods.

2.3. hardware design

to implement multiple wireless measuring nodes in the form of wearable devices that can perform online software-level calibration, commercially available modules, also known as nrftag (or sensor_tag), were used, as illustrated in figure 4, which also shows the sensor base and holder used for sensor calibration.
in particular, the base has been specifically designed and 3d printed by the authors. the main specifications of the nrftag are summarized in table 2. from each sensor tag, the mems-based 6-dof imu mpu6050 [11] is used as the sensing element, and the nrf51802 system on a chip (soc) [12] is used as the processing unit and the bluetooth low energy™ (ble) communication module. the manufacturer-recommended power supply for the sensor tag is a cr2032 non-rechargeable coin-cell battery with a 3 v output. considering the portability design criteria, a rechargeable wearable device is more convenient from the perspective of the

figure 2. sensor attachment to the human body and numbering of the sensors. on the left: front side. on the right: back side.

figure 3. the employed sensor (in the green box) and the hand-made elastic belt used to fix the sensor on the body (the wrist in this figure).

table 1. sensor names and the measured parameters.

#   sensor        measurand
1   forehead      rotation and side-bend relating to the origin
2   chest         anterior/posterior tilt and lateral tilt to left/right
3   left arm      abduction/adduction, flexion/extension
4   right arm     abduction/adduction, flexion/extension
5   left wrist    abduction/adduction, flexion/extension
6   right wrist   abduction/adduction, flexion/extension
7   left hand     abduction/adduction, flexion/extension
8   right hand    abduction/adduction, flexion/extension
9   sacral (back) rotation and side-bend relating to the origin
10  left knee     flexion/extension
11  right knee    flexion/extension
12  left ankle    plantar-flexion/dorsi-flexion and supination/pronation
13  right ankle   plantar-flexion/dorsi-flexion and supination/pronation
14  left ankle    rotation and side-bend
15  right ankle   rotation and side-bend

figure 4. top view of the sensor_tag module (in the green square) attached to the base, which has been specifically designed and 3d printed for the sensor calibration.

table 2. brief specifications of the nrftag.
product name: nrftag (sensor_tag)
application: wearable devices
supply voltage: 3 v coin-cell battery (2032 package)
embedded modules: nrf51802 (arm® cortex™-m0), mpu6050 imu, bmp280 barometric pressure sensor, ap3216 ambient light sensor
size: circular shape, d = 30 mm
tx power: +4 dbm ~ -20 dbm
on-air data rate: 250 kbps, 1 mbps or 2 mbps
modulation: gfsk
rx current: 9.5 ma
tx current at +4 dbm: 10.5 ma

user. for this reason, a rechargeable circuit for the wearable devices was designed and implemented. although it would have been possible to use a rechargeable coin-cell battery with the same package size and output voltage (e.g. the ml2032 li-al coin-cell battery), our solution uses a lithium-polymer (li-po) battery as the power supply. the reasons for this preference were the variety of available options in terms of supply capacity, the higher charging rate, and the simplicity of the charging-circuit implementation for li-po batteries compared to other types. during this hardware modification, additional components such as the tp4056 battery charger module [13] and the ce6208 dc/dc buck converter [14] were used. the central receiver in this project works as a communication bridge between the sensor tags and the personal computer. this component should receive the measurement data, construct a data structure, and transfer the data to a personal computer. for this purpose, an nrf52840 development kit manufactured by nordic semiconductor™ was chosen, as illustrated in figure 5 and summarized in table 3. this development kit enabled us to interact with the 15 measurement nodes concurrently using the ble communication protocol for real-time measurement [15]. furthermore, the approximate costs of the main components of the system are reported in table 4, where the items indicated with (*) are only required for the rechargeable configuration.
these components are available in the commercial market, and it is of course possible to further reduce the cost of the system in case of bulk production. if we compare the obtained total price (in both of the two proposed solutions) with the costs of the available commercial systems (which, as reported in the introduction, range from a few thousand dollars to tens of thousands of dollars), the considerably high cost savings can be immediately understood.

2.4. attitude representation

despite different possible methods of attitude representation, which are preferred according to the application [16], [17], the most common way of representing the attitude of a rigid body in 3d space is using the euler angles¹. euler angles represent rotation by performing a sequential operation with respect to a particular sequence and angle value. by representing the 3d space with three perpendicular axes denoted as i, j, k, it is possible to rotate any rigid body by angles φ, θ, ψ with respect to the 3d axes. rotations based on euler angles can be represented using a rotation vector, as indicated in equation (1):

u := [φ θ ψ] (1)

considering equation (2), where r is a rotation operation along a single axis, r_ijk is a rotation function that consists of the product of three individual rotations along each 3d axis, performed sequentially:

r_ijk := r_i(φ) r_j(θ) r_k(ψ) (2)

by denoting the i, j and k axes as 1, 2 and 3 respectively, the sequence performed in equation (2) is [1,2,3]. this sequence is widely used for applications involving the representation of gyroscopic spinning motions of a rigid body. although euler angles are widely used because of their easy-to-understand mathematical expression, employing them might introduce a major drawback, also referred to as singularities, during attitude representation.

¹ euler angles were introduced by leonhard euler in the 18th century.
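the sequential composition of eq. (2), and the degeneracy just mentioned, can be illustrated numerically. this is a sketch with numpy, under the assumption that axes 1, 2, 3 are x, y, z with the usual right-handed rotation matrices:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def r_123(phi, theta, psi):
    """sequential [1,2,3] rotation of eq. (2): R = Rx(phi) @ Ry(theta) @ Rz(psi)."""
    return rot_x(phi) @ rot_y(theta) @ rot_z(psi)

# with theta = 90 deg the first and third rotations act about the same
# physical axis, so only phi + psi matters: the two parameter sets below
# produce the same rotation matrix (one degree of freedom is lost)
R1 = r_123(0.3, np.pi / 2, 0.5)
R2 = r_123(0.3 - 0.2, np.pi / 2, 0.5 + 0.2)
```

the collapse of two parameters into one at θ = 90° is exactly the singularity discussed in the text.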
these singularities, which are also known as gimbal lock, might occur during specific sequence operations. gimbal lock is basically the loss of one or more degrees of freedom (dof) when two or more axes become parallel to each other. in particular, with a [1,2,3] sequence, the gimbal-lock problem arises when the rotation along the second axis (θ) becomes an odd multiple of 90° (θ = π/2 + n π). in order to overcome the issues regarding singularities, it is possible to use the unit quaternions² for attitude representation. the unit quaternions are a four-component complex numerical system mostly used in pure mathematics but with practical uses in applied science, such as navigation systems and attitude representation in 3d space. unlike the euler angles, the unit-quaternion rotation vector is composed of four components. the first component q0 := w is a scalar value related to the rotation angle, while the following three parameters form a vector q1:3 := (x, y, z) that indicates the rotation axis, as represented in equation (3):

q_{w,x,y,z} = [q0 q1 q2 q3]ᵀ = [q0 ; q1:3] (3)

one key advantage of the unit quaternions compared to the euler angles is that, in the case of the unit quaternions, the attitude representation process is not sequential. hence, the unit quaternions do not suffer from singularities. therefore, in our

² quaternions were introduced by william rowan hamilton in the 19th century.

figure 5. central receiver development kit (nrf52840 dk).

table 3. brief specifications of the central receiver.

product name: nrf52840dk
application: central receiver
supply voltage: 1.7 ~ 5.5 v
processor: nrf52840 (arm® cortex™-m4)
size: rectangular shape (135 × 63 mm²)

table 4. price of the single components and total price of the system (for the two proposed solutions).
item                      number of employed items   unit price (€)
nrf52840 dk               × 1                        59
sensor_tag                × 15                       16
li-po battery (*)         × 15                       3
tp4056 charger (*)        × 15                       1
dc/dc buck converter (*)  × 15                       1
total price: 299
total price (rechargeable): 374
(*) required only for the rechargeable configuration

method of representation, the unit quaternions are preferred over the euler angles. in addition, since the unit quaternions use only complex multiplications, they require fewer hardware resources compared to the euler angles, which require trigonometric function evaluations [16]. although the unit quaternions benefit from a more robust mathematical derivation and no singularities, they are not easy to interpret with a physical meaning. hence, in order to report the data to the end user, we preferred to convert the unit-quaternion values to the euler angles only when a numerical value representation is needed. the conversion from the unit quaternions to the euler angles is indicated in equation (4) (and the inverse conversion in equation (5)). it is important to note that this conversion depends on the euler angles' sequential system, which in the case of this solution is [1,2,3].

r(u)_{1,2,3}:
φ = atan2(2 q2 q3 + 2 q0 q1, q3² − q2² − q1² + q0²)
θ = −asin(2 q1 q3 − 2 q0 q2)
ψ = atan2(2 q1 q2 + 2 q0 q3, q1² + q0² − q3² − q2²) (4)

q_{w,x,y,z} = [q0 q1 q2 q3]ᵀ:
q0 = cφ/2 cθ/2 cψ/2 + sφ/2 sθ/2 sψ/2
q1 = sφ/2 cθ/2 cψ/2 − cφ/2 sθ/2 sψ/2
q2 = cφ/2 sθ/2 cψ/2 + sφ/2 cθ/2 sψ/2
q3 = cφ/2 cθ/2 sψ/2 − sφ/2 sθ/2 cψ/2 (5)

where c and s are short for the cosine and sine functions respectively, atan2 stands for the 2-parameter inverse tangent function, and asin is the inverse sine function.

2.5. calibration method

low-cost imu devices are unfortunately affected by important systematic errors, mainly due to gain errors and to axis misalignments, which may lead to poor accuracy, especially when the sensor has to deal with accelerations and rotations that frequently change axes, as expected in human movement measurements.
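the quaternion-to-euler conversion of eq. (4) and its inverse, eq. (5), translate directly into code. this is a minimal sketch of the standard conversion pair for this sequence convention; the function names are ours:

```python
import math

def quat_to_euler(q0, q1, q2, q3):
    """unit quaternion -> (phi, theta, psi), following eq. (4)."""
    phi = math.atan2(2*q2*q3 + 2*q0*q1, q3*q3 - q2*q2 - q1*q1 + q0*q0)
    theta = -math.asin(2*q1*q3 - 2*q0*q2)
    psi = math.atan2(2*q1*q2 + 2*q0*q3, q1*q1 + q0*q0 - q3*q3 - q2*q2)
    return phi, theta, psi

def euler_to_quat(phi, theta, psi):
    """(phi, theta, psi) -> unit quaternion, following eq. (5)."""
    cf, sf = math.cos(phi / 2), math.sin(phi / 2)
    ct, st = math.cos(theta / 2), math.sin(theta / 2)
    cp, sp = math.cos(psi / 2), math.sin(psi / 2)
    return (cf*ct*cp + sf*st*sp,
            sf*ct*cp - cf*st*sp,
            cf*st*cp + sf*ct*sp,
            cf*ct*sp - sf*st*cp)

# round trip away from the theta = +/-90 deg singularity
q = euler_to_quat(0.4, -0.7, 1.1)
angles = quat_to_euler(*q)
```

the round trip is exact for θ in (−90°, 90°); at the singularity only the quaternion form remains unambiguous, which is the reason given in the text for preferring quaternions internally.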
a calibration therefore has to be performed in order to achieve acceptable results. the main sources of inaccuracy in the use of the gyroscope lie in the gain error (the offset error is present as well, but it is not of concern, since it can easily be removed by collecting some initial measurements with the sensor lying in a static position) and in the axis misalignments (i.e. the sensitivity axes of the three gyroscopes are not exactly orthogonal to each other). assuming that the sensor x-axis is correctly aligned with the reference frame x-axis (more properly, that the reference frame x-axis is chosen parallel to the actual sensor x-axis), and assuming that the reference y-axis is chosen so that the sensor y-axis lies in the reference frame x-y plane, the measured signals can be related to the actual sensor rotation by the following matrix equation:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} a' & 0 & 0 \\ b' & c' & 0 \\ d' & e' & f' \end{pmatrix} \cdot \begin{pmatrix} X_s \\ Y_s \\ Z_s \end{pmatrix} \tag{6}$$

where $X_s$, $Y_s$ and $Z_s$ represent the actual angular speeds with respect to the reference frame, and $X$, $Y$, $Z$ represent the signals measured by the sensor along its non-orthogonal axes and affected by gain errors. the sensor angular speed can be obtained from the measured signals by inverting the previous equation, whose matrix maintains a triangular shape:

$$\begin{pmatrix} X_s \\ Y_s \\ Z_s \end{pmatrix} = \begin{pmatrix} a & 0 & 0 \\ b & c & 0 \\ d & e & f \end{pmatrix} \cdot \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} \tag{7}$$

considering a rotation with angular speed $\omega$ about an arbitrary axis, it can be written:

$$X_s^2 + Y_s^2 + Z_s^2 = \omega^2 \tag{8}$$

and therefore:

$$a^2 X^2 + (b X + c Y)^2 + (d X + e Y + f Z)^2 = \omega^2 \tag{9}$$

rearranging the terms:

$$X^2 (a^2 + b^2 + d^2) + Y^2 (c^2 + e^2) + Z^2 f^2 + 2 X Y (b c + d e) + 2 X Z (d f) + 2 Y Z (e f) = \omega^2 , \tag{10}$$

which, with the following six definitions:

$$\alpha = a^2 + b^2 + d^2 \quad \beta = c^2 + e^2 \quad \gamma = f^2 \quad \delta = b c + d e \quad \epsilon = d f \quad \zeta = e f , \tag{11}$$

leads to the following compact matrix system of equations that collects all the applied rotations:

$$\begin{pmatrix} X_1^2 & Y_1^2 & Z_1^2 & 2 X_1 Y_1 & 2 X_1 Z_1 & 2 Y_1 Z_1 \\ X_2^2 & Y_2^2 & Z_2^2 & 2 X_2 Y_2 & 2 X_2 Z_2 & 2 Y_2 Z_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_N^2 & Y_N^2 & Z_N^2 & 2 X_N Y_N & 2 X_N Z_N & 2 Y_N Z_N \end{pmatrix} \cdot$$
$$\begin{pmatrix} \alpha \\ \beta \\ \gamma \\ \delta \\ \epsilon \\ \zeta \end{pmatrix} = \begin{pmatrix} \omega_1^2 \\ \omega_2^2 \\ \vdots \\ \omega_N^2 \end{pmatrix} \tag{12}$$

the six unknown parameters (from $\alpha$ to $\zeta$) can be obtained by inverting the previous equation, provided that at least six rotations with known speeds ($\omega_1$ to $\omega_N$) have been applied. eventually, the required coefficients can be quickly obtained in the following order:

$$f = \sqrt{\gamma} \quad d = \epsilon / f \quad e = \zeta / f \quad c = \sqrt{\beta - e^2} \quad b = (\delta - d e)/c \quad a = \sqrt{\alpha - b^2 - d^2} \tag{13}$$

this calibration has easily been implemented on the sensor micro-controller, using the svd matrix decomposition to solve the system of equations (12). however, applying a rotation with a known speed to the sensor is not practical and would require complex equipment. fortunately, considering practically applicable sets of axes during calibration, the previous equations, by integration, also hold with angles for rotations about any arbitrary fixed axis. therefore, the proposed approach consists in rotating the sensor about a set of unknown axes for an integer number of turns, integrating the sensor signals and substituting $(2 k \pi)^2$ for $\omega^2$ in equation (12). to manage the calibration procedure, a base for rotating the sensor about arbitrary axes was necessary; therefore, as already mentioned in section 2.3, a specific base has been designed and 3d printed (the base is visible in figure 4). in addition, a specific virtual app has been developed using the bluetooth version of the miupanel platform (ref. https://miupanel.com). by means of the miupanel platform, a mobile phone can establish a direct ble connection with the sensors and retrieve a graphical panel for controlling the sensor. the developed panel is shown in figure 6: it contains a real-time visualization of the yaw, pitch and roll angles, and some buttons to start the calibration, to acquire multiple rotations and to compute the calibration coefficients. through the panel, it is also possible to store the calibration results in the sensor flash memory.
2.6. embedded software design. in this section, the combination of the central receiver and all the wearable devices, referred to as the acquisition group, is addressed. for the software design, it is necessary to state the desired features and specifications of the solution. the basic requirements regarding the acquisition group of our cable-free mo-cap solution are listed below:
• the central receiver should be able to receive data from all 15 wearable sensors simultaneously and to associate them with the corresponding body part.
• the central receiver should be able to identify the index of the received data for synchronization purposes.
• a lightweight format for the data string communication between the central receiver and the wearable devices should be defined. this data structure, also referred to as the data matrix, should be able to contain the raw measurement data and the necessary attributes, such as the functionality status of the wearable devices.
the first two requirements are associated with the connection establishment, while the rest are related to the measurement data structure. as previously mentioned, the connection protocol employed in the proposed solution is bluetooth™ low energy (ble), which satisfies the requirements of fast connection establishment, short range and high-speed data transfer. the set of instructions executed by the peripherals is shown in figure 7, where the red path indicates the calibration algorithm used to reduce the drifts of the gyroscope measurements. to avoid establishing connections with extraneous devices within the ble range, the central receiver is programmed to search only for this service-characteristic profile. furthermore, in order to distinguish each sensor, the unique peer address identifier of each sensor is associated with the sensor number in table 1. the peripheral communicates data through advertisement packets transmitted every 20 ms.
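because a ble legacy advertising packet carries at most 31 bytes of advertising data, a data matrix longer than that must be split across several advertisements. a minimal sketch of such chunking (the 5-byte per-packet overhead reserved for attributes such as the packet index is our assumption for illustration, not the paper's actual format):

```python
def chunk_payload(payload: bytes, adv_size: int = 31, header: int = 5):
    """split a payload across ble legacy advertising packets.

    adv_size: legacy advertising-data limit (31 bytes);
    header: bytes reserved per packet for attributes such as the
    packet index (an illustrative assumption).
    """
    room = adv_size - header  # payload bytes that fit in one packet
    return [payload[i:i + room] for i in range(0, len(payload), room)]

# a 60-byte data matrix fits in three 26-byte chunks
packets = chunk_payload(bytes(60))
print(len(packets))  # 3
```

with one advertisement every 20 ms, a matrix split over n packets is refreshed roughly every n × 20 ms, which is consistent with the modest quaternion rate reported later in table 6.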
since the attributes and the payload (measurement data) require more bytes than a single ble advertisement packet can carry (31 bytes), the data have to be communicated using multiple advertisement packets. each advertisement packet carries 31 bytes of data and consists of information such as the generic access profile (gap) type, the manufacturer-specific data, the peripheral address and the payload. upon reception of the advertisement packets by the central receiver, the data matrix is structured to be communicated to a personal computer through serial communication. the set of instructions executed by the central receiver is shown in figure 8: the initialization process is executed only when the central receiver is turned on, while the acquisition and communication processes run continuously. 2.7. representation software design. in order to represent the measurement data, representation software in the form of windows applications was developed. these applications use various techniques to interpret the incoming data from the central receiver. each windows application is provided with a graphical user interface (gui) to interact with the professional user. furthermore, these applications were created using the matlab, ni labview and unity game engine software environments. from the data matrix structure, it is possible to extract information such as the sensor number, the new-data availability, the data packet type and the measurement values. as illustrated in figure 9, the general principle of the data interpretation process in all the software environments is to decode the incoming hexadecimal data matrix from the serial interface into single-precision floating-point values according to the ieee 754-2019 standard. the decoded data are then converted into the euler angles and the quaternion values. eventually, the converted angles are represented and stored both numerically and graphically.
figure 6. screenshot of the mobile application designed to manage the calibration process.
figure 7. peripherals' program logic.
figure 8. central receiver's program logic.
figure 9. data interpretation logic.

2.8. experimental procedure. the experiments were conducted separately for each of the representation applications; therefore, the procedures and the necessary considerations regarding each test are explained in this section. 2.8.1. experiment 1 – functionality test. the aim of this experiment is the 3d representation of human physical activities using matlab and the unity game engine. before proceeding to the tests, the experimental modules are listed below, each with a brief description.
• upper-body experiment: these experiments are aimed at capturing the upper-body activities by tracking sensors #1 to #8.
• lower-body experiment: these experiments are aimed at capturing the lower-body activities by tracking sensors #9 to #15.
• full-body experiment: these experiments perform the full-body motion-capture process by employing all 15 sensors.
figure 10 to figure 13 illustrate the functionality of the representation procedure using matlab and the unity game engine. in each figure, the test subject's posture is placed next to the test results for an immediate comparison, while, for the reader's convenience, the positions of the sensors on the body are marked with red circles on the pictures, according to table 1. 2.8.2. experiment 2 – metrological characterisation. this experiment aims to evaluate the type a standard uncertainty. this preliminary evaluation has been done to estimate the standard uncertainty of a single sensor (sensor #2, corresponding to the chest of the test subject) throughout the measurement process.
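the type a evaluation applied here reduces to the experimental standard deviation of the retained static samples, following the gum (jcgm 100:2008); a minimal sketch (function name is ours):

```python
import math

def type_a_uncertainty(samples):
    """experimental standard deviation of the observations and standard
    uncertainty of the mean, per the gum type a evaluation."""
    n = len(samples)
    mean = sum(samples) / n
    # experimental standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return s, s / math.sqrt(n)

# e.g. keep samples 2001..5000 of the 5000-sample static acquisition:
# s, u = type_a_uncertainty(all_readings[2000:5000])
```

applied per axis to the static gyroscope readings, the first returned value corresponds to the "experimental sd of observations" row reported in table 5.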
the experiment was performed by connecting a single sensor tag to the central receiver via ble and by connecting the central receiver to a personal computer through serial communication. the sensor tag was placed in a fixed position (on a piece of sponge, to damp environmental vibrations). then, the sensor tag was turned on and the offset-compensation phase started, which took approximately 10 seconds. the experiment lasted about 9 minutes, at room temperature (23 °c), while errors due to the pcb mounting and cross-axis sensitivity effects were neglected [11]. moreover, the ni labview application was used to record the incoming data in a plain-text file. finally, the plain-text file was converted into an excel worksheet to perform the statistical analysis. 5000 samples were considered for the analysis (500 seconds of data acquisition). the first 2000 samples were then discarded to take into account the offset-compensation phase and the warm-up time of the measuring units; hence, the samples from 2001 to 5000 were used for this analysis. let us consider the measurements of the gyroscope along each of the three axes as a randomly varying variable. hence, the measurement uncertainty of the instrument along each axis can be calculated from the experimental results according to the joint committee for guides in metrology (jcgm) guide to the expression of uncertainty in measurement (gum) [18]. the results of the calculations are provided in table 5, and the measurements obtained with the sensor in a static position are illustrated in figure 14. 2.8.3. experiment 3 – correction method evaluation. the purpose of this experiment was to highlight the impact of the calibration algorithm in reducing the drifts of the gyroscope measurements. to induce drifts in the measuring unit, we placed a sensor on a ruler at a defined position on a table and recorded the measured value for each of the three axes.
then, by grabbing the ruler with the right hand, we performed 15 forward rotations and 15 lateral movements of the right arm in 1 minute. next, the ruler was placed at the same defined position and the mismatch with respect to the initial value was recorded. the above steps were repeated 5 times; therefore, 15 groups of observations along each axis were obtained. finally, to emphasize the impact of the calibration method, we performed this experiment both with and without the calibration algorithm being executed. the impact of the gyroscope calibration algorithm is highlighted in figure 14. as demonstrated, the observations obtained without the correction algorithm are prone to drifts over time during repetitive movements. on the other hand, the gyroscope measurement drifts are bounded to ±1.5° when the calibration algorithm is executed.

figure 10. matlab upper-body experiment with t-pose posture.
figure 11. matlab lower-body experiment with an arbitrary posture.
figure 12. matlab full-body experiment with t-pose posture.
figure 13. unity lower-body (left) and full-body (right) experiments with an arbitrary posture.

table 5. type a evaluation of the standard uncertainty.
parameter | φ (°) | θ (°) | ψ (°)
induced rotation | 0 | 0 | 0
experimental sd of observations | 0.015 | 0.022 | 0.0071

3. discussion. considering the limitations of machine-vision technologies in motion-capture solutions, in this paper we presented a low-cost, non-optical, cable-free, real-time and full-body wearable motion-capture solution using 15 distributed mems-based imus. by using the bluetooth™ low energy wireless communication protocol, we established a low-power and low-cost mo-cap system that fully satisfies the portability design criteria of wearable devices.
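the calibration algorithm evaluated above solves the linear system of equation (12) and back-substitutes equation (13). a numerical sketch (assuming numpy and rotations of known magnitude 2 k π; the function name is ours, and the firmware itself uses an svd-based solver on the micro-controller):

```python
import numpy as np

def calibrate_gyro(XYZ, omega_sq):
    """estimate the gain/misalignment matrix of eq. (7) from n >= 6 rotations.

    XYZ: (n, 3) integrated sensor readings, one row per rotation
    omega_sq: (n,) known squared rotation magnitudes, e.g. (2*k*pi)**2
    """
    X, Y, Z = XYZ[:, 0], XYZ[:, 1], XYZ[:, 2]
    # design matrix of eq. (12)
    M = np.column_stack([X**2, Y**2, Z**2, 2*X*Y, 2*X*Z, 2*Y*Z])
    # least-squares solution for (alpha..zeta)
    alpha, beta, gamma, delta, eps, zeta = np.linalg.lstsq(M, omega_sq, rcond=None)[0]
    # back-substitution of eq. (13)
    f = np.sqrt(gamma)
    d = eps / f
    e = zeta / f
    c = np.sqrt(beta - e**2)
    b = (delta - d * e) / c
    a = np.sqrt(alpha - b**2 - d**2)
    return np.array([[a, 0, 0], [b, c, 0], [d, e, f]])
```

feeding the function noise-free synthetic readings generated from a known triangular correction matrix recovers that matrix exactly, which is a convenient self-check of both the design matrix and the back-substitution order.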
taking into account the broad range of applications of motion-capture devices, our solution was specifically designed to be used for medical rehabilitation at home and in non-professional gyms, where the cost of the system plays a very important role. furthermore, it has been considered that the measurement duration and the angular velocity of the physical movements are low (i.e., physiotherapy and movement analysis in aged people) and that, after each set of exercises, the patient returns to a reference position. in the proposed solution, the angular position is estimated solely on the basis of the gyroscope measurements. since the gyroscopic measurements are prone to angular drifts over time, we proposed a software-level calibration method that is able not only to eliminate the gain errors but also to take into account the axis misalignments. the results of our experiments appear promising, bounding the gyroscope drifts to approximately ±1.5° over a measurement period of 5 minutes. moreover, the experimental results also demonstrated the functionality of the system, and a preliminary attempt to metrologically characterise the measurement system was performed. furthermore, the rotation angles can be further analysed using data processing methods for human activity recognition (har). this application is widely used for medical diagnostic procedures such as gait analysis and core stability assessment. it is therefore possible to implement har by using suitable classification methods such as deep neural networks. table 6 shows a comparison between the proposed system and other available commercial systems. the table clearly shows that the performance and characteristics of our system are comparable with those of other commercial systems, at least over short periods of time. however, the great advantage of the proposed system is represented by its cost, which, as discussed in the introduction and shown in table 4, is one or even two orders of magnitude lower.
this represents a very great advantage of the proposed system, which, for the desired applications of a few minutes, provides performances similar to those of the commercial systems, but at a very low cost.

figure 14. measurements obtained with the sensor in a static position (left) and evaluation of the improvements obtained with the proposed calibration algorithm (right).

table 6. comparison between the proposed system and other available commercial systems.
product | proposed system | mtw awinda [7] | movit g1 [8] | ultium motion [9]
connection | wireless | wireless | wireless | wireless
range (m) | 20 | 20 | 30 | 40
number of sensors | 15 | 20 | 16 | 16
battery life (h) | 10 | 6 | 6 | 10
orientation accuracy (°) | 1.5 (max 5 min) | 1 | 1 | 1
sensor weight (g) | 10 | 16 | 25 | 19
size (mm²) | 30 × 30 | 47 × 30 | 48 × 39 | 44 × 33
angular range (deg/s) | 2000 | 2000 | 2000 | 7000
quaternion rate (hz) | 10 | 60 | 100 | 100

references
[1] l. n. n. nguyen, d. rodríguez-martín, a. català, c. pérez-lópez, a. samà, a. cavallaro, basketball activity recognition using wearable inertial measurement units, proceedings of the xvi international conference on human computer interaction, association for computing machinery, vilanova i la geltrú, spain, 2015, pp. 1-6. doi: 10.1145/2829875.2829930
[2] f. casamassima, a. ferrari, b. milosevic, p. ginis, e. farella, l. rocchi, a wearable system for gait training in subjects with parkinson's disease, sensors (basel), 2014, 14(4), pp. 6229-6246. doi: 10.3390/s140406229
[3] r. a. w. felius, m. geerars, s. m. bruijn, j. h. van dieën, n. c. wouda, m. punt, reliability of imu-based gait assessment in clinical stroke rehabilitation, sensors (basel), 2022, 22(3), pp. 119. doi: 10.3390/s22030908
[4] d. kelly, g. f. coughlan, b. s. green, b. caulfield, automatic detection of collisions in elite level rugby union using a wearable sensing device, sports engineering, 2012, 15(2), pp.
81-92. doi: 10.1007/s12283-012-0088-5
[5] y. z. cheong, w. j. chew, the application of image processing to solve occlusion issue in object tracking, proceedings of matec web of conferences, 2018, pp. 1-10. doi: 10.1051/matecconf/201815203001
[6] g. d. finlayson, colour and illumination in computer vision, interface focus, 2018. doi: 10.1098/rsfs.2018.0008
[7] xsens, mtw awinda. online [accessed 27 september 2022] https://www.xsens.com/products/mtw-awinda
[8] captiks, movit system g1-3d. online [accessed 27 september 2022] http://www.captiks.com/products/movit-system-g1-3d
[9] noraxon usa, ultium motion, 2021. online [accessed 27 september 2022] https://www.noraxon.com/our-products/ultium-motion/
[10] j. marin, t. blanco, j. j. marin, octopus: a design methodology for motion capture wearables, sensors, 2017, 17(8), pp. 1-24. doi: 10.3390/s17081875
[11] invensense, mpu-6000 and mpu-6050 product specification, 2013. online [accessed 27 september 2022] https://invensense.tdk.com/wp-content/uploads/2015/02/mpu-6000-datasheet1.pdf
[12] nordic semiconductor, nrf51802 multiprotocol bluetooth low energy/2.4 ghz rf system on chip product specification, 2016. online [accessed 27 september 2022] https://infocenter.nordicsemi.com/pdf/nrf51802_ps_v1.2.pdf
[13] nanjing top power asic corp., tp4056 1a standalone linear li-ion battery charger with thermal regulation in sop-8. online [accessed 27 september 2022] https://www.mikrocontroller.net/attachment/273612/tp4056.pdf
[14] n.c.e. inc., ce6208 series ultra-fast high psrr 1a cmos voltage regulator. online [accessed 27 september 2022] https://datasheetspdf.com/datasheet/ce6208.html
[15] nordic semiconductor, nrf52840 development kit pca10056 user guide, 2019. online [accessed 27 september 2022] https://infocenter.nordicsemi.com/pdf/nrf52840_dk_user_guide_v1.3.pdf
[16] j. diebel, representing attitude: euler angles, unit quaternions, and rotation vectors, matrix, 2006, pp. 1-35.
[17] h. parwana, m. kothari, quaternions and attitude representation,
arxiv, department of aerospace engineering, indian institute of technology kanpur, india, 2017, pp. 1-19. doi: 10.48550/arxiv.1708.08680
[18] jcgm, gum: guide to the expression of uncertainty in measurement, 2008. online [accessed 27 september 2022] https://www.bipm.org/documents/20126/2071204/jcgm_100_2008_e.pdf

introductory notes for the acta imeko fourth issue 2021
acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 1-2
francesco lamonaca¹
¹ university of calabria, dept. of computer science, modelling, electronic and system, via p. bucci 41c, arcavacata di rende, 87036 (cs), italy
section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko fourth issue 2021, acta imeko, vol. 10, no.
4, article 1, december 2021, identifier: imeko-acta-10 (2021)-04-01
received december 23, 2021; in final form december 23, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org
dear readers, in this issue, too, high-quality and heterogeneous papers are presented, confirming acta imeko as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. i hope you will enjoy your reading. as usual, the general track collects contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko. an interesting technique is proposed by oleksandr vasilevskyi et al. in the paper 'indicators of reproducibility and suitability for assessing the quality of production services' for estimating the probability of the possible appearance of defective products, or of the inconsistency of the production service, on the basis of the suitability and reproducibility indexes of the production process. the accuracy of the proposed estimation methodology, which includes the proposed mathematical apparatus, was estimated on the basis of the correctness index, whose assessment is carried out in accordance with the international standard iso 5725-2:2002. mohammed alktranee et al., in the paper 'simulation study of the photovoltaic panel under different operation conditions', study the effects of temperature distribution on the photovoltaic panel at different solar radiation values and temperatures under various operating conditions in january and july.
a 3d model of the pv panel was simulated using ansys software, on the basis of the various temperature and solar radiation values obtained using mathematical equations. in cognitive radio systems, the estimation of the primary user direction of arrival (doa) is one of the key issues. in order to increase the detection probability, multiple sensor antennas are used and analysed by means of a subspace-based technique. in the paper 'an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems', sala surekha et al. consider a wideband spectrum with sub-channels, where each sub-channel is provided with a sensor for the estimation of the doa. in a practical spectrum sensing process, an interference component also enters the sensing process; to avoid this interference level at the receiver output, the authors used an adaptive learning algorithm known as the normalized least absolute mean deviation (nlamd) algorithm. rosario morello et al. in the paper 'design of a non-invasive sensing system for diagnosing gastric disorders' present a smart egg (electrogastrography) sensing system to non-invasively diagnose gastric disorders. in detail, the system records the gastric slow waves by means of skin surface electrodes placed in the epigastric area. cutaneous myoelectrical signals are thus acquired from the body surface in proximity to the stomach. the electrogastrographic record is then processed. according to the diagnostic model designed by the authors, the system estimates specific diagnostic parameters in the time and frequency domains. it uses the discrete wavelet transform to obtain power spectral density diagrams. the frequency and power of the egg waveform and the dominant frequency components are thus analysed.
the defined diagnostic parameters are compared with the reference values of a normal egg in order to estimate the presence of gastric pathologies through the analysis of arrhythmias (tachygastria, bradygastria and irregular rhythm). for military divers, having a robust, secure and undetectable wireless communication system available is a fundamental need. wireless intercoms using acoustic waves are currently used; these systems, even if reliable, have the limit of being easily identifiable and detectable. visible light can pass through sea water; therefore, light can be used to develop short-range wireless communication systems. to realize secure close-range underwater wireless communication, underwater optical wireless communication (uowc) can be a valid alternative to acoustic wireless communication. uowc is not a new idea, but the problem of the presence of sunlight and the possibility of using near-ultraviolet radiation (near-uv) has not yet been adequately addressed in the literature. in military applications, the possibility of using invisible optical radiation can be of great interest. in the paper 'led-to-led wireless communication between divers' by fabio leccese et al., a feasibility study is carried out to demonstrate that uowc can be performed using near-ultraviolet radiation. the proposed system can be useful for wireless voice communications between military divers as well as amateur divers. moreover, this issue contains extended versions of selected papers presented at some of the most important international and italian metrology events organized in italy: the 2020 imeko tc19 international workshop on metrology for the sea 'learning to measure sea health parameters', guest edited by prof. silvio del pizzo; the international excellence italo gorini ph.d. school, guest edited by prof.
pasquale arpaia and dr. umberto cesaro; the 40th measurement day, jointly organised by the italian group of electric and electronic measurements (gmee) and the italian group of mechanical and thermal measurements (gmtt), guest edited by prof. carlo carobbi, prof. nicola giaquinto and prof. gian marco revel; and the xxix italian national congress on mechanical and thermal measurements, guest edited by prof. alfredo cigada and prof. roberto montanini. the strong italian contribution to this fourth issue is justified by the wish of the italian metrology community to support imeko and its journals. i hope that in the near future acta imeko will receive an equally important contribution from the other imeko member countries and beyond. francesco lamonaca, editor-in-chief

simulation study of the photovoltaic panel under different operation conditions
acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 62-66
mohammed alktranee¹, péter bencs¹
¹ department of fluid and heat engineering, faculty of mechanical engineering and informatics, university of miskolc, miskolc, hungary
section: research paper
keywords: photovoltaic; solar radiation; simulation; temperature
citation: mohammed alktranee, péter bencs, simulation study of the photovoltaic panel under different operation conditions, acta imeko, vol. 10, no. 4, article 12, december 2021, identifier: imeko-acta-10 (2021)-04-12
section editor: francesco lamonaca, university of calabria, italy
received march 28, 2021; in final form september 25, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mohammed alktranee, e-mail: mohammed84alktranee@gmail.com
1. introduction
solar energy is the essential branch of renewable energy used in different applications as an alternative to conventional systems, such as solar electricity generation, solar cooling systems, solar heating, etc. [1]. photovoltaic (pv) panels are one solar energy technology used for electricity generation by means of pv cells made of semiconductor materials. the pv cells convert sunlight (photon energy) into electrical energy, where about 15 % – 20 % of the sunlight is converted to electricity and the rest converts to heat, which influences their performance [2]. different factors influence the pv panel performance: some are related to balance-of-system (subsystem) failures, such as the pv inverter [3]; others regard the operating conditions, cell-cracking problems due to wind or snow pressure, vibrations, and the dust that collects on the pv module, which causes failures along its operating lifetime [4]. the pv panel efficiency is significantly affected by the increase in the pv cell temperature, which causes cell degradation and a shortening of the expected life of the pv module, with a drop in the output power [5]. several studies revealed that the performance of the pv panel is affected by the operating temperature: increasing the pv cell temperature above 25 °c has negative effects on the pv cell yield, as it decreases the open-circuit voltage and then the electrical output of the pv panel [6]-[8]. the pv cell performance and productivity mainly depend on the solar radiation values and the ambient temperature. therefore, at an average temperature of 25 °c and a solar radiation of 1000 w/m², the pv cells give their maximum yield, taken as the reference power produced by the pv cells [9]. standard test conditions (stc) is a concept defined as the performance of pv cells at a temperature of 25 °c and a solar radiation of 1000 w/m².
increasing the temperature of the pv panel one degree above 25 °c reduces the pv cell efficiency by 0.45 % [10]. therefore, various pv models have been developed to predict the pv module power output under different operating conditions [11]. a thermal model was developed to simulate the thermal and electrical performance of pv panels under different operating conditions, with and without cooling. the developed model was sequentially linked with electrical and radiation models to evaluate the pv panels performance.

abstract: an increase in the temperature of the photovoltaic (pv) cells is a significant issue in most pv panel applications. about 15 % – 20 % of the solar radiation is converted to electricity by pv panels, and the rest converts to heat that affects their efficiency. this paper studies the effects of temperature distribution on the pv panel at different solar radiation values and temperatures under different operating conditions in january and july. a 3d model of the pv panel was simulated with ansys software, depending on the various values of temperature and solar radiation obtained using mathematical equations. the simulation results indicate that the pv panel temperature was lower with the lower solar radiation values of january, and the temperature was homogeneous over the pv panel surface. the increase in the solar radiation value and temperature in july caused a heating of the pv panel, with an observed convergence of the maximum and average temperatures of the panel. thus, the pv panel temperature increase is directly proportional to the solar radiation increase, which causes lower performance. cooling the pv panel by passive or active cooling represents the optimum option to enhance performance and avoid increasing the pv cells' temperature.
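the 0.45 %-per-degree figure quoted above corresponds to a common linear efficiency-temperature model; a minimal sketch (the 0.15 reference efficiency is taken from the 15 % – 20 % conversion range mentioned in the introduction, and the function is illustrative, not the authors' model):

```python
def pv_efficiency(t_cell, eta_ref=0.15, t_ref=25.0, beta=0.0045):
    """linear temperature model of pv conversion efficiency.

    the efficiency drops by beta (here 0.45 %) for each degree of
    cell temperature above the reference temperature t_ref (25 degC).
    eta_ref is an assumed reference efficiency at stc.
    """
    return eta_ref * (1.0 - beta * (t_cell - t_ref))

# efficiency at stc versus a hot july cell temperature
print(pv_efficiency(25.0), pv_efficiency(60.0))
```

under this model a cell running at 60 °c in summer loses roughly 16 % of its stc efficiency, which is consistent with the paper's argument that cooling is the optimum option for hot operating conditions.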
The simulation results showed lower PV panel performance with absorbed solar radiation of 800 W/m² and ambient temperatures of 0 °C – 50 °C without cooling, while cooling gave a slight performance increase at an ambient temperature of 25 °C and absorbed radiation from 200 W/m² to 1000 W/m² [11]. The ambient temperature and solar radiation are important parameters that influence the conversion efficiency and power output of PV cells [12]. Another 3D model was simulated with ANSYS software to evaluate the temperature distribution on the PV panel under different climate conditions and thus analyse the panel's thermal behaviour under operating conditions. The results indicate that the increase in PV panel temperature is associated with increased solar radiation intensity and ambient temperature [13]. A further simulation studied the PV panel at constant temperature with various solar radiation values, and vice versa, to predict the PV model performance and compare it with the panel performance under STC. The simulation revealed that a reduction of the solar radiation, even at constant temperature, affects the voltage and current of the PV panel; on the other hand, the panel performance degraded at high temperatures regardless of the solar radiation value. Thus, lowering the temperature of the PV panel contributes to an increase in output power [14]. The present work aims to calculate the temperature distribution on the PV panel at different solar radiation values and ambient temperatures and then determine the optimum operating-condition range of the panel. The simulation used the layer properties of the PV panel and the values of solar radiation, temperature and convective heat transfer coefficient of the model to evaluate the temperature distribution and identify the appropriate range of operating conditions of the PV panel.
The second section discusses the characteristics of the PV panel, the mathematical equations used to predict the solar radiation values, and the main results obtained. The third section discusses the simulation results and compares them with other studies [15], [16] simulated under similar conditions to investigate the effect of ambient temperature and solar radiation on PV panel performance.

2. Methodology
The PV panel used is an AS-6P30 polycrystalline type, consisting of a glass cover, PV cells between two layers of ethylene vinyl acetate (EVA), an aluminium frame, and a Tedlar (PVF) back layer, as shown in Figure 1. Table 1 shows the material properties of the PV components entered in the ANSYS engineering data. The PV panel datasheet was adopted as a reference for comparing the simulation results with the panel values under STC.

2.1. Geometry simulation
According to the manufacturer's datasheet, the PV model geometry was built in SolidWorks with dimensions of 1640 × 992 × 35 mm. The materials of all the PV panel layers are defined from the ANSYS library data, as shown in Figure 1. The model was imported into ANSYS to analyse the panel's temperature distribution, using as inputs the different values of beam solar radiation obtained from the mathematical equations below. Three solar radiation values were used: two corresponding to January and July, estimated for clear-sky radiation, and a third corresponding to STC. The heat flux changes with time, which changes the temperature and hence influences the PV panel's performance. The ambient temperature was fixed at 4 °C for January, 35 °C for July, and 25 °C for STC.
The convective heat transfer coefficient on the panel is 14.8 W/(m² K), calculated using equation (1) [18]

h = 5.7 + 3.8 Vm ,   (1)

where Vm is the wind speed, taken as 2.4 m/s according to data for Miskolc city [19]. The mesh element size of the PV model in the ANSYS simulation was 0.002 m, and the mesh contained 278 964 elements with high smoothing, as shown in Figure 2. The simulated time was 43.1 min for all initial conditions. Figure 3 shows the simulation steps of the PV panel for the January, July and STC values.

2.2. Governing mathematical equations
Mathematical equations were used to predict and estimate the direct (beam) radiation transmittance for Miskolc city under a clear atmosphere on 7 January at 11:30 am solar time and on 13 July at 12:30 pm solar time. This requires some parameters describing the angular position of the sun. The solar declination δ is the angle between the line from the centre of the sun to the centre of the earth and the equatorial plane. Its value changes continuously because of the rotation of the earth around the sun and the tilt of the earth's axis of rotation, and it is found using Cooper's equation [20]

δ = 23.45° sin(360 (284 + n) / 365) ,   (2)

where n is the day of the year of the selected date (n = 7 for 7 January and n = 194 for 13 July).

Figure 1. The layers of the PV panel.

Table 1. The layer properties of the PV panel [17].
Material (layer)   ρ in kg/m³   k in W/(m·K)   cp in J/(kg·K)
Glass              3000         1.8            500
EVA                960          0.35           2090
PV                 2330         148            677
PVF                1200         0.2            1250
Aluminium frame    2707         204            996

Figure 2. The PV panel meshing.
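As a quick numerical check of these relations, the following sketch (not part of the paper; plain Python) evaluates Cooper's equation (2) and the convective coefficient of equation (1) for the inputs stated above:

```python
import math

def solar_declination(n: int) -> float:
    """Cooper's equation (2): solar declination in degrees for day of year n."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def convective_coefficient(v_m: float) -> float:
    """Equation (1): h = 5.7 + 3.8 Vm in W/(m^2 K), wind speed Vm in m/s."""
    return 5.7 + 3.8 * v_m

# Values used in the paper: n = 7 (7 January), n = 194 (13 July), Vm = 2.4 m/s.
print(f"declination 7 Jan:  {solar_declination(7):+.1f} deg")   # negative (winter)
print(f"declination 13 Jul: {solar_declination(194):+.1f} deg")  # positive (summer)
print(f"h = {convective_coefficient(2.4):.2f} W/(m^2 K)")  # 14.82, quoted as 14.8
```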
The zenith angle θz is the angle of incidence of beam solar radiation on a horizontal surface, measured between the vertical and the line to the sun, and is determined by the following equation

cos θz = cos φ cos δ cos ω + sin φ sin δ ,   (3)

where the latitude φ lies in −90° ≤ φ ≤ 90° (for Miskolc, Hungary, φ = 48.1°) and ω is the hour angle, negative in the morning and positive in the afternoon. Solar time depends on the sun's angular motion in the sky, which may not be synchronised with local time [21]. The extraterrestrial radiation incident Gon is the quantity of solar energy received per unit time, at the mean sun–earth distance, on a plane normal to the radiation; it can be calculated by equation (4)

Gon = Gsc (1 + 0.033 cos(360 n / 365)) ,   (4)

where Gsc is the solar constant. Calculating the daily and hourly solar radiation received on a horizontal surface is useful under standard conditions. The beam radiation transmitted through a clear atmosphere without scattering, considering the zenith angle and the altitude of the observer for four climate types, follows from equation (5) [21]

τb = a0 + a1 e^(−k / cos θz) ,   (5)
a0* = 0.4237 − 0.00821 (6 − A)² ,   (6)
a1* = 0.5055 + 0.00595 (6.5 − A)² ,   (7)
k* = 0.2711 + 0.01858 (2.5 − A)² ,   (8)

where a0 and a1 are constants, k represents the standard atmosphere at 23 km visibility, all found from equations (6), (7) and (8), and A is the altitude of the observer in kilometres. The starred constants are then multiplied by the correction factors of Table 2, a0 = r0 a0*, a1 = r1 a1* and k = rk k*, to obtain the beam radiation transmittance; this holds for any zenith angle and for altitudes up to 2.5 km. The clear-sky normal beam radiation is found by multiplying the transmittance τb by the extraterrestrial radiation Gon, and multiplying the result by cos θz gives the solar radiation on the horizontal plane of the panel, Gcb [22].
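The full chain of equations (3)-(8) can be sketched as below. The hour angle of 7.5° (12:30 pm solar time), the declination of ≈ 21.8° and the ≈ 0.13 km altitude of Miskolc are assumed illustrative inputs, not values quoted by the paper:

```python
import math

G_SC = 1367.0  # solar constant in W/m^2

def extraterrestrial_normal(n: int) -> float:
    """Equation (4): G_on on a plane normal to the radiation, day of year n."""
    return G_SC * (1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0)))

def clear_sky_beam(n: int, phi_deg: float, delta_deg: float, omega_deg: float,
                   altitude_km: float, r0: float, r1: float, rk: float) -> float:
    """Equations (3) and (5)-(8): clear-sky beam radiation G_cb on the
    horizontal plane, with the climate correction factors of Table 2."""
    phi, delta, omega = map(math.radians, (phi_deg, delta_deg, omega_deg))
    cos_tz = (math.cos(phi) * math.cos(delta) * math.cos(omega)
              + math.sin(phi) * math.sin(delta))            # equation (3)
    a = altitude_km
    a0 = r0 * (0.4237 - 0.00821 * (6.0 - a) ** 2)           # equation (6)
    a1 = r1 * (0.5055 + 0.00595 * (6.5 - a) ** 2)           # equation (7)
    k = rk * (0.2711 + 0.01858 * (2.5 - a) ** 2)            # equation (8)
    tau_b = a0 + a1 * math.exp(-k / cos_tz)                 # equation (5)
    return extraterrestrial_normal(n) * tau_b * cos_tz

# 13 July (n = 194), phi = 48.1 deg, midlatitude-summer factors from Table 2.
g_cb = clear_sky_beam(194, 48.1, 21.8, 7.5, 0.13, r0=0.97, r1=0.99, rk=1.02)
print(f"G_cb ~ {g_cb:.0f} W/m^2")  # comparable to the 735 W/m^2 used for July
```

With these assumed inputs the result lands close to the July value adopted in the paper, which suggests the reconstruction of the equations is consistent.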
3. Results and discussion
The PV panel was simulated under the different solar radiation values estimated by the mathematical equations and the corresponding temperatures. The solar radiation value was 176 W/m² for January, 735 W/m² for July, and 1000 W/m² under STC. The highest temperature was 4 °C in January and 35 °C in July, while the PV panel temperature under STC was 25 °C. The results indicate that the temperature on the PV panel surface was highest at the top of the panel, while the aluminium frame temperature was lower. The dark blue colour marks the minimum temperature on the panel, the bright red colour the maximum, and the other colours the temperature variations in between. In January, when the temperature is low and the beam solar radiation is 176 W/m², the temperature distribution on the PV panel lies between a maximum of 15.4 °C and a minimum of 11.9 °C, as shown in Figure 4. This solar radiation range influences the PV panel's power output without causing any damage or overheating, because the panel temperature stays low. Applying the July solar radiation of 735 W/m², with the temperature reaching 35 °C, the simulation shows the panel exposed to a maximum temperature of 84.6 °C and a minimum of 68.4 °C, as shown in Figure 5. Increasing the PV panel temperature by 10 °C above the STC value already decreases its efficiency. This temperature range causes further problems for the PV panel, such as overheating, which can burn some cells or reduce the voltage, power and output current of the panel, rendering it inoperative [23].

Figure 3. Simulation steps of the PV panel.

Table 2. Correction factors for climate types [20].
Climate type         r0     r1     rk
Tropical             0.95   0.85   1.02
Midlatitude winter   1.03   1.01   1.00
Subarctic summer     0.99   0.99   1.01
Midlatitude summer   0.97   0.99   1.02

Under the STC conditions of 1000 W/m² and 25 °C, the PV panel temperature rose to 92.5 °C, while the lowest temperature was 67.7 °C, as shown in Figure 6. This temperature range is therefore not suitable for PV panel operation: the rise in cell temperature causes the same problems as the July values, such as overheating of the PV cells, reduced conversion efficiency and reduced panel output. The temperature increase of the PV panel is directly proportional to the increase in solar radiation. Because the PV layers have different properties, their temperatures differ, as shown in the figures above; the panel surface temperature differs from that of the aluminium frame. The maximum and average temperatures on the PV panel converge somewhat as the solar radiation increases, as shown in Table 3 for the July and STC values. Cooling the PV panel to remove the excess heat is important for continued operation at high temperatures, even at the STC values.

4. Conclusion
The PV panel was simulated in ANSYS using the material properties of the PV layers, such as the thermal conductivity, density and specific heat of each layer. The climatic conditions of Miskolc city, namely solar radiation, temperature and wind speed, were used as simulation parameters. The solar radiation values were estimated from the mathematical equations for January and July, together with the STC standard value. The PV panel showed a thermal behaviour that varied with the solar radiation value, as evidenced by the panel's temperature distribution.
Thus, low solar radiation and temperature influence the PV panel's performance to an extent that does not damage the panel but reduces its production. On the other hand, increasing solar radiation raises the PV panel temperature, which reduces the output power and can damage the panel. A convergence of the maximum and average temperatures was observed with increasing solar radiation; the resulting rise in PV cell temperature requires cooling of the panel to remove the excess heat.

References
[1] V. V. Tyagi, N. A. A. Rahim, N. A. Rahim, J. Selvaraj, Progress in solar PV technology: research and achievement, Renewable and Sustainable Energy Reviews 20 (2013), pp. 443-461. doi: 10.1016/j.rser.2012.09.028
[2] M. C. Browne, B. Norton, S. J. McCormack, Heat retention of photovoltaic/thermal collector with PCM, Solar Energy 133 (2016), pp. 533-548. doi: 10.1016/j.solener.2016.04.024
[3] Loredana Cristaldi, Mohamed Khalil, Payam Soulatiantork, A root cause analysis and a risk evaluation of PV balance of systems failures, Acta IMEKO 6 (2017) 4, pp. 113-120. doi: 10.21014/acta_imeko.v6i4.425
[4] Loredana Cristaldi, Mohamed Khalil, Marco Faifer, Markov process reliability model for photovoltaic module failures, Acta IMEKO 6 (2017) 4, pp. 121-130. doi: 10.21014/acta_imeko.v6i4.428
[5] Aarti Kane, Vishal Verma, Bhim Singh, Optimization of thermoelectric cooling technology for active cooling of a photovoltaic panel, Renewable and Sustainable Energy Reviews 75 (2017), pp. 1295-1305. doi: 10.1016/j.rser.2016.11.114
[6] V. J. Fesharaki, M. Dehghani, J. J. Fesharaki, The effect of temperature on photovoltaic cell efficiency, Proceedings of the 1st International Conference on ETEC, Tehran, Iran, 2011.
[7] D. S. Borkar, S. V. Prayagi, J. Gotmare, Performance evaluation of photovoltaic solar panel using thermoelectric cooling, International Journal of Engineering Research 9 (2014), pp. 536-539.
[8] B. Fontenault, Active forced convection photovoltaic/thermal panel efficiency optimization analysis, Rensselaer Polytechnic Institute, Hartford, 2012.
[9] E. Skoplaki, J. A. Palyvos, On the temperature dependence of photovoltaic module electrical performance: a review of efficiency/power correlations, Solar Energy 83 (2009), pp. 614-624.
[10] Solar facts: photovoltaic efficiency inherent and system. Online [Accessed 8 December 2021] https://www.solar-facts.com/panels/panel-efficiency.php
[11] G. M. Tina, S. Scrofani, Electrical and thermal model for PV module temperature evaluation, IEEE, 2008. doi: 10.1109/melcon.2008.4618498
[12] Cătălin George Popovici, Sebastian Valeriu Hudișteanu, Theodor Dorin Mateescu, Nelu-Cristian Cherecheș, Efficiency improvement of photovoltaic panels by using air cooled heat sinks, Energy Procedia 85 (2016), pp. 425-432. doi: 10.1016/j.egypro.2015.12.223
[13] W. Z. Leow, Y. M. Irwan, I. Safwati, M. Irwanto, A. R. Amelia, Z. Syafiqah, M. I. Fahmi, N. Rosle, Simulation study on photovoltaic panel temperature under different solar radiation using a computational fluid dynamic method, Journal of Physics: Conference Series 1432 (2020).

Table 3. Temperatures on the PV panel at different solar radiation.
Solar radiation in W/m²   Max temperature in °C   Min temperature in °C   Average temperature in °C
176                        15.8                    11.4                    15.2
735                        84.6                    66.3                    82.1
1000                       92.5                    67.7                    89.1

Figure 4. At solar radiation 176 W/m² and temperature 4 °C.
Figure 5. At solar radiation 734 W/m² and temperature 35 °C.
Figure 6. At solar radiation 1000 W/m² and temperature 25 °C.
doi: 10.1088/1742-6596/1432/1/012052
[14] Mohammed Alktranee, Péter Bencs, Test the mathematical of the photovoltaic model under different conditions by use MATLAB-Simulink, Journal of Mechanical Engineering Research and Developments 43 (2020), pp. 514-521.
[15] Marek Jaszczur, Qusay Hassan, Janusz Teneta, Ewelina Majewska, Marcin Zych, An analysis of temperature distribution in solar photovoltaic module under various environmental conditions, MATEC Web of Conferences 240, 04004 (2018). doi: 10.1051/matecconf/201824004004
[16] N. Pandiarajan, Ranganath Muthu, Mathematical modeling of photovoltaic module with Simulink, International Conference on Electrical Energy Systems (ICEES 2011) (2011), pp. 3-5. doi: 10.1109/icees.2011.5725339
[17] S. Armstrong, W. G. Hurley, A thermal model for photovoltaic panels under varying atmospheric conditions, Applied Thermal Engineering 30 (2010), pp. 1488-1495. doi: 10.1016/j.applthermaleng.2010.03.012
[18] J. Vlachopoulos, D. Strutt, Heat transfer, in: SPE Plastics Technicians Toolbox 2 (2002), pp. 21-33.
[19] Weather and Climate, Miskolc, Hungary. Online [Accessed 5 March 2021] https://weather-and-climate.com/average-monthly-rainfall-temperature-sunshine,miskolc,hungary
[20] P. I. Cooper, The absorption of solar radiation in solar stills, Solar Energy 3 (1969), pp. 333-346. doi: 10.1016/0038-092x(69)90047-4
[21] John A. Duffie, William A. Beckman, Solar Engineering of Thermal Processes, fourth edition (2013). ISBN 978-0-470-87366-3
[22] H. C. Hottel, A simple model for estimating the transmittance of direct solar radiation through clear atmospheres, Solar Energy 2 (1976), pp. 129-134. doi: 10.1016/0038-092x(76)90045-1
[23] I. Bodnár, P. Iski, D. Koós, Á. Skribanek, Examination of electricity production loss of a solar panel in case of different types and concentration of dust, Advances and Trends in Engineering Sciences and Technologies III (2019), pp. 313-318.
doi: 10.1201/9780429021596-49

Solar energy harvesting for LoRaWAN-based pervasive environmental monitoring

Acta IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, pp. 111-118

Tommaso Addabbo1, Ada Fort1, Matteo Intravaia1, Marco Mugnaini1, Lorenzo Parri1, Alessandro Pozzebon1, Valerio Vignoli1
1 Department of Information Engineering and Mathematics, University of Siena, Via Roma 56, 53100 Siena, Italy

Section: Research paper
Keywords: solar; energy harvesting; LoRaWAN; environmental monitoring; particulate matter
Citation: Tommaso Addabbo, Ada Fort, Matteo Intravaia, Marco Mugnaini, Lorenzo Parri, Alessandro Pozzebon, Valerio Vignoli, Solar energy harvesting for LoRaWAN-based pervasive environmental monitoring, Acta IMEKO, vol. 10, no.
2, article 16, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-16

Section Editor: Giuseppe Caravello, Università degli Studi di Palermo, Italy
Received January 18, 2021; in final form May 5, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Alessandro Pozzebon, e-mail: alessandro.pozzebon@unisi.it

1. Introduction
Energy self-sufficiency is one of the crucial requirements for the realisation of efficient real-time distributed monitoring infrastructures in a wide range of application fields, from environmental [1], [2] and cultural heritage monitoring [3], [4] to the aerospace [5] and smart industry [6], [7] domains. Indeed, when deploying a large number of wirelessly connected sensing devices, energy self-sufficiency means they can be used as deploy-and-forget items, where any kind of manual intervention is reduced to a minimum. Besides reducing power consumption, energy self-sufficiency mainly requires the presence of a continuous or semi-continuous source of energy which, when the devices are expected to be employed in motion, cannot be the power grid. For this reason, the only way to ensure the continuous availability of enough energy for the sensing device to function is so-called energy harvesting, i.e., the presence of a renewable energy source on board. Among the various possible sources of energy, the most exploited is solar: indeed, several monitoring platforms have been provided with solar cells used to recharge on-board batteries or super-capacitors. Nevertheless, such a solution often faces limitations due to the large dimensions of the solar cells, the limited amount of achievable power, or the inadequate exposure of the sensing device.
Another factor influencing the performance of the energy harvesting system is the complexity of the sensing platform: the more power-hungry the components, the more difficult the design of the harvesting solution. The aim of this paper is to propose the characterisation of a small-scale solar-based energy harvesting system, designed to achieve an efficient trade-off between dimensions and power efficiency. To demonstrate the validity of the proposed solution, it has been embedded in a wireless sensing device intended for distributed real-time environmental monitoring: in particular, the sensing device is provided with sensors for the measurement of particulate matter (PM), with a GPS module for localisation and tracking purposes, and with Long Range Wide Area Network (LoRaWAN) connectivity. Such a device is expected to be employed within a smart city context.

Abstract: The aim of this paper is to discuss the characterisation of a solar energy harvesting system to be integrated in a wireless sensor node, to be deployed on means of transport to pervasively collect measurements of particulate matter (PM) concentration in urban areas. The sensor node is based on the use of low-cost PM sensors and exploits LoRaWAN connectivity to remotely transfer the collected data. The node also integrates GPS localisation features that allow the measured values to be associated with the geographical coordinates of the sampling site. In particular, the system is provided with an innovative, small-scale, solar-based powering solution that enables its energy self-sufficiency and hence its functioning without the need for a connection to the power grid. Tests concerning the energy production of the solar cell were performed in order to optimise the functioning of the sensor node: satisfactory results were achieved in terms of number of samplings per hour. Finally, field tests were carried out with the integrated environmental monitoring device, proving its effectiveness.

While several city-scale air quality wireless monitoring infrastructures can be found in the literature [8], [9], [10], this paper focuses on the realisation of a different typology of data acquisition architecture. Indeed, in the proposed solution, the sensor nodes are expected to be provided with localisation features [11] and then to be deployed on means of public transport. This approach is especially relevant since it allows data to be acquired in a more pervasive way, bringing the measurement instrumentation to almost every spot of a city. At the same time, the scope of the paper is to propose a device to be employed in a "plug-and-play" fashion: the use of the photovoltaic source for powering the system goes specifically in this direction. Indeed, while several sources of energy may be available on means of public transport, connecting a new device may require structural modifications to the vehicle itself to wire the sensor node to the power source. Such modifications may be even more cumbersome if the sensor node is to be deployed outside the vehicle, as in the case of acquisition of environmental parameters. Conversely, the design of a totally autonomous system allows its deployment without any intervention on the vehicle: in its final configuration, the sensor node may be attached, for example, with a magnet to the vehicle chassis. The choice of Long Range (LoRa) as the transmission technology comes from its ability to provide possibly the best compromise between performance and cost within the smart city scenario [12], [13]. Indeed, the long transmission range allows a large area to be covered with a relatively small number of gateways.
At the same time, thanks to the LoRaWAN protocol, a large number of end devices can be managed simultaneously owing to multi-channel and gateway redundancy. Moreover, costs are kept very low since no fee is required for the transmitting devices: this aspect may be crucial when the number of deployed devices is expected to grow. The same LoRaWAN network may also be exploited for other activities, further reducing costs. A comparison with competing technologies better underlines the benefits of adopting LoRaWAN. Starting from local area technologies like ZigBee, Bluetooth or WiFi, their short transmission range obviously prevents their use for monitoring at a city scale, since too many gateways would be required. Moving to wide area technologies, cellular ones are of course more reliable than LoRa; however, they require a subscription for each device, and this cost may be unsustainable with a growing number of devices. Conversely, LoRaWAN scales easily since no cost is required for connection, and the price of LoRa modules is in the order of a few euros, notably lower than that of the competing cellular technology, NB-IoT. The same limitation applies to the other well-known sub-GHz technology, Sigfox, which likewise requires the payment of a subscription for each device. At the same time, the limitations that may come from the use of LoRaWAN are not crucial for the proposed application scenario. Indeed, the 1 % duty-cycle limitation does not affect the acquisition of PM values, which can be performed every 10-15 minutes, while the limited reliability of the connection may lead to the loss of some packets, which is likewise not critical for the purpose of the proposed system.
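The claim that the 1 % sub-GHz duty-cycle limit is not binding here can be sanity-checked with a back-of-the-envelope sketch. The per-uplink airtime below is an assumed worst-case figure, not a value given in the paper:

```python
# Illustrative check (airtime value assumed, not from the paper): does one
# PM report every 10 minutes respect the 1 % sub-GHz duty-cycle limit?
AIRTIME_S = 1.5            # assumed worst-case LoRa airtime per uplink, s
PERIOD_S = 10 * 60         # one transmission every 10 minutes, s
DUTY_LIMIT = 0.01          # 1 % regulatory duty-cycle limit

duty_cycle = AIRTIME_S / PERIOD_S
print(f"radio duty cycle = {100.0 * duty_cycle:.3f} %")
print("within limit" if duty_cycle < DUTY_LIMIT else "exceeds limit")
```

Even with a pessimistic airtime, the radio duty cycle stays well below the 1 % cap, consistent with the argument above.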
The rest of the paper is structured as follows: Section 2 provides some details on the monitoring of PM concentrations, while Section 3 focuses on the state of the art of solar-based energy harvesting solutions. Section 4 describes the overall sensor node architecture, while Section 5 is devoted to the design of the solar harvesting system. Section 6 provides some field test results, and Section 7 presents some concluding remarks.

2. Particulate matter monitoring
The term "particulate matter" (PM) encompasses a wide range of solid, organic and inorganic particles and liquid droplets commonly found in air. In general, PM is composed of a wide range of different elements that change according to the specific environmental features [14], [15], but include sulphate, nitrates, ammonia, sodium chloride, black carbon, mineral dust, and water. PM is classified according to the dimensions of the single particles: PM10 denotes particles with a diameter lower than 10 micron (dPM10 < 10 µm) and PM2.5 particles with a diameter lower than 2.5 micron (dPM2.5 < 2.5 µm). Both typologies of PM can be easily inhaled by human beings, and chronic exposure to this kind of pollutant can lead to cardiovascular and respiratory diseases. In particular [16], PM10 can penetrate into the lungs, while PM2.5 can cross the lung barrier and enter the blood system, with even more harmful effects. For this reason, the World Health Organisation has defined two thresholds for each type of particulate, shown in Table 1 [16], that should not be exceeded in order to safeguard citizens' health. PM levels are usually measured by public bodies, which collect the data by means of fixed monitoring stations deployed in a limited number of spots: in general, only one or a few monitoring stations are present in medium to large-sized cities.
Moreover, the data collected by these stations refer only to the area of the city where they are deployed, and cannot provide pervasive feedback on the PM levels in other parts of the city. This is mainly due to the high cost of these monitoring stations, which prevents their deployment in large numbers across a large territory. Nevertheless, some low-cost PM sensors are currently available on the market: while their accuracy is not comparable with that of the fixed monitoring stations, they can still provide interesting feedback on the PM level, in particular concerning the exceeding of the daily and yearly thresholds. Moreover, these devices are characterised by small dimensions and can therefore be integrated on portable data acquisition platforms provided with adequate connectivity to transfer the acquired data in real time to a remote data management centre. By deploying a large quantity of such devices, a pervasive monitoring infrastructure can be set up across a whole urban centre, perfectly fulfilling the smart city paradigm [17], [18].

3. Solar energy harvesting
In the last decades, due to the steadily growing number of power-requiring devices and the consequent technology sustainability issues, light energy harvesting has attracted tremendous interest and great research effort in the scientific community, resulting in a plethora of solar cell typologies [19], [20], [21], [22], each with different optical and mechanical properties, performances and cost.

Table 1. Particulate matter thresholds.
          24-hour mean   Annual mean
PM2.5     25 µg/m³       10 µg/m³
PM10      50 µg/m³       20 µg/m³

Some studies [23], [24], [25], [26], [27], [28], [29] aim to enhance the performances under particular light spectrum conditions, mainly low-intensity indoor lighting, choosing materials with suitable absorption spectra.
Other studies focus on greatly improving the efficiency by realising multi-junction structures capable of absorbing energy in a wide frequency range [30], [31]. Crystalline (monocrystalline and polycrystalline) silicon [32] is surely the dominant technology for solar cells, representing a good compromise between performance and cost [20]. Excellent efficiencies over 25 % are achieved by monocrystalline silicon technology [19], thanks to efficiency-improving strategies such as carrier recombination reduction through contact passivation [33], [34], [35], [36]. Even though these latest efficiency-enhancing techniques are not yet commonly found on the market, monocrystalline silicon solar cells remain the preferable solution for powering a small outdoor electronic utility, like the application presented in this paper, with minimum encumbrance and at a very reasonable cost.

4. Sensor node structure
The purpose of the sensor node is to periodically sample the amount of PM in the air and transmit this information over a LoRa radio channel. In addition, the GPS position of the node is acquired every time a sample is collected. The sensor is powered by a battery that is recharged by means of a crystalline silicon solar cell. The structure of the system is shown in Figure 1. The main blocks composing the node are the communication and control unit (CCU), the particle sensor, the GPS module, a battery, and a step-up DC-DC converter to manage the energy coming from the solar cells. The CCU has been developed ad hoc (see Figure 2) and hosts a low-power STM32L073 microcontroller (MCU) by STMicroelectronics, a LoRa transceiver (RFM95 by HopeRF), and power management electronics to supply the internal devices and charge a Li-ion battery.
The power from the solar cells is boosted and stabilised by a step-up DC-DC converter (LTC3105 on an evaluation board) that hosts a start-up controller (from 250 mV) and a maximum power point controller (MPPC), enabling operation directly from low-voltage power sources such as photovoltaic cells. The MPPC set point can be selected depending on the solar cells used. If energy from the solar cell is available, the battery charger (STC4054 by STMicroelectronics) recharges the battery. The MCU and the radio module are supplied by a 2.5 V LDO regulator; the battery voltage level is monitored through an ADC channel on the MCU and a voltage divider. The particle sensor (HPMA115S0 by Honeywell), shown in Figure 3, requires 5 V to operate: this power source is generated by another LTC3105 module directly from the battery and can be powered off by a specific shutdown line from the MCU. The GPS module (MTK3339 by Adafruit) requires a power voltage of 3.3 V, which is available as an output of the particle sensor. Since the sensor node is expected to operate continuously without connection to the power grid, a power strategy based on strict duty-cycling was adopted. In particular, the system performs data sampling and transmission and is then put in sleep mode according to an adaptive duty-cycling policy, described in detail in Section 5.

5. Energy harvesting and power management
The sensor node is powered by a battery charged by a small solar harvester. The aim of this section is to determine the maximum feasible duty cycle of the sensor node operations that does not completely drain the battery, given the energy collected by the harvester. This problem can be formulated as the following condition:

WH ≥ Wδ(δ) .   (1)

Figure 1. Sensor node internal structure.
Figure 2. Communication and control unit.
Figure 3. Honeywell HPMA115S0 particulate matter sensor.
acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 114
where W_H is the energy harvested over a day, while W_δ(δ) is the sensor node energy consumption over a day, which depends on the duty cycle δ. Respecting condition (1) guarantees energy self-sufficiency to the sensor device, with no need for battery replacement. Therefore, the first step is to assess the sensor node energy requirements, i.e. the quantity W_δ(δ). Let us indicate with t_0 the time (in seconds) necessary for a complete operating sequence, made up of MCU acquisition, data transmission via LoRa and GPS localisation. The maximum number of operating sessions in an hour is given by:

N = ⌊3600 / t_0⌋ . (2)

In our case t_0 = 10 s and thus N = 360. Let n be the desired number of operating sessions in an hour. Then the duty cycle is simply:

δ = n / N . (3)

Now, let W_0 be the energy required for a single operating sequence. Considering the sensor and analog front-end powering, the microcontroller consumption in run mode, the LoRa module consumption in stand-by and transmission modes, and the GPS consumption, we obtained W_0 ≈ 0.29 mWh. Using equation (3), the daily energy consumption for a fixed duty cycle is given by:

W_δ(δ) = 24 · W_0 · n = 24 · W_0 · δ · N . (4)

Setting W_δ(δ) equal to W_H in equation (4) yields the maximum feasible duty cycle:

δ_max = W_H / (24 · W_0 · N) . (5)

The harvested energy W_H varies dramatically during the year and is obviously strongly dependent on the weather conditions. In our previous work [37], we proposed a theoretical evaluation of the quantity W_H over the year, approximately calculating the energy produced by a reference monocrystalline silicon solar cell [34]. In this paper, we present some recent experimental results on the energy harvested by a commercial monocrystalline silicon solar cell for outdoor use, produced by Seeed Studio (Figure 4).
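The energy budget of equations (2)-(5) can be checked numerically; the 700 mWh figure used below is the average winter harvest reported later in the paper.

```python
W0 = 0.29        # energy per operating sequence, mWh (measured in the paper)
t0 = 10          # duration of one operating sequence, s
N = 3600 // t0   # max operating sessions per hour, eq. (2) -> 360

def daily_consumption(delta):
    """W_delta(delta) = 24 * W0 * delta * N, eq. (4), in mWh."""
    return 24 * W0 * delta * N

def max_duty_cycle(W_H):
    """delta_max = W_H / (24 * W0 * N), eq. (5); W_H in mWh."""
    return W_H / (24 * W0 * N)

delta_max = max_duty_cycle(700)   # average winter harvest ~700 mWh
n_per_hour = delta_max * N        # inverting eq. (3): sessions per hour
```

With W_H = 700 mWh this gives δ_max ≈ 0.28 and roughly 100 sessions per hour, matching the ≈30% and ≈100 acquisitions quoted in Section 5.2.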
The producers state that this solar module is capable of operating at a voltage of 5.5 V and a current of 100 mA, resulting in a maximum power point (MPP) of 0.55 W.¹ The cell surface is 70 × 55 mm². The measurements were performed in Siena, Italy, during sunny or partially cloudy days in the first half of January 2021 (a complete characterisation of the solar cell behaviour throughout the whole year would require a long measuring campaign, and such data are not available to date). The next section describes the measurement system employed for the solar cell characterisation²; Section 5.2 shows the results of the measurements.

¹ The producers do not explicitly mention the working conditions under which their solar modules were characterised. Usually, solar cell performances are evaluated under standard test conditions (i.e. 25 °C temperature, 1000 W/m² solar irradiance, 1.5 air mass).
² The term characterisation may be open to misunderstanding. In this context, we are only interested in evaluating the maximum power deliverable by the solar cell; we do not want to determine its parameters (open-circuit voltage, short-circuit current, fill factor) from the current-voltage curve. Therefore, in the following, the term "characterisation" refers to the evaluation of the cell's maximum deliverable power.

5.1. Solar cell characterisation method

The circuitry employed for characterising the performance of the selected solar module is based on a solution proposed by Analog Devices. Figure 5 shows the circuit schematic. Referring to Figure 5, the solar module to be characterised is connected to the ports labelled PV+ and PV− on the left. The lower branch is the voltage sensing part, made up of a simple voltage divider followed by an operational amplifier (OA) in non-inverting configuration. The overall voltage gain G_V is given by:

G_V = R2 / (R1 + R2) · (1 + R4 / R3) . (6)

The R4 resistor is actually short-circuited, because the solar module already outputs a voltage in the order of some volts. The upper branch is the current sensing part: the current is converted into a voltage through the 1 Ω resistor and then amplified by another non-inverting amplifier. Thus, the overall current gain G_I is:

G_I = 1 + R11 / R10 . (7)

This circuit is capable of scanning the current-voltage (IV) curve of the connected solar cell when MOSFETs Q1 and Q2 are suitably driven. In detail, before starting the acquisition (idle state), Q1 and Q2 are both on, so the solar cell is in short-circuit conditions and the OA outputs are both zero. The measurement starts when Q1 and Q2 are switched off: when this happens, the current instantaneously flows through the capacitor C and the current-sensing 1 Ω resistor. The voltage across the solar cell is still zero (short-circuit conditions), but now the short-circuit current is visible, amplified, on the current-sensing OA output. The capacitor starts to charge towards the cell open-circuit voltage, which is actually never reached because of the presence of the voltage divider. In this way, the solar cell IV characteristic is scanned and, in particular, the MPP is certainly touched at some point. Then, after the IV transient, Q2, which has a dissipative power resistor on its drain to limit the current, is switched on again. Finally, Q1 is also switched on to return to the initial idle state. The two OA outputs are sampled by an STM32L432KC microcontroller.

Figure 4. Selected monocrystalline solar module.
Figure 5. Solar cell characterisation circuit. For the varying elements, the values used for the experiments presented in this work are reported in brackets.
The microcontroller acquires 500 samples (250 for the voltage and 250 for the current, on two different ADC channels) at 125 kSps per channel. The covered time interval is therefore 2 ms, which is sufficient for sampling the signals of interest with adequate time resolution. The ADC has 12 bits and a 3.3 V full scale. The STM32L432KC also powers the two OAs through an on-board 5 V output and drives Q1 and Q2 through 3.3 V tolerant general-purpose input/output (GPIO) pins. A Raspberry Pi powers the STM32L432KC and collects the data from it in JavaScript Object Notation (JSON) format. The data are available remotely on a web database. This solution for solar cell characterisation presents some limitations (as already mentioned, the open-circuit voltage is unreachable), but it is more than sufficient for our application, since we are not interested in drawing the entire IV curve, but only in evaluating the maximum power deliverable by the cell. Furthermore, this solution exhibits two fundamental advantages: first, it is extremely low-cost; second, it is portable, as it exploits the response of the solar cell itself to a load impedance variation, so there is no need for a precision voltmeter or a signal generator (or for a more complex and power-consuming current generator circuit), usually required for setting the cell current and measuring the voltage in more common solar cell characterisation methods.

5.2. Solar cell characterisation results

Figure 6 shows the current and voltage variations on the cell during an acquisition cycle (i.e. the IV transient provoked by switching off Q1 and Q2, as explained in the previous section). As can be seen, the cell passes from being short-circuited to an open-circuit condition. The resulting power curve (Figure 7) assumes the bell shape typical of a solar cell.
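On the processing side, the conversion from raw 12-bit ADC codes to cell voltage and current through the gains of equations (6) and (7), and the search for the MPP along the scanned transient, can be sketched as follows. The resistor values used in the example are illustrative, since the paper gives only the gain formulas.

```python
VREF = 3.3            # ADC full scale, volts
LSB = VREF / 4095     # 12-bit converter step

def gain_v(R1, R2, R3, R4):
    """Overall voltage gain, eq. (6). R4 = 0 models the short-circuited R4."""
    return R2 / (R1 + R2) * (1 + R4 / R3)

def gain_i(R10, R11):
    """Overall current gain, eq. (7); the sense resistor is 1 ohm."""
    return 1 + R11 / R10

def find_mpp(v_codes, i_codes, Gv, Gi):
    """Return (V, I, P) at the maximum power point of the scanned IV transient."""
    V, I = max(
        ((cv * LSB / Gv, ci * LSB / Gi) for cv, ci in zip(v_codes, i_codes)),
        key=lambda vi: vi[0] * vi[1],
    )
    return V, I, V * I
```

Since the scan passes through the whole IV transient, taking the sample pair with the largest V·I product is enough to estimate the maximum deliverable power.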
The specific curves in Figure 6 and Figure 7 were obtained in the laboratory, testing the circuit under a white LED (3500 K colour temperature), and are only meant to demonstrate the circuit functionality qualitatively. The measurements were performed in Siena, Italy, throughout sunny or partially cloudy days in January 2021. The characterisation system was placed in a realistic position, exposed to the sun during most of the day but with some trees and other obstacles likely to be present in the actual sensor usage. An acquisition was performed every minute. As an example, Figure 8 shows the cell maximum achievable power measured on 27 January 2021. The dips are due to the presence of obstacles (e.g. around midday some trees covered the solar module). The total energy collected during the examined days (that is, W_H, see the introduction of Section 5) oscillates between a minimum of 400 mWh on partially cloudy days and a maximum of 1 Wh. Considering on average W_H ≈ 700 mWh, the corresponding maximum feasible duty cycle, calculated through equation (5), is:

δ_max ≈ 30% . (8)

This result, put into equation (3), corresponds to about 100 sensor node acquisitions per hour, which is more than adequate, considering that December and January are the worst months of the year for solar energy harvesting. However, this performance assessment clearly does not hold for heavily cloudy or rainy days, on which the energy production falls sharply (the energy produced on 9 January 2021 and 10 January 2021, which were completely sunless, was 20 mWh in total). For this reason, we are driving the future development of the powering of the sensor node towards a multi-source energy harvesting approach,

Figure 6. Solar cell current and voltage variations during an IV acquisition cycle: laboratory test under white LED.
Figure 7. Power curve corresponding to the current and voltage reported in Figure 6.
adding, along with the solar harvester, a piezoelectric harvester to collect energy also from mechanical vibrations and even from rainfall [38].

6. Tests and measurements

The performance of the system was tested in a real environment: in particular, the sensor node was placed for one week on the facade of the Department of Information Engineering and Mathematics of the University of Siena, Italy (see Figure 9), acquiring PM10 and PM2.5 values every 15 minutes. During each sampling, 10 values were acquired and their mean value was then transmitted by means of the LoRaWAN protocol to a LoRaWAN gateway positioned inside the building. The PM sensor was characterised only in static conditions (no significant vibration present), since in the real scenario the idea is to acquire the measurement only when the vehicle is still. Indeed, in its final configuration the node is expected to integrate an accelerometer that will be exploited to detect whether the vehicle is moving or not. The LoRaWAN transmission is also performed in this phase, since the whole data acquisition and transmission task requires less than 2 s, thus avoiding possible additional issues due to setting up a radio channel in non-stationary conditions. In order to verify the operation of the system, the sampled values were compared with those available on the website of the Regional Environmental Protection Agency of the Tuscany Region (ARPAT), which owns a set of fixed monitoring stations deployed across the whole territory of Tuscany. In particular, daily average values are available on the ARPAT website: for this reason, the daily mean of the acquired values was calculated. The values were compared with those acquired by the fixed station positioned in Viale Bracci, Siena, Italy, which is the closest one to the university building, at a distance of 2.5 km.
A deployment close to this fixed station was not possible for security reasons. Figure 10 shows the daily mean values of PM10 concentrations measured at the fixed station by ARPAT and by the system described in this work, positioned on the university building. The two values are notably different, but this is due to the deployment site: while the ARPAT fixed station is positioned close to the very busy road that leads to the Siena hospital, the university building is located in a limited-traffic area in a peripheral part of the historic centre of Siena. Nevertheless, the effectiveness of the system can be appreciated, since the trends of the two series are quite similar throughout the week: in particular, the values measured by the system are consistently about half of those provided by ARPAT. An important remark must be made: a low-cost sensor was used in the realisation of the system, and its accuracy level cannot be compared with that of the professional, and therefore very expensive, measurement platforms used by ARPAT. Nevertheless, looking at the values measured by the system, it is evident that the proposed solution can still be useful for collecting data about PM in a more pervasive way, even if with a lower level of accuracy. In this sense, the proposed solution is not expected to replace the existing fixed measurement stations, but rather to act as a system enriching the knowledge about the different levels of PM that may be recorded under different environmental conditions.

Figure 8. Solar cell maximum power production on 27 January 2021 in Siena, Italy.
Figure 9. Sensor node testing setup.
Figure 10. Comparison between daily mean PM10 concentrations provided by ARPAT and measured by the system.
Figure 11. Geographical data visualisation.
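The reduction of the node's 15-minute samples to the daily means compared with the ARPAT data can be sketched as follows; the sample values in the test are made up for illustration.

```python
from collections import defaultdict
from statistics import mean

def daily_means(samples):
    """samples: iterable of (date_string, pm10_value) pairs.
    Returns {date_string: mean PM10 over that day}, matching the daily
    averages published by ARPAT."""
    by_day = defaultdict(list)
    for day, value in samples:
        by_day[day].append(value)
    return {day: mean(values) for day, values in by_day.items()}
```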
Following the system characterisation performed in a controlled environment (i.e., the department facade), a geographical data acquisition campaign was carried out, measuring PM2.5 and PM10 concentrations along the roads of a wide area within the historic centre of Siena. For this purpose, a Dragino LoRaWAN gateway was placed on the front facade of the department building, in the same spot used for the deployment of the sensor node in the previous experimentation. PM measurements were associated with the latitude and longitude values acquired by the GPS module: these values were then used to set up a data visualisation tool by means of Google Maps services. The measured values show an increase in the narrower alleys, where vehicular traffic was heavier. A screenshot of the data visualisation tool, with the measurements related to one of the positions, is shown in Figure 11. Blue markers represent the spots where measurements were acquired, while the red star shows the gateway position.

7. Conclusions

The aim of this paper was to propose the architecture of a self-powered LoRaWAN sensor node for the pervasive measurement of PM concentrations in urban areas. According to the presented results, the system is able to operate autonomously, exploiting an energy harvesting system based on a small low-cost monocrystalline solar cell. In particular, the experimentation carried out demonstrated that the energy provided by the solar harvester is sufficient to guarantee around one hundred samplings per hour during winter, when solar energy production is at its minimum. Moreover, the addition of a mechanical vibration energy harvester is under evaluation as a future development, to enable a multi-source energy harvesting approach that would improve the energy production on poorly lit days.
At the same time, the system can sample the PM concentrations by means of a low-cost sensor, transmitting them to a LoRaWAN gateway together with the geographic coordinates of the sampling location. By positioning the measurement system on public transport vehicles and combining these two data, PM levels may be measured across a large area and the level differences between different areas of an urban centre may be identified. Moreover, the energy self-sufficiency feature may allow an easy deployment of the device on the vehicles, without the need to set up wires connecting the node to external power sources such as, for example, the vehicle batteries. Together with the energy harvesting system, a prototype of the measurement system was also tested: preliminary tests were carried out in a controlled environment, and the acquired values were compared with certified ones provided by a public body, proving the consistency of the measured parameters. Following this preliminary step, the whole platform was tested for distributed data acquisition along city roads in Siena, thus in a real application scenario. The acquired results showed the effectiveness of the proposed solution.

References

[1] f. leccese, m. cagnetti, a. calogero, d. trinca, s. di pasquale, s. giarnetti, i. cozzella, a new acquisition and imaging system for environmental measurements: an experience on the italian cultural heritage, sensors, vol. 14, 2014, no. 5, pp. 9290-9312. doi: 10.3390/s140509290 [2] a. pozzebon, i. cappelli, a. mecocci, d. bertoni, g. sarti, f. alquini, a wireless sensor network for the real-time remote measurement of aeolian sand transport on sandy beaches and dunes, sensors, vol. 18, 2018, no. 3, p. 820. doi: 10.3390/s18030820 [3] f. leccese, m. cagnetti, s. tuti, p. gabriele, e. de francesco, r. ðurovic-pejcev, a.
pecora, modified leach for necropolis scenario, imeko international conference on metrology for archaeology and cultural heritage, lecce, italy, 23-25 october 2017, pp. 442-447. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-archaeo2017/imeko-tc4-archaeo-2017-088.pdf [4] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko, vol. 8 (2019), no. 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640 [5] f. leccese, m. cagnetti, s. sciuto, a. scorza, k. torokhtii, e. silva, analysis, design, realization and test of a sensor network for aerospace applications, i2mtc 2017 2017 ieee international instrumentation and measurement technology conference, turin, italy, 22-25 may 2017, pp. 1-6. doi: 10.1109/i2mtc.2017.7969946 [6] t. addabbo, a. fort, m. mugnaini, l. parri, s. parrino, a. pozzebon, v. vignoli, a low power iot architecture for the monitoring of chemical emissions, acta imeko, vol. 8 (2019), no. 2, pp. 53-61. doi: 10.21014/acta_imeko.v8i2.642 [7] l. angrisani, u. cesaro, m. d'arco, o. tamburis, measurement applications in industry 4.0: the case of an iot–oriented platform for remote programming of automatic test equipment, acta imeko, vol. 8 (2019), no. 2, pp. 62-69. doi: 10.21014/acta_imeko.v8i2.643 [8] k. zheng, s. zhao, z. yang, x. xiong, w. w. xiang, design and implementation of lpwa-based air, ieee access, 2016, no. 4, pp. 3238-3245. doi: 10.1109/access.2016.2582153 [9] g. b. fioccola, r. sommese, i. tufano, r. c. a. g. ventre, polluino: an efficient cloud-based management of iot devices for air quality monitoring, in ieee 2nd international forum on research and technologies for society and industry leveraging a better tomorrow (rtsi), bologna, italy, 7-9 september 2016, pp. 1-6. doi: 10.1109/rtsi.2016.7740617 [10] a. candia, s. n. represa, d. giuliani, m. ç. luengo, a. a. porta, l. a. 
marrone, solutions for smartcities: proposal of a monitoring system of air quality based on a lorawan network with low-cost sensors, in congreso argentino de ciencias de la informatica y desarrollos de investigacion (cacidi), buenos aires, argentina, 28-30 november 2018, pp. 1-6. doi: 10.1109/cacidi.2018.8584183 [11] t. addabbo, a. fort, m. mugnaini, l. parri, a. pozzebon, v. vignoli, smart sensing in mobility: a lorawan architecture for pervasive environmental monitoring, in ieee 5th international forum on research and technology for society and industry (rtsi), firenze, 9-12 september 2019, pp. 421-426. doi: 10.1109/rtsi.2019.8895563 [12] d. magrin, m. centenaro, l. vangelista, performance evaluation of lora networks in a smart city scenario, 2017 ieee international conference on communications (icc) paris, france, 21-25 may 2017, pp. 1-7. doi: 10.1109/icc.2017.7996384 [13] p. j. basford, f. m. bulot, m. apetroaie-cristea, s. j. cox, s. j. ossont, lorawan for smart city iot deployments: a long term evaluation, sensors, vol. 20, 2020, no. 3. doi: 10.3390/s20030648 [14] c. perrino, f. marcovecchio, l. tofful, s. canepari, particulate matter concentration and chemical composition in the metro system of rome, italy, environmental science and pollution research, 2015, pp. 9204-9214. doi: 10.1007/s11356-014-4019-9 [15] b. zeb, k. alam, a. sorooshian, t. blaschke, i. ahmad, i. 
shahid, on the morphology and composition of particulate matter in an urban environments, aerosol and air quality research, 2018, p. 1431. doi: 10.4209/aaqr.2017.09.0340 [16] world health organization, ambient (outdoor) air pollution key facts. online [accessed 18 june 2021] https://www.who.int/news-room/fact-sheets/detail/ambient(outdoor)-air-quality-and-health [17] t. nam, t. a. pardo, conceptualizing smart city with dimensions of technology, people, and institutions, in proc. of the 12th annual international digital government research conference: digital government innovation in challenging times, college park maryland, usa 12-15 june 2011, pp. 282–291. doi: 10.1145/2037556.2037602 [18] m. cerchecci, f. luti, a. mecocci, s. parrino, g. peruzzi, a. pozzebon, a low power iot sensor node architecture for waste management within smart cities context, sensors, 2018, p. 1282. doi: 10.3390/s18041282 [19] m. green, e. dunlop, j. hohl‐ebinger, m. yoshita, n. kopidakis, x. hao, solar cell efficiency tables (version 57), prog photovolt res appl., vol. 29, 2021, pp. 3-15. doi: 10.1002/pip.3371 [20] m. h.
shubbak, advances in solar photovoltaics: technology review and patent trends, renewable and sustainable energy reviews, vol. 115, 2019, p. 109383. doi: 10.1016/j.rser.2019.109383 [21] s. biswas, h. kim, solar cells for indoor applications: progress and development, polymers, vol. 12, 2020, p. 1338. doi: 10.3390/polym12061338 [22] s. chowdhury, m. kumar, s. dutta, j. park, j. kim, s. kim, m. ju, y. kim, y. cho, e. cho, j. yi, high-efficiency crystalline silicon solar cells: a review, new & renewable energy, vol. 15, 2019, pp. 36-45. doi: 10.1039/c5ee03380b [23] p. vincent, s.-c. shin, j. s. goo, y.-j. you, b. cho, s. lee, d.-w. lee, s. r. kwon, k.-b. chung, j.-j. lee, j.-h. bae, j. w. shim, h. kim, indoor-type photovoltaics with organic solar cells through optimal design, dyes and pigments, vol. 159, 2018, pp. 306-313. doi: 10.1016/j.dyepig.2018.06.025 [24] s. kim, m. a. saeed, s. h. kim, j. w. shim, enhanced hole selecting behavior of wo3 interlayers for efficient indoor organic photovoltaics with high fill-factor, applied surface science, vol. 527, 2020, p. 146840. doi: 10.1016/j.apsusc.2020.146840 [25] s. biswas, y.-j. you, y. lee, j. w. shim, h. kim, efficiency improvement of indoor organic solar cell by optimization of the doping level of the hole extraction layer, dyes and pigments, vol. 183, 2020, p. 108719. doi: 10.1016/j.dyepig.2020.108719 [26] a. s. teran, e. moon, w. lim, g. kim, i. lee, d. blaauw, j. d. phillips, energy harvesting for gaas photovoltaics under lowflux indoor lighting conditions, ieee transactions on electron devices, vol. 63, no. 7, 2016, pp. 2820-2825. doi: 10.1109/ted.2016.2569079 [27] i. mathews, p. j. king, f. s. a. r. frizzell, performance of iii–v solar cells as indoor light energy harvesters, ieee journal of photovoltaics, vol. 6, no. 1, 2016, pp. 230-235. doi: 10.1109/jphotov.2015.2487825 [28] c.-y. chen et al. 
(40 authors), performance characterization of dye-sensitized photovoltaics under indoor lighting, the journal of physical chemistry letters, vol. 8, 2017, pp. 1824-1830. doi: 10.1021/acs.jpclett.7b00515 [29] ming-chi tsai, chin-li wang, chiung-wen chang, cheng-wei hsu, yu-hsin hsiao, chia-lin liu, chen-chi wang, shr-yau lina, ching-yao lin, a large, ultra-black, efficient and costeffective dye-sensitized solar module approaching 12% overall efficiency under 1000 lux indoor light, journal of materials chemistry a, vol. 6, 2018, pp. 1995-2003. doi: 10.1039/c7ta09322e [30] p. colter, b. hagar, s. bedair, tunnel junctions for iii-v multijunction solar cells review, crystals, 2018, p. 445. doi: 10.3390/cryst8120445 [31] j. geisz, m. steiner, n. jain, k. schulte, r. france, w. mcmahon, e. perl, d. friedman, building a six-junction inverted metamorphic concentrator solar cell, ieee journal of photovoltaics, 2017, pp. 1-7. doi: 10.1109/jphotov.2017.2778567 [32] s. chowdhury, m. kumar, s. dutta, j. park, j. kim, s. kim, m. ju, y. kim, y. cho, e.-c. cho, j. yi, high-efficiency crystalline silicon solar cells: a review, new & renewable energy, vol. 15, 2019, pp. 36-45. doi: 10.7849/ksnre.2019.3.15.3.036 [33] a. morisset, r. cabal, v. giglia, a. boulineau, e. d. vito, a. chabli, s. dubois, j. alvarez, j.-p. kleider, evolution of the surface passivation mechanism during the fabrication of ex-situ doped poly-si(b)/siox passivating contacts for high-efficiency csi solar cells, solar energy materials and solar cells, vol. 221, 2021, p. 110899. doi: 10.1016/j.solmat.2020.110899 [34] a. richter, j. benick, f. feldmann, a. fell, m. hermle, s. glunz, n-type si solar cells with passivating electron contact: identifying sources for efficiency limitations by wafer thickness and resistivity variation, solar energy materials and solar cells, no. 173, 2017, pp. 96-105. doi: 10.1016/j.solmat.2017.05.042 [35] d. attafi, a. meftah, r. boumaraf, m. labed, n. 
sengouga, enhancement of silicon solar cell performance by introducing selected defects in the sio2 passivation layer, optik, vol. 229, 2021, p. 166206. doi: 10.1016/j.ijleo.2020.166206 [36] f. meyer, a. savoy, j. j. d. leon, m. persoz, x. niquille, c. allebé, s. nicolay, f.-j. haug, a. ingenito, c. ballif, optimization of front sinx/ito stacks for high-efficiency two-side contacted c-si solar cells with co-annealed front and rear passivating contacts, solar energy materials and solar cells, vol. 219, 2021, p. 110815. doi: 10.1016/j.solmat.2020.110815 [37] t. addabbo, a. fort, m. intravaia, m. mugnaini, l. parri, a. pozzebon, v. vignoli, pervasive environmental monitoring by means of self-powered particulate matter lorawan sensor nodes, in 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc42020-20.pdf [38] g. acciari, m. caruso, m. fricano, a. imburgia, r. miceli, p. romano, g. schettino, f. viola, experimental investigation on different rainfall energy harvesting structures, in thirteenth international conference on ecological vehicles and renewable energies (ever), monte-carlo, monaco, 10-12 april 2018, pp. 15. doi: 10.1109/ever.2018.8362346 [39] p. löper, d. pysch, a. richter, m. hermle, s. janz, m. zacharias, s. glunz, analysis of the temperature dependence of the opencircuit voltage, energy procedia, no. 27, 2012, pp. 135-142. doi: 10.1016/j.egypro.2012.07.041 [40] p. peumans, a. yakimov, s. r. forrest, small molecular weight organic thin-film photodetectors and solar cells, journal of applied physics, no. 93(7), 2003, pp. 3693-3723. 
doi: 10.1063/1.1534621

Measurements for non-intrusive load monitoring through machine learning approaches

Acta IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, pp. 90-96

Giovanni Bucci¹, Fabrizio Ciancetta¹, Edoardo Fiorucci¹, Simone Mari¹, Andrea Fioravanti¹
¹ University of L'Aquila, Piazzale E.
Pontieri 1, 67100 L'Aquila, Italy

Section: Research paper
Keywords: non-intrusive load monitoring (NILM); energy management; deep learning (DL); sweep frequency response analysis (SFRA)
Citation: Giovanni Bucci, Fabrizio Ciancetta, Edoardo Fiorucci, Simone Mari, Andrea Fioravanti, Measurements for non-intrusive load monitoring through machine learning approaches, Acta IMEKO, vol. 10, no. 4, article 16, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-16
Section editors: Umberto Cesaro and Pasquale Arpaia, University of Naples Federico II, Italy
Received October 12, 2020; in final form December 6, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Simone Mari, e-mail: simone.mari@graduate.univaq.it

1. Introduction

Nowadays, economic development has led to a steady increase in the demand for electricity and related advanced services. Companies are therefore moving towards converting their traditional electrical systems into smart systems. One of the achievable advantages is an efficient use of energy through load scheduling and user awareness of consumption. To this end, information on the status and consumption of the various loads powered by the system is needed. This can be achieved through intrusive monitoring, i.e. by installing individual sensors for each load, or through non-intrusive monitoring, i.e. by measuring the total power absorbed by the system and deducing the contributions of the individual loads from it through specific algorithms. In this second case, an extremely simple and compact measurement system is obtained, at the expense of greater complexity from the processing point of view [1].
Abstract — The topic of non-intrusive load monitoring (NILM) has seen a significant increase in research interest over the past decade, which has led to a significant increase in the performance of these systems. Nowadays, NILM systems are used in numerous applications, in particular by energy companies that provide users with an advanced service for managing their consumption. These systems are mainly based on artificial intelligence algorithms that disaggregate energy by processing the absorbed power signal over more or less long time intervals (generally from fractions of an hour up to 24 h). Less attention has been paid to solutions allowing non-intrusive load monitoring in (almost) real time, that is, systems that make it possible to determine load variations within extremely short times (seconds or fractions of a second). This paper proposes possible approaches for non-intrusive load monitoring systems operating in real time, analysing them from the measurement point of view. The measurement and post-processing techniques used are illustrated and the results discussed. In addition, the work discusses the use of the results obtained to train machine learning algorithms that convert the measurement results into information useful for the user.

These non-intrusive monitoring systems, however, have proven effective in a wide range of applications that go beyond energy management alone [2]. Non-intrusive load monitoring systems (NILMs) are used successfully in many applications, including demand response programs, where consumers can generate profits based on their flexibility [3], [4]. Other applications are anomaly detection, to detect malfunctions based on the power profiles absorbed by individual loads [5], and condition-based maintenance, which has allowed the creation of monitoring systems capable of helping operators in maintenance planning [6], [7]. Finally, ambient assisted living is also very important: here the NILM system monitors the switching on and off of household appliances to infer the position and activities of people, detecting the spatio-temporal context and, therefore, the subject's activities of daily life [8]-[10]. The first NILM system was proposed by G. Hart in 1985 [11]. This algorithm was based on detecting the edges in the aggregate power profile, followed by a clustering operation and subsequent matching based on the value of the absorbed power and on the on and off times. Clearly this approach, while functional in certain situations, showed significant limitations, as a multi-state appliance had to be managed as a set of distinct on/off appliances. Conversely, continuously variable appliances and appliances with permanent consumption could not be detected correctly. It also imposed strong manual feature extraction requirements. Subsequently, other algorithms based on combinatorial optimisation were proposed [12], whose main assumption is that each load can be in one of a reduced number k of states, each associated with a different energy consumption. The goal of the algorithm is to assign states to household appliances in such a way as to minimise the difference between the aggregate power reading and the sum of the energy consumptions of the different loads. In the last decade, thanks to the increase in available computing power, attention has shifted to artificial intelligence algorithms, such as hidden-state Markov chains [13]-[15] and deep learning models [16]-[18].
in particular, the use of deep learning algorithms has overcome many of the limits that characterized previous methods, thus allowing measurement systems to adapt to homes never analysed during the training phase. furthermore, in terms of accuracy, systems based on convolutional neural networks have outperformed other state-of-the-art methods, such as those based on factorial hidden markov models [19]. however, the state of the art of these systems is represented almost exclusively by monitoring systems that process signals over time intervals of the order of hours; consequently, the resulting feedback is not in real time. instead, it is often necessary to know the status changes of the monitored loads in real time. non-intrusive load monitoring systems are therefore required that are capable of recognizing the different powered devices by processing the signal over intervals of seconds or fractions of a second. in this paper, two approaches for the recognition of electrical loads in real time are presented. the first is a passive measurement system, based on the acquisition and processing of the current absorbed by the system. the second is an active measurement system, based on the measurement of the response to a variable-frequency signal injected into the system.
2. nilm system based on passive measurements
a first attempt was made by creating a recognition system for electrical loads based on the analysis of the total absorbed current. a system of this type makes it possible to obtain a low-cost and galvanically isolated measuring system. in steady-state conditions, the absorbed current does not provide sufficient information to characterize a wide range of different loads, because the waveform of the current absorbed by domestic loads rarely has a significant harmonic content. therefore, the only considerations that can be made are based on the difference in amplitude.
it was therefore decided to characterize the loads on the basis of their transient characteristics. previous studies have tried to create nilm systems based on transient characteristics [20]-[23], but all have limited the analysis to a reduced number of loads and particular cases. in this study, on the other hand, tests were conducted on signals acquired from a test plant in which five commonly used household appliances were activated and deactivated; above all, the performance was also evaluated on the building-level fully-labeled public dataset for electricity disaggregation (blued) [24]. first, the rms value of the current is calculated by processing the acquired raw current with a sliding-window technique, as follows:

\[ I_{\mathrm{rms}}(k) = \sqrt{\frac{1}{N} \sum_{n=k}^{k+(N-1)} i(n)^{2}} \, , \]  (1)

where \(k\) is the \(k\)-th measured current sample, \(N\) is the number of samples per cycle, \(i(n)\) is the sampled signal, and \(n\) is the summation index. the rms value is then differentiated; the resulting signal \(I'_{\mathrm{rms}}(k)\) is an impulsive signal in which each pulse corresponds to a change in state of one of the powered loads. an example is shown in figure 1.

\[ I'_{\mathrm{rms}}(k) = I_{\mathrm{rms}}(k) - I_{\mathrm{rms}}(k-1) \]  (2)

figure 1. variation with time of the rms current I_rms(k) (top) and its derivative I'_rms(k) (bottom).

the position of the pulse in the differentiated signal identifies the moment in which a certain event occurred. in this way, the information relating to the steady state is filtered out and only the information relating to the transients is kept. this impulsive signal is then processed by the short-time fourier transform (stft), through the following known transformation [25], [26]:

\[ \mathit{STFT}(m,\omega) = \sum_{n=-\infty}^{\infty} I'_{\mathrm{rms}}(n)\, w(n-m)\, \mathrm{e}^{-\mathrm{j}\omega n} \]  (3)

each change in the states of the powered loads is characterized and discriminated on the basis of the spectral content of the differentiated signal. the current is processed cyclically
at 1 second acquisition intervals, following the described procedure. each acquisition slot is processed (to calculate the rms value and its derivative) by adopting an overlap of 500 ms, to ensure the correct analysis of transient events, which can be fragmented between two successive slots. the stft is implemented by processing 10-cycle (200 ms) windows with an overlap of 4/5 of the processing window. this results in a spectrogram with 101 frequency points at 26 different time instants. to take into account the sign of the change (switching on, switching off or passing to a different consumption state), the spectrogram is multiplied by the sign of the cumulative sum, evaluated on the rms signal as follows:

\[ S_N = \sum_{n=1}^{N} \left( I_{\mathrm{rms}}(n) - I_{\mathrm{rms}}(n-1) \right) \]  (4)

where \(I_{\mathrm{rms}}(n)\) is the rms value of the current described in (1), \(N\) is the number of samples, and \(S_N\) is the value of the cumulative sum. the final signal \(S(i,j)\) can be obtained in the form of a 101 × 26 matrix, as follows:

\[ S(i,j) = \mathit{STFT}(m,\omega) \cdot \operatorname{sgn}(S_N) = \sum_{n=-\infty}^{\infty} I'_{\mathrm{rms}}(n)\, w(n-m)\, \mathrm{e}^{-\mathrm{j}\omega n} \cdot \operatorname{sgn}\!\left( \sum_{n=1}^{N} \left( I_{\mathrm{rms}}(n) - I_{\mathrm{rms}}(n-1) \right) \right) \]  (5)

an example of the spectrogram obtained from this procedure is shown in figure 2. this spectrogram is used as the input to a neural network, which provides a response every 500 ms, indicating the presence or absence of events in the signal and the type of device involved.
2.1. the adopted artificial neural network
the deduction of the loads, starting from the spectrogram described above, is traced back to a multiclass classification problem, i.e. a single unique label must be associated with each spectrogram. analysing the current using such a small sliding window (1 second, with 0.5-second overlap) makes it possible to assume that within a single window there is no change of state of more than one load.
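the measurement front end of (1)-(5) can be sketched in numpy as below. this is an illustrative sketch, not the authors' code: the window length, hop and fft size are assumptions chosen so that one fft frame yields 101 frequency points, and the test signal is a synthetic switch-on step.

```python
import numpy as np

def sliding_rms(i, n):
    """eq. (1): rms over a sliding window of n samples (one mains cycle)."""
    sq = np.convolve(i ** 2, np.ones(n), mode="valid") / n
    return np.sqrt(sq)

def signed_spectrogram(i_rms, win_len=200, hop=40, n_fft=200):
    """eqs. (2)-(5): differentiate the rms signal, take a windowed stft,
    and multiply by the sign of the cumulative sum over the slot."""
    d = np.diff(i_rms)                       # eq. (2): impulsive signal
    s_n = np.sum(d)                          # eq. (4): cumulative sum
    w = np.hanning(win_len)                  # analysis window w(n - m)
    frames = [d[m:m + win_len] * w
              for m in range(0, len(d) - win_len + 1, hop)]
    stft = np.stack([np.fft.rfft(f, n_fft) for f in frames], axis=1)
    return np.abs(stft) * np.sign(s_n)       # eq. (5): signed spectrogram

# synthetic example: 50 hz current sampled at 10 khz, amplitude step at 0.5 s
fs, f0 = 10_000, 50
t = np.arange(0, 1.0, 1 / fs)
amp = np.where(t < 0.5, 1.0, 3.0)            # a load switches on
current = amp * np.sin(2 * np.pi * f0 * t)
i_rms = sliding_rms(current, fs // f0)       # n = 200 samples per cycle
spec = signed_spectrogram(i_rms)
```

with n_fft = 200, np.fft.rfft returns 101 frequency bins per frame, matching the 101 frequency points mentioned above; the number of time instants depends on the slot length and hop, which are assumed here.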
artificial neural networks (anns) are an example of algorithms that natively support multiclass classification problems. in this work, a particular ann type, namely the convolutional neural network (cnn), is adopted [27] because of its capability of processing complex inputs such as multidimensional arrays. more specifically, cnns are designed to exploit the intrinsic properties of some two-dimensional data structures, in which there is a correlation between spatially close elements (local connectivity). the proposed system [28] includes different layers: an input layer (for signal loading), three groups of layers, each consisting of convolution, relu, and max pooling layers (for feature extraction from the input), and a group of flatten, fully connected, and softmax layers, which uses the data from the convolution layers to generate the output.
2.2. the proposed system setup
the proposed measurement system uses an agilent u2542a data acquisition module with a 16-bit resolution. the current signal was acquired using a ta sct-013 current transducer, and the sampling frequency was set to 10 khz. the cnn was implemented on a desktop computer (based on the windows 10 x64 operating system) using the open-source python 3.7 distribution from anaconda. tests were conducted on signals acquired directly from a real system, in order to have flexibility both in the sampling frequency and in the generation of multiple events. other tests were conducted on signals belonging to the public blued dataset, which features 34 different types of devices. the proposed measurement system was installed on a test system, designed to generate the electrical loads produced by domestic users, as part of the research project "non-intrusive infrastructures for monitoring loads in residential users". the system, which is located in the electrical engineering laboratory of the university of l'aquila (italy), allows the generation of electrical loads individually or simultaneously.
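the layer stack of section 2.1 (three convolution + relu + max-pooling groups, then flatten, fully connected and softmax) can be illustrated with a pure-numpy forward pass. the 3 × 3 kernel, the single filter per stage, the random weights and the six output classes are illustrative assumptions, not the architecture of [28].

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """valid 2-d convolution of a single-channel map x with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling (odd borders are truncated)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# forward pass on a 101 x 26 spectrogram, the input size of the proposed system
x = rng.normal(size=(101, 26))
for _ in range(3):                           # three conv / relu / pool groups
    x = maxpool2(relu(conv2d(x, rng.normal(size=(3, 3)))))
w_fc = rng.normal(size=(6, x.size))          # fully connected layer, 6 classes
probs = softmax(w_fc @ x.flatten())          # class probabilities
```

the softmax output assigns one probability per class, consistent with the single-label (multiclass) formulation adopted for the passive system.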
the loads generated in the plant correspond to those of the most common household appliances and are integrated in a structure similar to that of a residential building, to reproduce the real signal conditioning and measurement problems.
2.3. the obtained results
the performance of the nilm system was assessed by conducting acquisitions during which various loads were turned on and off, for a total of over 519 events. next, blued, a public dataset on residential electricity usage, was used. this dataset includes voltage and current measurements for a single-family house in the united states, sampled at 12 khz for an entire week. regarding nilm systems, no standard and consolidated techniques can be found in the literature to evaluate the performance of event detectors. since the purpose of a nilm system is to disaggregate the consumption of each of the devices in question, the performance was analysed to verify the achievement of these objectives, which in summary are the correct identification and classification of the events. these parameters were obtained using the numbers of true positives (tp), false positives (fp), true negatives (tn), and false negatives (fn). in addition, the accuracy was assessed as follows:

\[ \mathit{Accuracy}\,\% = \frac{\mathit{correct\ matches}}{\mathit{total\ possible\ matches}} \cdot 100\,\% \]  (6)

the obtained results are summarized in table 1.

figure 2. spectrogram obtained during the switch-on of a microwave oven.

3. nilm system based on active measurements
subsequently, we tried to recognize the loads powered by an electrical system through sweep frequency response analysis (sfra). sfra is a non-destructive diagnostic technique that detects the displacement and deformation of windings, among other mechanical and electrical failures, in power and distribution transformers.
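the idea behind the active approach, detecting a load change as a change in a measured transfer function, can be sketched numerically. the first-order rc network below is only a stand-in for the real bipole, and the 320-point logarithmic sweep from 10 khz to 1.5 mhz mirrors the instrument described in the following; both are illustrative assumptions.

```python
import numpy as np

def sfra_trace(tf, freqs):
    """record the magnitude of the transfer function, in db, over the sweep."""
    return 20.0 * np.log10(np.abs(tf(freqs)))

# stand-in bipole: first-order rc low-pass with cutoff frequency fc
fc = 100e3
rc_tf = lambda f: 1.0 / (1.0 + 1j * f / fc)

# 320-point logarithmic sweep from 10 khz to 1.5 mhz
freqs = np.logspace(np.log10(10e3), np.log10(1.5e6), 320)
trace = sfra_trace(rc_tf, freqs)

# a different set of powered loads -> different bipole -> different trace
rc_tf2 = lambda f: 1.0 / (1.0 + 1j * f / (2 * fc))
delta = sfra_trace(rc_tf2, freqs) - trace    # the deviation reveals the change
```

comparing a newly measured trace with previously stored signatures is exactly the task handed to the classifiers of section 3.1.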
sfra proceeds by applying a sinusoidal voltage signal of constant amplitude and variable frequency between one terminal of the bipole under test and ground. the response is measured between the other terminal of the bipole and ground. both the input and output signals are acquired and processed. the obtained result is the transfer function (tf) of the bipole over a wide frequency range. a failure is detected when a change in the tf is observed. the possibility of using these traces to identify which devices are powered at the time of measurement was evaluated [29]. the basic idea is to detect a change in the load, starting from the change in the measured tf. for this purpose, a variable-frequency sinusoidal signal is applied between the terminal of the power phase conductor and ground, by means of the instrumentation shown in figure 3; then both the applied input signal and the output signal between the neutral conductor terminal and ground are measured and processed. the test instrument generates a sinusoidal input signal of constant amplitude (a few volts) and of frequency variable in the range between 10 khz and 1.5 mhz. the results obtained on the electrical system are analysed on a temporal basis, comparing them with those previously obtained on the same system. the measurement techniques follow the iec 60076-18 standard [30], which regulates the test execution methods, the characteristics of the instruments used, the connection methods and the analysis of the results. figure 4 shows the signatures obtained for the different types of loads. tests were also conducted in order to evaluate the ability to discriminate, through these traces, different loads when they are powered simultaneously. from figure 5 it is possible to see that these traces allow discriminating whether the heater is powered individually or in combination with other loads. 3.1.
the machine learning approaches
in order to translate these traces into information useful for the users, the problem was formulated as a multi-label classification problem. this is a variant of the classification problem in which multiple labels (or multiple classes) may be assigned to each instance. multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes an instance can be assigned to. the problem was initially addressed with an ann [31] similar to the one described in the previous section, with good results. however, a limitation of anns is the large amount of training data required, which makes it difficult to apply them in real cases. an attempt was therefore made to use another machine learning algorithm, the support vector machine (svm). the svm is one of the most popular artificial intelligence algorithms and is a supervised learning algorithm used primarily for solving classification problems. unlike generic classification algorithms that discriminate on the basis of characteristics common to each class, the svm focuses on the samples that are most similar to each other but belong to different classes, which are therefore the most difficult samples to discriminate. on the basis of these samples, the algorithm constructs an optimal hyperplane capable of separating them, which can then be used to discriminate the new samples.

table 1. scores achieved with the acquired signal and the blued dataset.

             acquired signal   blued dataset
precision    0.981             0.998
recall       0.998             0.998
f1-score     0.989             0.998
accuracy %   98.0 %            87.9 %

figure 3. instrumentation used for the sfra.
figure 4. sfra tests of different household appliances.
figure 5. sfra tests with simultaneous loads powered.

these samples are
called support vectors, because they are the only samples that support the creation of the model, while all the other samples play no role. in a two-dimensional case, where the examples to be classified are defined by only two characteristics, the optimal hyperplane reduces to a straight line, as shown in figure 6. the algorithm searches for the line that maximizes the margin between the examples indicated as support vectors. if it is not possible to separate the classes with a straight line, as in nonlinear classification problems, the algorithm uses the kernel trick [32]. in particular, a polynomial kernel was chosen for this work, thus examining not only the given characteristics of the input samples to determine their similarity, but also their combinations. in the case of the proposed nilm system, the problem is obviously not two-dimensional. in fact, the sfra measuring system returns an array of 320 points, which represents the transfer function at the different frequency values. therefore, the input of the svm has 320 elements, and consequently the dimensionality of the problem is also 320. to solve the problem, i.e. to identify which devices are powered starting from the result of the sfra measurement, four svm classifiers are used, each of which performs a binary classification, identifying the presence or absence of the device associated with it.
3.2. the obtained results
unlike the nilm system based on passive measurements, in this case there is no public dataset with the necessary characteristics, i.e. there is no public dataset of measurements obtained through the sfra technique. therefore, the performance evaluation was made solely on the basis of our acquired measurements. as for the previously described system, the same test parameters were also used in this case.
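the classifier bank of section 3.1 (four binary classifiers, one per appliance, each fed the 320-point trace) can be sketched as below. a minimal linear svm trained by pegasos-style sub-gradient descent stands in for the polynomial-kernel svm actually used, and the "traces" are synthetic; everything here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(x, y, lam=0.01, epochs=50):
    """minimal pegasos-style linear svm; labels y must be in {-1, +1}."""
    w, b, t = np.zeros(x.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)                # decaying step size
            if y[i] * (x[i] @ w + b) < 1.0:      # hinge loss is active
                w = (1.0 - eta * lam) * w + eta * y[i] * x[i]
                b += eta * y[i]
            else:
                w = (1.0 - eta * lam) * w
    return w, b

def make_traces(n, on, lo, hi):
    """synthetic 320-point traces; a powered appliance shifts bins lo:hi."""
    x = rng.normal(0.0, 0.1, size=(n, 320))
    if on:
        x[:, lo:hi] += 2.0                       # appliance signature
    return x

# one binary svm per appliance; here a single detector is trained
x = np.vstack([make_traces(40, True, 50, 80), make_traces(40, False, 50, 80)])
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(x, y)
pred = np.sign(x @ w + b)
```

in the proposed system, four such detectors run in parallel on the same trace, and the set of positive outputs is the multi-label answer.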
the proposed algorithm was subjected to different scenarios, each for a certain number of tests, in which the different appliances were powered individually or simultaneously. since, as already explained above, each appliance has an associated svm algorithm that reveals its presence or absence, the performance of the four algorithms was assessed individually. to allow a comparison with the algorithms developed by other researchers, the precision, recall, f1-score and accuracy during classification were evaluated [33]. the results are shown in table 2. as far as the ann is concerned, the evaluation measures for a multiclass, hence single-label, classification problem are generally different from those for the multi-label case. in single-label classification we can use simple metrics such as precision, recall, and accuracy [34]. however, in multi-label classification an incorrect classification is no longer an outright error, as a prediction containing a subset of the actual classes is certainly better than one containing none of them; i.e., correctly predicting two of the four labels is better than predicting no labels at all. to evaluate the performance of a multi-label classifier we have to average over the classes. there are two different ways of doing this, called micro-averaging and macro-averaging [35]. the macro-average computes the metric independently for each class and then takes the average, hence treating all classes equally, whereas the micro-average aggregates the contributions of all classes to compute the average metric. in a multi-label classification setup, the micro-average is preferable if there is a suspicion of a class imbalance (i.e. the possibility of having many more examples of one class than of the other classes).
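the two averaging schemes can be computed directly from the per-class counts; the sketch below uses the (tp, fp, fn) counts reported in table 2 for the four appliance detectors as an illustration.

```python
def micro_macro(counts):
    """counts: list of (tp, fp, fn) tuples, one per class.
    returns (precision, recall, f1) for micro- and macro-averaging."""
    def f1(p, r):
        return 2 * p * r / (p + r)
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    p_micro = tp / (tp + fp)                          # eq. (7)
    r_micro = tp / (tp + fn)                          # eq. (8)
    p_cls = [c[0] / (c[0] + c[1]) for c in counts]    # per-class precision
    r_cls = [c[0] / (c[0] + c[2]) for c in counts]    # per-class recall
    p_macro = sum(p_cls) / len(counts)                # eq. (10)
    r_macro = sum(r_cls) / len(counts)                # eq. (11)
    return ((p_micro, r_micro, f1(p_micro, r_micro)),   # with eq. (9)
            (p_macro, r_macro, f1(p_macro, r_macro)))   # with eq. (12)

# (tp, fp, fn) for the lamp, hairdryer, induction hob and heater svms (table 2)
micro, macro = micro_macro([(42, 0, 8), (200, 0, 0), (200, 0, 0), (200, 2, 0)])
```

with these counts the macro recall comes out at 0.96, in line with the svm column of table 3; the other entries agree with the published values to within rounding.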
in the cases under examination, this problem does not exist, as the examples used for training and testing are sufficiently uniform, so both the micro-average and the macro-average can be considered reliable.

\[ \mathit{Precision}_{\text{micro-averaging}} = \frac{\sum_{n=1}^{N} TP_n}{\sum_{n=1}^{N} \left( TP_n + FP_n \right)} \]  (7)

\[ \mathit{Recall}_{\text{micro-averaging}} = \frac{\sum_{n=1}^{N} TP_n}{\sum_{n=1}^{N} \left( TP_n + FN_n \right)} \]  (8)

\[ F1\text{-}score_{\text{micro-averaging}} = \frac{2 \times \mathit{Precision}_{\text{micro-averaging}} \times \mathit{Recall}_{\text{micro-averaging}}}{\mathit{Precision}_{\text{micro-averaging}} + \mathit{Recall}_{\text{micro-averaging}}} \]  (9)

\[ \mathit{Precision}_{\text{macro-averaging}} = \frac{\sum_{n=1}^{N} \mathit{Precision}_n}{N} \]  (10)

\[ \mathit{Recall}_{\text{macro-averaging}} = \frac{\sum_{n=1}^{N} \mathit{Recall}_n}{N} \]  (11)

\[ F1\text{-}score_{\text{macro-averaging}} = \frac{2 \times \mathit{Precision}_{\text{macro-averaging}} \times \mathit{Recall}_{\text{macro-averaging}}}{\mathit{Precision}_{\text{macro-averaging}} + \mathit{Recall}_{\text{macro-averaging}}} \]  (12)

figure 6. representation of a linear classification problem (top) and a nonlinear classification problem (bottom), in which the samples are defined by only two features.

table 2. achieved scores with the svm.

             svm lamp   svm hairdryer   svm induction hob   svm heater
tp           42         200             200                 200
fp           0          0               0                   2
tn           400        250             250                 248
fn           8          0               0                   0
precision    1          1               1                   0.99
recall       0.84       1               1                   1
f1-score     0.91       1               1                   0.99

table 3 shows the scores achieved according to the two criteria, both with the ann and with the svm.
4. conclusions and final remarks
in this paper a brief introduction to the state of the art of nilm systems has been presented. two different types of systems for the real-time identification of electrical loads, based on different measurement techniques, were then presented. both systems demonstrated excellent identification performance. more in detail, the first system, based on the spectrogram analysis of the rms current through the cnn, demonstrated excellent performance on both the acquired measurements and those available in the blued dataset, reaching f1-scores of 0.989 and 0.998 and accuracies of 98.0 % and 87.9 %, respectively.
the greatest difficulty encountered in the classification phase with the blued dataset is attributable to the significantly greater number of devices that the network is required to recognize, compared to those used for the acquired measurements. furthermore, the value obtained for the f1-score is higher than that obtained with other systems using the same dataset, such as those proposed in [36] (0.915) and [37] (0.932). traditional nilm systems perform the classification of the loads based on the analysis of quantities also related to the voltage (e.g. analysis in the p-q or v-i plane [38]). the proposed system has the advantage of measuring only the overall current in a house. as a result, the complexity of the processing system is reduced. another advantage is that the measuring system can be implemented as a low-cost, galvanically isolated system, using a clamp current transducer. the second proposed system, based on the analysis of the traces provided by the sfra, also showed excellent performance. the traces were initially processed through an artificial neural network similar to that used for the previous system, reaching an f1-score of 0.96. in order to reduce the number of training examples needed, it was decided to use a support vector machine. despite a significant reduction in the examples needed for training (from over 2000 to 90), the f1-score achieved with this second machine learning structure was even higher than that obtained with the artificial neural network. a system of this type is particularly interesting as it allows the creation of a plug-in solution that can be installed in any domestic, industrial or commercial environment. furthermore, the detection technique takes into account the physical characteristics of the household appliances and the resulting transfer function. consequently, the identification of multi-state or continuously variable appliances is simplified, compared to processing time-varying signals such as real power, current, etc.
references
[1] g.
bucci, f. ciancetta, e. fiorucci, s. mari, load identification system for residential applications based on the nilm technique, 2020 ieee international instrumentation and measurement technology conference (i2mtc), 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128599 [2] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, state of art overview of non-intrusive load monitoring applications in smart grids, measurement: sensors 18 (2021), art. no. 100145. doi: 10.1016/j.measen.2021.100145 [3] a. lucas, l. jansen, n. andreadou, e. kotsakis, m. masera, load flexibility forecast for dr using non-intrusive load monitoring in the residential sector, energies 12(14) (2019), art. no. 2725. doi: 10.3390/en12142725 [4] w. schneider, f. campello de souza, non-intrusive load monitoring for smart grids, technical report, dell emc, 2018. [5] h. rashid, p. singh, v. stankovic, l. stankovic, can non-intrusive load monitoring be used for identifying an appliance's anomalous behaviour?, applied energy 238 (2019), pp. 796-805. doi: 10.1016/j.apenergy.2019.01.061 [6] d. green, t. kane, s. kidwell, p. lindahl, j. donnal, s. leeb, nilm dashboard: actionable feedback for condition-based maintenance, ieee instrumentation & measurement magazine 23(5) (2020), pp. 3-10. doi: 10.1109/mim.2020.9153467 [7] a. aboulian et al., nilm dashboard: a power system monitor for electromechanical equipment diagnostics, ieee transactions on industrial informatics 15(3) (2019), pp. 1405-1414. doi: 10.1109/tii.2018.2843770 [8] c. belley, s. gaboury, b. bouchard, a. bouzouane, an efficient and inexpensive method for activity recognition within a smart home based on load signatures of appliances, pervasive and mobile computing 12 (2014), pp. 58-78. doi: 10.1016/j.pmcj.2013.02.002 [9] n. noury, m. berenguer, h. teyssier, m. bouzid, m.
giordani, building an index of activity of inhabitants from their activity on the residential electrical power line, ieee transactions on information technology in biomedicine 15(5) (2011), pp. 758-766. doi: 10.1109/titb.2011.2138149 [10] x. zhang, t. kato, t. matsuyama, learning a context-aware personal model of appliance usage patterns in smart home, 2014 ieee innovative smart grid technologies asia (isgt asia), kuala lumpur, malaysia, 20-23 may 2014, pp. 73-78. doi: 10.1109/isgt-asia.2014.6873767 [11] g. hart, prototype nonintrusive appliance load monitor, mit energy laboratory technical report and electric power research institute technical report, 1985. [12] g. w. hart, nonintrusive appliance load monitoring, proc. ieee 80(12) (dec. 1992), pp. 1870-1891. doi: 10.1109/5.192069 [13] j. z. kolter, m. j. johnson, redd: a public data set for energy disaggregation research, 2011. [14] o. parson, s. ghosh, m. weal, a. rogers, non-intrusive load monitoring using prior models of general appliance types, 2012 twenty-sixth aaai conference on artificial intelligence, toronto, canada, 22-26 july 2012. [15] m. zhong, n. goddard, c. sutton, signal aggregate constraints in additive factorial hmms, with application to energy disaggregation, in advances in neural information processing systems 27, z. ghahramani, m. welling, c. cortes, n. d. lawrence, k. q. weinberger (eds.), curran associates, inc., 2014, pp. 3590-3598. [16] j. kelly, w. knottenbelt, neural nilm: deep neural networks applied to energy disaggregation, 2nd acm international conference on embedded systems for energy-efficient built environments (buildsys '15), association for computing machinery, new york, ny, usa, pp. 55-64. doi: 10.1145/2821650.2821672 [17] z. jia, l. yang, z. zhang, h. liu, f.
kong, sequence to point learning based on bidirectional dilated residual network for non-intrusive load monitoring, international journal of electrical power & energy systems 129 (2021), art. no. 106837. doi: 10.1016/j.ijepes.2021.106837

table 3. comparison between svm and ann.

             svm                                 ann
             micro-averaging  macro-averaging    micro-averaging  macro-averaging
precision    0.99             0.99               0.94             0.91
recall       0.98             0.96               0.99             0.99
f1-score     0.98             0.97               0.96             0.95

[18] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, multi-state appliances identification through a nilm system based on convolutional neural network, 2021 ieee instrumentation and measurement technology conference i2mtc 2021, 17-21 may 2021. doi: 10.1109/i2mtc50364.2021.9460038 [19] c. zhang, m. zhong, z. wang, n. goddard, c. sutton, sequence-to-point learning with neural networks for non-intrusive load monitoring, aaai conf. artif. intell., new orleans, la, usa, feb. 2018, pp. 2604-2611. [20] h. chang, k. chen, y. tsai, w. lee, a new measurement method for power signatures of nonintrusive demand monitoring and load identification, ieee transactions on industry applications 48(2) (2012), pp. 764-771. doi: 10.1109/tia.2011.2180497 [21] y. lin, m. tsai, development of an improved time-frequency analysis-based nonintrusive load monitor for load demand identification, ieee transactions on instrumentation and measurement 63(6) (2014), pp. 1470-1483. doi: 10.1109/tim.2013.2289700 [22] h. chang, k. lian, y.
su, w. lee, power-spectrum-based wavelet transform for nonintrusive demand monitoring and load identification, ieee transactions on industry applications 50(3) (2014), pp. 2081-2089. doi: 10.1109/tia.2013.2283318 [23] s. b. leeb, s. r. shaw, j. l. kirtley, transient event detection in spectral envelope estimates for nonintrusive load monitoring, ieee transactions on power delivery 10(3) (1995), pp. 1200-1210. doi: 10.1109/61.400897 [24] k. anderson, a. ocneanu, d. benitez, d. carlson, a. rowe, m. bergés, blued: a fully labeled public dataset for event-based non-intrusive load monitoring research, 2nd kdd workshop on data mining applications in sustainability (sustkdd), beijing, china, aug. 2012, pp. 1-5. [25] w. yuegang, j. shao, x. hongtao, non-stationary signals processing based on stft, 8th international conference on electronic measurement and instruments, xi'an, 2007, pp. 3-301–3-304. doi: 10.1109/icemi.2007.4350914 [26] s. zhang, d. yu, s. sheng, a discrete stft processor for real-time spectrum analysis, apccas 2006, ieee asia pacific conference on circuits and systems, singapore, 4-7 dec. 2006, pp. 1943-1946. doi: 10.1109/apccas.2006.342241 [27] s. albawi, t. a. mohammed, s. al-zawi, understanding of a convolutional neural network, international conference on engineering and technology (icet), antalya, turkey, 21-23 aug. 2017, pp. 1-6. doi: 10.1109/icengtechnol.2017.8308186 [28] f. ciancetta, g. bucci, e. fiorucci, s. mari, a. fioravanti, a new convolutional neural network-based system for nilm applications, ieee transactions on instrumentation and measurement 70 (2021), art. no. 1501112. doi: 10.1109/tim.2020.3035193 [29] a. fioravanti, a. prudenzi, g. bucci, e. fiorucci, f. ciancetta, s. mari, non-intrusive electrical load identification through an online sfra based approach, 2020 international symposium on power electronics, electrical drives, automation and motion (speedam), sorrento, italy, 24-26 june 2020, pp. 694-698.
doi: 10.1109/speedam48782.2020.9161856 [30] iec 60076-18:2012, power transformers, part 18: measurement of frequency response. [31] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, deep learning applied to sfra results: a preliminary study, 7th international conference on computing and artificial intelligence iccai 2021, tianjin, china, 23-26 april 2021, pp. 302-307. doi: 10.1145/3467707.3467753 [32] t. hofmann et al., kernel methods in machine learning, ann. statist. 36(3) (2008), pp. 1171-1220. doi: 10.1214/009053607000000677 [33] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, a non-intrusive load identification system based on frequency response analysis, 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), 7-9 june 2021, pp. 254-258. doi: 10.1109/metroind4.0iot51437.2021.9488472 [34] s. makonin, f. popowich, nonintrusive load monitoring (nilm) performance evaluation, energy efficiency 8(4) (2014), pp. 809-814. doi: 10.1007/s12053-014-9306-2 [35] o. koyejo, n. natarajan, p. k. ravikumar, i. s. dhillon, consistent multilabel classification, in proc. nips, 2015, pp. 3321-3329. [36] m. a. peng, h. lee, energy disaggregation of overlapping home appliance consumptions using a cluster splitting approach, sustainable cities and society 43 (2018), pp. 487-494. doi: 10.1016/j.scs.2018.08.020 [37] k. jain, s. s. ahmed, p. sundaramoorthy, r. thiruvengadam, v. vijayaraghavan, current peak based device classification in nilm on a low-cost embedded platform using extra-trees, ieee mit undergraduate research technology conference (urtc), cambridge, ma, -5 nov. 2017, pp. 1-4. doi: 10.1109/urtc.2017.8284200 [38] t. hassan, f. javed, n. arshad, an empirical investigation of v-i trajectory-based load signatures for non-intrusive load monitoring, ieee transactions on smart grid 5(2) (2014), pp. 870-878.
doi: 10.1109/pesgm.2014.6938824 https://doi.org/10.1016/j.ijepes.2021.106837 https://doi.org/10.1109/i2mtc50364.2021.9460038 https://doi.org/10.1109/tia.2011.2180497 https://doi.org/10.1109/tim.2013.2289700 https://doi.org/10.1109/tia.2013.2283318 https://doi.org/10.1109/61.400897 https://doi.org/10.1109/icemi.2007.4350914 https://doi.org/10.1109/apccas.2006.342241 https://doi.org/10.1109/icengtechnol.2017.8308186 https://doi.org/10.1109/tim.2020.3035193 https://doi.org/10.1109/speedam48782.2020.9161856 https://doi.org/10.1145/3467707.3467753 https://doi.org/10.1214/009053607000000677 https://doi.org/10.1109/metroind4.0iot51437.2021.9488472 https://doi.org/10.1007/s12053-014-9306-2 https://doi.org/10.1016/j.schres.2018.08.020 https://doi.org/10.1109/urtc.2017.8284200 https://doi.org/10.1109/pesgm.2014.6938824 a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1 6 acta imeko | www.imeko.org june 2022| volume 11 | number 2 | 1 a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement yamini nagaratnam1, sudanthiraveeran rooban1 1 department of ece, koneru lakshmaiah education foundation, green fields, vaddeswaram,522502, guntur, ap, india section: research paper keywords: approximate multiplier; hardware computation; mean absolute relative error; truncation-based multiplier; rounding operation; absolute error citation: yamini nagaratnam, sudanthiraveeran rooban, a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement, acta imeko, vol. 11, no. 
2, article 37, june 2022, identifier: imeko-acta-11 (2022)-02-37

section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india

received february 7, 2022; in final form may 11, 2022; published june 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: sudanthiraveeran rooban, e-mail: sroban123@gmail.com

1. introduction

in recent digital signal processing applications, multipliers with low area and low power consumption are a priority. approximate multipliers are an effective means of reducing area, delay, power, error and energy consumption; owing to these properties, approximate computing has become a prominent trend in digital design [1]. because of their high speed, fault tolerance, and power efficiency, the demand for efficient approximate multipliers is growing. approximate computing encompasses a number of application domains, including data mining and multimedia processing [2]. multipliers are critical components in applications such as digital signal processing, microprocessors, and embedded systems, where they accomplish operations such as filtering and neural-network convolution. these multipliers are made of complicated logic blocks, and the energy consumed increases as the circuit size grows. because the multiplier is a fundamental component of arithmetic units, the design of approximate multipliers has been a research topic for many years [3]. an approximate multiplier is made up of a few basic blocks, and the approximation technique can be applied at any one of several phases [3].
when it comes to approximation techniques, truncation of partial products is one of the most effective methods, since the resulting error can be reduced by correction functions [4]. approximate multipliers with error measurement exist for various operand sizes. to overcome the problems of high latency and energy consumption, this work introduces a scalable approximate multiplier based on a truncation and rounding technique, which minimises the number of partial products based on the leading-one bit position [5]. the proposed approximate multiplier is scalable to different bit-lengths.

abstract

multiplication necessitates considerable hardware resources and processing time. in a scalable approximate multiplier, the truncation and rounding technique reduces the number of logic gates in the partial products with the help of a leading-one-bit architecture. the truncation and rounding based scalable approximate multiplier (tosam) has several modes of error measurement, defined by the rounding height (h) and truncation (t) parameters and named (h, t): tosam(0,2), tosam(0,3), tosam(1,5), tosam(2,6), tosam(3,7), tosam(4,8), and tosam(5,9). multiplication has a substantial impact on metrics such as power dissipation, speed, and area. a modified approximate absolute unit is proposed to enhance the performance of the existing approximate multiplier. the existing 16-bit (3,7) multiplier shows a measured error of 0.4 %. the proposed 16-bit multiplier for the same (3,7) measurement achieves a measured error of 0.01 %, a mean relative error of 0.3 %, a mean absolute relative error of 1.05 %, a normalized error distance of 0.0027, a variance of absolute error of 0.52, a delay of 1.87 ns, a power of 0.23 mw, and an energy of 0.4 pj. the proposed multiplier can be applied in image processing. the work is designed in verilog hdl, simulated in modelsim, and synthesized in vivado.

2. generalised approximate computing

approximate computing can be applied at multiple architecture layers, in software, and in circuits [6]; the study of approximate computing is also applied in deep learning. arithmetic computation is built from addition (in some cases termed accumulation) and multiplication, for applications such as dsp and machine learning. to achieve power and latency savings, many approximate adders have been developed; current approximate adder designs combine speculative adders with non-speculative transistor-level complete adders. the four basic parts of an approximate multiplier are:
- approximation of operands
- approximation of partial product generation
- approximation of the partial product tree
- approximation of compressors

2.1. approximation of operands

mitchell proposed the concept of a logarithmic multiplier (lm), which uses estimated operands to perform multiplication. the lm performs the operation by converting the operands to approximate logarithmic numbers using shifting and addition operations. using precise piecewise linear approximation [7] and an iterative methodology, the accuracy of contemporary logarithmic multiplier designs has been increased. the error-tolerant multiplier (etm) and the dynamic range unbiased multiplier (drum) [8] are further enhancements that use estimated operands.
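as an illustration of the logarithmic-operand idea described above, a minimal behavioral sketch of mitchell's scheme is given below; floating-point arithmetic stands in here for the shift-and-add hardware, and the function name is ours, not taken from the cited designs:

```python
def mitchell_mul(a, b):
    """mitchell's logarithmic multiplication for positive integers:
    log2(x) is approximated by k + f, where k is the leading-one
    position of x and f the remaining bits read as a fraction."""
    ka, kb = a.bit_length() - 1, b.bit_length() - 1
    fa = a / (1 << ka) - 1.0          # fractional part, 0 <= fa < 1
    fb = b / (1 << kb) - 1.0
    log_sum = ka + kb + fa + fb       # approximate log2 of the product
    k = int(log_sum)                  # antilogarithm: 2^k * (1 + fraction)
    return int((1 << k) * (1.0 + (log_sum - k)))
```

since the scheme always underestimates (e.g. mitchell_mul(3, 3) returns 8 instead of 9), mitchell-based designs add correction terms, which is the improvement direction mentioned above.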
the etm [9] uses a technique known as multiplier partitioning, which divides a multiplier into an accurate multiplication part and a non-multiplication part. the least significant bits (lsbs) are handled by the non-multiplication part, while the most significant bits (msbs) undergo proper multiplication.

2.2. approximation of partial product generation

to obtain the final product, the partial products must first be generated and then compressed. the underdesigned multiplier (udm) was derived by substituting one entry of the karnaugh map of a 2 × 2 multiplier; these approximate 2 × 2 multipliers are used as fundamental units of larger multipliers, yielding approximate partial products that are accumulated by an exact adder tree. during the partial product accumulation stage, a generalised udm design that additionally uses carry-in prediction has been examined [10]. a study on approximate booth encoders [11] introduced two efficient radix-4 approximate booth encoders.

2.3. approximation of the partial product tree

in general, the truncation approach is applied to the partial product tree. the fixed-width multiplier keeps the least significant partial products unchanged, whereas the inexact array multiplier omits some of the least significant columns, treating them as constant partial product columns. among the reduction and rounding strategies, the truncated multiplier that employs a correction constant is preferred; variable correction is required for truncated multipliers to avoid excessive errors.

2.4. approximation of compressors

compressors are commonly employed in the construction of high-speed multipliers [12] to accelerate the accumulation of partial products (pp). several error compensation algorithms for fixed-width booth multipliers [13] have recently been proposed, which increase the multipliers' accuracy.
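the udm idea from section 2.2 can be sketched in a few lines; the single substituted karnaugh-map entry (3 × 3 → 7) and the assembly of a 4 × 4 multiplier from four 2 × 2 blocks with an exact adder tree follow the published udm concept, but the code itself is only an illustrative model:

```python
def udm2x2(a, b):
    # exact 2x2 multiply except for the one substituted karnaugh entry: 3*3 -> 7
    return 7 if a == 3 and b == 3 else a * b

def udm4x4(a, b):
    """4x4 approximate multiplier built from four 2x2 udm blocks,
    with the partial products accumulated by an exact adder tree."""
    ah, al = a >> 2, a & 0b11
    bh, bl = b >> 2, b & 0b11
    return ((udm2x2(ah, bh) << 4)
            + ((udm2x2(ah, bl) + udm2x2(al, bh)) << 2)
            + udm2x2(al, bl))
```

the result is exact whenever no 3 × 3 sub-product occurs; the worst case is udm4x4(15, 15) = 175 instead of 225.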
the error compensation circuit is developed using a simpler sorting network. several studies have addressed how to determine a number's logarithm and antilogarithm efficiently. mitchell suggested a simple method for calculating a number's logarithm and antilogarithm, which is then used to generate approximate multiplication results (the mitchell multiplier). the accuracy of this multiplier is limited, however, and further research has been carried out to improve the approximation of mitchell-based logarithmic multipliers.

3. proposed approximate multiplier

the proposed approximate multiplier, with an error measurement of 16 bit for the rounding and truncation parameters (3,7), consists of the following blocks: the approximate absolute unit (aau), the leading one detector unit, also referred to as the foremost one detector unit (lod), the truncation unit (tu), the arithmetic unit (au), the shift unit (su), and the sign and zero detector unit (szd); it is represented in figure 2.

3.1. approximate absolute unit

the aau receives the inputs a and b. if an input operand is negative, it is inverted; if it is positive, it is passed unchanged. the aau can be removed for unsigned multipliers. the measured values of a and b appear as |a|app and |b|app, as described in [14].

3.2. leading one detector unit

the lod unit, or foremost one detector unit, takes |a|app and |b|app as inputs. from these values, ka and kb are determined; they mark the position of the leading '1', i.e. the most significant set bit. ka and kb control the shifting operation.

3.3. truncation unit

the inputs of the tu are ka and kb, together with |a|app and |b|app. the approximate inputs [15] are trimmed and converted to fixed-width operands depending on the leading-one position of the input operands.
the outputs of the truncation unit are (ya)t and (yb)t, which are given as inputs to the arithmetic unit. the term computed from the truncated values is represented by the following equation:

tu = 1 + (ya)t + (yb)t + (ya)apx · (yb)apx . (1)

3.4. arithmetic unit

the au performs addition on the truncated fixed-width operands as well as the product of the approximate inputs, together with the constant '1', which can be written as tu. it is worth noting that the msbs of (ya)apx and (yb)apx are identical to those of (ya)t and (yb)t. some adders and logical and gates in the arithmetic unit are power-gated, depending on the operating mode; this is done to improve the design's energy efficiency. the arithmetic block is the same for all bit-lengths.

3.5. shift unit

the arithmetic unit's output must be left-shifted by ka + kb positions (ka and kb are the leading one-bit positions of a and b). the term 2^(ka+kb) (1 + (ya)t + (yb)t + (ya)apx (yb)apx) is obtained by conducting this shifting operation, as shown in [16]. the tosam multiplier should be developed for the greatest truncation 't' and rounding 'h' values (h = 5 and t = 9 in this case).

3.6. sign and zero detector unit

the sign of the output operand is determined by the signs of the input operands, and if at least one of the inputs is zero, the output is set to zero. for unsigned input operands the aau should be eliminated, and the sign and zero detector unit should be replaced with a zero detector unit (zd), as the sign unit is unnecessary when the input operands are unsigned. the proposed approximate absolute unit is implemented in the truncated approximate multiplier. the values (ya)apx ((yb)apx) are denoted with h + 1 bits in this example.
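the datapath of sections 3.1-3.6 can be summarised in a short behavioral model; this is an illustrative python sketch of the tosam(h, t) scheme (truncation to t fractional bits, rounding to h + 1 bits with the lsb forced to '1'), not the authors' verilog implementation:

```python
def tosam(a, b, h=3, t=7):
    """behavioral sketch of a tosam-style approximate multiply:
    a * b ~ 2^(ka+kb) * (1 + (ya)t + (yb)t + (ya)apx * (yb)apx)."""
    if a == 0 or b == 0:                   # zero detector
        return 0
    sign = -1 if (a < 0) ^ (b < 0) else 1  # sign unit
    a, b = abs(a), abs(b)                  # absolute unit (exact here)
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading one detector
    ya = a / (1 << ka) - 1.0               # fractional parts, 0 <= y < 1
    yb = b / (1 << kb) - 1.0
    ya_t = int(ya * (1 << t)) / (1 << t)   # truncate to t fractional bits
    yb_t = int(yb * (1 << t)) / (1 << t)
    # round to h + 1 bits with the lsb set to '1' (keeps the mean error small)
    ya_apx = (2 * int(ya * (1 << h)) + 1) / (1 << (h + 1))
    yb_apx = (2 * int(yb * (1 << h)) + 1) / (1 << (h + 1))
    prod = (1 << (ka + kb)) * (1.0 + ya_t + yb_t + ya_apx * yb_apx)
    return sign * int(prod)                # arithmetic unit + shift unit
```

for the worked example used later in the paper (a = 11761, b = 2482, h = 3, t = 7), this model yields 28 901 376, the value the text reports for the existing method.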
compared with the exact 16-bit multiplication of figure 1, the dot diagram in figure 3 gives an overview of the procedure for the specific measurement with truncation t = 7 and rounding h = 3. in the dot diagram, the green square represents the '1' bit in the term 1 + (ya)t + (yb)t + (ya)apx (yb)apx. on the msb side, the orange circles represent partial products of (ya)apx and (yb)apx, while the purple triangles represent the msb bits of (ya)t and (yb)t. the remaining grey circles and triangles in the dot diagram are not included in the current operations, but they will be considered in future multiplier computations. the arrangement of the partial products in the 16-bit approximate multiplier is illustrated in figure 3, and the process of multiplying the input operands a and b for truncation t = 7 and rounding h = 3 is shown in figure 1. in the tosam (x, y) structures, x and y correspond to the rounding 'h' and truncation 't' parameters, and the accuracy of this multiplier technique is mostly determined by these two parameters. as a result, the relationship between 't' and 'h' must be chosen so as to ensure maximum precision together with reasonable speed and energy consumption. finally, this multiplication strategy can be employed for both signed and unsigned operands. to apply the approach to signed multipliers, the absolute values of the input operands a and b must first be determined, as well as the sign of the product. calculation time can be reduced by finding the input operands' absolute values exactly. in the example, the input operand a is 16 bit with a decimal value of 11761, whereas the input operand b is 16 bit with a decimal value of 2482.
the exact value of a × b is written as (a × b)exact; the exact result in binary format is 0000 0001 1011 1101 0110 1010 1001 0010, i.e. 29 190 802 in decimal. using the existing method, the value (a × b)existed in binary format is 0000 0001 1011 1001 0000 0000 0000 0000, i.e. 28 901 376 in decimal. the difference between the existed and exact values in this case is 289 426. the value of (a × b)proposed is calculated using the approximation technique explained in figure 4; the binary format is 0000 0001 1011 1001 1111 1111 1111 1111, i.e. 28 966 911 in decimal. the difference between the exact and proposed values is 223 891. the values ka and kb reflect the leading one-bit locations in the input operands a and b; in this case they are 13 and 11, respectively. different (h, t) combinations lead to slight modifications of the numerical example. various studies have been conducted to build new approximate multipliers. in the dynamic segment method (dsm) [17] design, the input operands are trimmed to 'm' bits depending on the location of the leading one bit, and fixed-width multiplication is applied to the truncated values. with this method of truncation, the produced output value is in most cases less than the exact one, resulting in a negative mean relative error (mre). in digital signal processing applications, the mean error should be kept as low as feasible to achieve a good signal-to-noise ratio (snr). the drum structure is truncated to yield the solution [18]; to bring the mre value close to zero, the lsb of the shorter input is assigned the value '1', which limits the erroneous outcome. the truncation of the input operands is performed in the multiplication stage in the low energy truncation-based approximate multiplier (letam) [19] structure, where half of the partial products can be omitted.

figure 1. 16-bit tosam numerical example of measurement for truncation t = 7 and rounding h = 3.
figure 2. block diagram of truncated multiplier.
figure 3. representation of the term 1 + (ya)t + (yb)t + (ya)apx (yb)apx in a dot diagram with truncation 't' = 7 and rounding 'h' = 3.

the error metrics used for comparison are:
- maxare: maximum absolute relative error (derived from the relative error re)
- mre: mean relative error
- mare: mean absolute relative error
- vare: variance of the absolute relative error
- ned: normalized error distance
- max_ned: maximum normalized error distance

the accuracy of tosam is compared against other approximate multipliers such as dsm, drum, letam, and u-roba in terms of maxare, mre, mare, vare, max ned, and ned using random vectors [5]. all these findings are summarised in table 1.

edp is defined as the energy-delay product and pda as the power-area-delay product. delay, power, area, energy, edp, pda, and mare of the approximate multiplier are calculated, compared with the existing multiplier designs, and tabulated in table 2. from the data, the proposed modified multiplier shows better results than the other existing approximate multiplier configurations with respect to speed and energy usage while maintaining almost identical mare values. table 2 shows a comparison between dsm, drum, letam, u-roba and the proposed tosam (3,7) approximate multiplier.

4. results and discussions

the proposed approximate multiplier with a 32-bit output, based on the truncated multiplier, produces a result that is closer to the exact one than that of the existing multiplier.
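the accuracy metrics compared in table 1 can be estimated over random vectors with a few lines of code; this is an illustrative sketch, and the ned normalization (maximum possible error distance of an n-bit multiply) is an assumption on our part rather than a detail given in the paper:

```python
import random

def accuracy_metrics(approx_mul, n_bits=16, samples=10000, seed=1):
    """estimate maxare, mre, mare, vare and ned for an approximate
    multiplier against the exact product (illustrative sketch)."""
    rng = random.Random(seed)
    rel_errors, error_dists = [], []
    for _ in range(samples):
        a = rng.randrange(1, 1 << n_bits)
        b = rng.randrange(1, 1 << n_bits)
        exact = a * b
        err = approx_mul(a, b) - exact
        rel_errors.append(err / exact)
        error_dists.append(abs(err))
    abs_rel = [abs(r) for r in rel_errors]
    mre = sum(rel_errors) / samples          # signed mean relative error
    mare = sum(abs_rel) / samples            # mean absolute relative error
    vare = sum((x - mare) ** 2 for x in abs_rel) / samples
    d_max = ((1 << n_bits) - 1) ** 2         # assumed normalization constant
    ned = sum(error_dists) / (samples * d_max)
    return {"maxare": max(abs_rel), "mre": mre,
            "mare": mare, "vare": vare, "ned": ned}
```

plugging in an exact multiplier, lambda a, b: a * b, returns zero for every metric, which is a useful sanity check before evaluating an approximate design.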
11761 is the value of a, and 2482 the value of b. the exact product is (a × b)exact = 29 190 802, and the existing method yields (a × b)existed = 28 901 376. the difference between the existed and exact values in this situation is 289 426. the value of (a × b)proposed is 28 966 911 when utilising the proposed approximate technique; the difference between the proposed and exact values is 223 891, indicating that the result is closer to the exact one. the output is generated in the next cycle, and the error value is also shown in figure 5. the internal structure shows the blocks of the proposed approximate multiplier, namely the approximate absolute unit (aau), leading one detector (lod), truncation unit (tu), arithmetic unit (au), shifter, and sign-set; it also represents the flow of data from one block to the other and is shown in figure 6.

table 1. representation of various approximate multipliers with maxare, mre, mare, vare, max ned, and ned.

architecture        | maxare (%) | mre (%) | mare (%) | vare (%) | max ned | ned
dsm(3) [20]         | 36.00      | -16.1   | 16.10    | 40.43    | 0.2344  | 0.0399
tosam(0,2) [20]     | 31.25      | -9.1    | 10.90    | 46.63    | 0.3125  | 0.0309
tosam(0,3) [20]     | 25.00      | -3.3    | 7.61     | 28.81    | 0.2500  | 0.0213
drum(3) [8]         | 56.25      | 2.1     | 11.90    | 79.96    | 0.2344  | 0.0281
tosam(1,5) [20]     | 13.89      | -0.7    | 3.95     | 7.60     | 0.1250  | 0.0104
tosam(2,6) [20]     | 6.87       | -0.6    | 2.06     | 2.00     | 0.0664  | 0.0053
proposed tosam(3,7) | 3.65       | -0.3    | 1.05     | 0.52     | 0.0342  | 0.0027
letam(3) [14]       | 9.72       | -4.0    | 4.00     | 2.54     | 0.0859  | 0.0104
u-roba [15]         | 11.10      | 0       | 2.89     | 6.37     | 0.0625  | 0.0069
tosam(4,8) [20]     | 1.88       | -0.2    | 0.53     | 0.13     | 0.0173  | 0.0013

table 2. comparisons of delay, power, area, energy, edp, pda, and mare of the approximate multipliers.

architecture        | delay (ns) | power (mw) | area (µm2) | energy (pj) | edp (pj·ns) | pda (pj·µm2) | mare (%)
tosam(0,2) [20]     | 0.74       | 0.16       | 342        | 0.12        | 0.09        | 40           | 10.9
tosam(0,3) [20]     | 0.84       | 0.21       | 423        | 0.18        | 0.15        | 76           | 7.6
dsm(3) [20]         | 0.97       | 0.20       | 344        | 0.19        | 0.19        | 67           | 16.1
tosam(1,5) [20]     | 1.00       | 0.35       | 532        | 0.35        | 0.35        | 185          | 4.0
drum(3) [8]         | 0.88       | 0.13       | 257        | 0.11        | 0.10        | 29           | 11.9
tosam(2,6) [20]     | 1.00       | 0.35       | 532        | 0.35        | 0.35        | 185          | 2.06
letam(3) [14]       | 1.16       | 0.39       | 608        | 0.46        | 0.53        | 278          | 4.0
u-roba [15]         | 1.05       | 0.55       | 1438       | 0.57        | 0.60        | 826          | 2.9
proposed tosam(3,7) | 1.87       | 0.23       | 593        | 0.4         | 0.748       | 255          | 1.05

figure 4. example for generation of the approximate absolute value with two negative numbers.

5. conclusion and future scope

a low-energy and area-efficient 16-bit approximate multiplier is proposed. truncation of the input operands is performed with two parameters, truncation 't' and rounding 'h'. the existing 16-bit multiplier with rounding and truncation measurement (3,7) shows a measured error of 0.4 %. the proposed 16-bit multiplier for the same truncation and rounding measurement (3,7) reduces the measured error to 0.01 % (less than 1 %). the error is reduced by rounding the input operands to the next odd value. the recommended approximate multiplier is scalable and outperforms the exact multiplier in regard to speed, area, and energy. the proposed approximate multiplier consumes 0.23 mw, which is less than the existing approximate multipliers. various types of approximate multipliers are used for sharpening images. in future, there is also the possibility of using the multiplier-and-accumulator unit to create an image-sharpening module, which may be used to measure the energy consumption of various approximate multipliers.
also, in other applications, we can use the jpeg technique to compress many images, and this can be used for approximate multipliers in the discrete cosine transform unit.

references

[1] j. han, m. orshansky, approximate computing: an emerging paradigm for energy-efficient design, proc. 18th ieee eur. test symp., avignon, france, 27-30 may 2013, pp. 1-6. doi: 10.1109/ets.2013.6569370
[2] v. k. chippa, s. t. chakradhar, k. roy, a. raghunathan, analysis and characterization of inherent application resilience for approximate computing, proc. 50th acm/edac/ieee des. automat. conf. (dac), austin, tx, usa, 29 may - 7 june 2013, pp. 1-9. doi: 10.1145/2463209.2488873
[3] h. jiang, c. liu, n. maheshwari, f. lombardi, j. han, a comparative evaluation of approximate multipliers, in proc. ieee/acm int. symp. nanoarch, beijing, china, 18-20 july 2016, pp. 191-196. doi: 10.1145/2950067.2950068
[4] s. balamurugan, p. s. mallick, error compensation techniques for fixed-width array multiplier design: a technical survey, j. circuits, syst. comput., vol. 26, no. 3 (2017), p. 1730003. doi: 10.1142/s0218126617300033
[5] a. momeni, j. han, p. montuschi, f. lombardi, design and analysis of approximate compressors for multiplication, ieee trans. comput., vol. 64, no. 4 (2015), pp. 984-994. doi: 10.1109/tc.2014.2308214
[6] s. venkataramani, s. chakradhar, k. roy, a. raghunathan, approximate computing and the quest for computing efficiency, proc. 52nd annual design automation conference (dac), san francisco, ca, usa, 8-12 june 2015, article 120, pp. 1-6. doi: 10.1145/2744769.2744904
[7] j. low, c. jong, unified mitchell-based approximation for efficient logarithmic conversion circuit, ieee trans. computers, vol. 64, no. 6 (2015), pp. 1783-1797. doi: 10.1109/tc.2014.2329683
[8] s. hashemi, r. bahar, s. reda, drum: a dynamic range unbiased multiplier for approximate applications, proc. ieee/acm international conference on computer design, austin, tx, usa, 2-6 november 2015, pp. 418-425.
doi: 10.1109/iccad.2015.7372600
[9] s. rooban, s. saifuddin, s. leelamadhuri, s. waajeed, design of fir filter using wallace tree multiplier with kogge-stone adder, international journal of innovative technology and exploring engineering, vol. 8, no. 6 (2019), pp. 92-96.
[10] v. leon, g. zervakis, d. soudris, k. pekmestzi, approximate hybrid high radix encoding for energy-efficient inexact multipliers, ieee transactions on very large scale integration (vlsi) systems, vol. 26, no. 3 (2018), pp. 421-430. doi: 10.1109/tvlsi.2017.2767858
[11] s. rooban, d. l. prasanna, k. b. d. teja, p. v. m. kumar, carry select adder design with testability using reversible gates, international journal of performability engineering, vol. 17, no. 6 (2021), pp. 536-542. doi: 10.23940/ijpe.21.06.p6.536542
[12] s. venkatachalam, e. adams, h. j. lee, s. b. ko, design and analysis of area and power efficient approximate booth multipliers, ieee transactions on computers, vol. 68, no. 11 (2019), pp. 1697-1703. doi: 10.1109/tc.2019.2926275
[13] s. narayanamoorthy, h. a. moghaddam, z. liu, t. park, n. s. kim, energy-efficient approximate multiplication for digital signal processing and classification applications, ieee trans. very large scale integr. (vlsi) syst., vol. 23, no. 6 (2015), pp. 1180-1184. doi: 10.1109/tvlsi.2014.2333366
[14] s. vahdat, m. kamal, a. afzali-kusha, m. pedram, letam: a low energy truncation-based approximate multiplier, comput. elect. eng., vol. 63 (2017), pp. 1-17.

figure 5. result of the proposed approximate multiplier with measurement of (3,7).
figure 6. rtl schematic of the proposed approximate multiplier with error measurement of (3,7).
doi: 10.1016/j.compeleceng.2017.08.019
[15] r. zendegani, m. kamal, m. bahadori, a. afzali-kusha, m. pedram, roba multiplier: a rounding-based approximate multiplier for high-speed yet energy-efficient digital signal processing, ieee trans. very large scale integr. (vlsi) syst., vol. 25, no. 2 (2017), pp. 393-401. doi: 10.1109/tvlsi.2016.2587696
[16] m. ha, s. lee, multipliers with approximate 4-2 compressors and error recovery modules, ieee embedded syst. lett., vol. 10, no. 1 (2018), pp. 6-9. doi: 10.1109/les.2017.2746084
[17] d. esposito, a. g. m. strollo, e. napoli, d. de caro, n. petra, approximate multipliers based on new approximate compressors, ieee trans. circuits syst. i, reg. papers, vol. 65, no. 12 (2018), pp. 4169-4182. doi: 10.1109/tcsi.2018.2839266
[18] i. alouani, h. ahangari, o. ozturk, s. niar, a novel heterogeneous approximate multiplier for low power and high performance, ieee embedded syst. lett., vol. 10, no. 2 (2018), pp. 45-48. doi: 10.1109/les.2017.2778341
[19] m. masadeh, o. hasan, s. tahar, comparative study of approximate multipliers, glsvlsi'18: proceedings of the 2018 on great lakes symposium on vlsi, 2018, pp. 415-418.
doi: 10.1145/3194554.3194626
[20] s. vahdat, m. kamal, a. afzali-kusha, m. pedram, tosam: an energy-efficient truncation- and rounding-based scalable approximate multiplier, ieee transactions on very large scale integration (vlsi) systems, vol. 27, no. 5 (2019), pp. 1161-1173. doi: 10.1109/tvlsi.2018.2890712

training program for the metric specification of imaging sensors

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1-6

raik illmann1, maik rosenberger1, gunther notni1

1 technische universität ilmenau, gustav kirchhoff platz 2, 98693 ilmenau, germany

section: research paper

keywords: measurement education; measurement training; engineering education; hands-on pedagogy; image sensor characterization

citation: raik illmann, maik rosenberger, gunther notni, training program for the metric specification of imaging sensors, acta imeko, vol. 11, no. 4, article 10, december 2022, identifier: imeko-acta-11 (2022)-04-10

section editor: eric benoit, université savoie mont blanc, france

received august 26, 2022; in final form december 2, 2022; published december 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: raik illmann, e-mail: raik.illmann@tu-ilmenau.de

1. introduction

optical coordinate metrology is an essential part of industrial automation and inspection processes.
therefore, this subject area should play an essential role in today's engineering education in courses such as mechanical engineering, electrical engineering, computer science or engineering informatics. thus, the question arises of how to efficiently teach the essential content necessary for a successful handling of this topic in engineering practice. the requirements are that both the basics and their relevance are understood, and that the systems-engineering integration can be implemented on the basis of the learned knowledge and transferred into a functional system. group work and the targeted intervention of a supervisor play an essential role here, not only in order to close existing knowledge gaps in a targeted manner, but also to train cooperation in teams with the corresponding social competence, which is indispensable in practice. the central topic of the program is the metric characterization of imaging sensors. with regard to practice, two essential technological aspects can be taught through this. first, for the implementation of a test system it is crucial to be able to evaluate and classify the geometric product specifications (gps) characterized by the manufacturer of an image sensor, i.e., ultimately to be able to understand its test procedure. secondly, this makes essential principles of image signal processing accessible and also provides a practical and comprehensible application case, which proves the usefulness of the methods and thus offers motivation directly on the basis of a concrete example. the standard din en iso 10360 [1] is used as the basis for the methodical procedure; for coordinate measuring machines with optoelectronic sensors, the vdi standard 2617 [2] is used specifically. it describes the inspection of coordinate measuring machines by measuring calibrated test samples. this involves checking whether the measurement deviations are within the limits specified by the manufacturer or user.

abstract

measurement systems in industrial practice are becoming increasingly complex, and the system-technical integration levels are increasing. nevertheless, the functionalities can in principle always be traced back to proven basic functions and basic technologies, which should, however, be understood and developed. for this very reason, the teaching of elementary basics in engineering education is unavoidable. the present paper presents a concept for implementing a contemporary training program within practical engineering education at university level in the special subject area of optical coordinate measuring technology. the students learn to deal with the subject area in a fundamentally oriented way and to understand the system-technical integration in detail, from the basic idea to the actual solution, which represents common practice in the industrial environment. the training program is designed in such a way that the basics have to be worked out at the beginning; gaps in knowledge are closed through group work and the targeted intervention of a supervisor. after the technology has been fully developed theoretically, the system is put into operation and applied with regard to a characterizing measurement. the measurement data are then evaluated using standardized procedures. a special part of the training program, which is intended to promote the students' own creativity and thorough understanding, is the evaluation of the modulation transfer function of the system by a self-developed algorithmic program section in the script-oriented development environment matlab, whereby students can fall back on predefined functions for the evaluation, whose integration, however, they must still accomplish themselves.
The test samples must be such that their properties do not decisively influence the parameters to be determined. The characterization is carried out on the basis of the principle described in [2]: a circle is measured at five different positions. For this purpose, a calibrated chrome standard and a transmitted-light unit are used. After completion of the measurements, the results are statistically evaluated in accordance with the standard. In addition, the determination of the modulation transfer function (MTF) is intended to provide an assessment of the resolving capability of the overall system and to train an in-depth algorithmic understanding of 2D image processing.

2. Theoretical background

2.1. Problem description

The metrological problem with which the students are confronted is illustrated in Figure 1. A calibrated sample with a circular ring (the object) is placed as a reference standard on a light source. The circular ring passes (as a negative) through an optical system consisting of lenses and apertures. The image created on the sensor surface consequently deviates from the original due to optical and mechanical influences. In addition, geometric quantization during scanning within the image sensor produces a further measurement deviation. All deviations together result in a total measurement deviation, which has to be determined. Furthermore, it can be observed that when the position of the object changes (5 different positions in [2]), the measured diameters of the circular rings differ due to the influences of the optics and mechanics. This, too, has to be quantified.

2.2. Analysis of measurement uncertainty

All image processing algorithms are already implemented in software, so that the diameter of the circular ring is output directly as the measurement result. The algorithms for edge detection are subpixel-based and complex; a detailed description is therefore not provided in this paper, and reference is made to the specialist literature [3].
More crucial for the metric testing of the image sensor system is the consideration of the measurement uncertainties. The total measurement uncertainty can be determined according to equations (1) to (3). The measurement uncertainty describes the deviation behaviour of the overall system. Its total value is determined by the errors of the two essential assemblies, the mechanical and the optical measuring device, and results from:

$U_{\mathrm{total}} = \sqrt{U_{\mathrm{mech}}^2 + U_{\mathrm{opt}}^2}$ . (1)

Both the mechanical and the optical measurement uncertainty can be further divided into systematic and random components. They result from:

$U_{\mathrm{mech}} = \sqrt{U_{\mathrm{sys,m}}^2 + U_{\mathrm{random,m}}^2}$ (2)

and

$U_{\mathrm{opt}} = \sqrt{U_{\mathrm{sys,o}}^2 + U_{\mathrm{random,o}}^2}$ . (3)

The systematic measurement uncertainty is usually specified by the manufacturer and is determined from comparative measurements with calibrated standards. Within the training program, these values are given to the students. The random measurement uncertainty, on the other hand, is determined from several measurements carried out in the training program under the same environmental conditions and with the same test specimen. The standard deviation can then be calculated from the measured values obtained:

$\sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}$ . (4)

The random measurement uncertainty $U_{\mathrm{random,ms}}$ of the measurement series is now obtained using the 95.4 % confidence level:

$U_{\mathrm{random,ms}} = \frac{2\sigma}{\sqrt{n}}$ . (5)

Figure 1. Schematic system description.

However, since the mechanical and optical measurement uncertainties cannot be separated directly in the measurement result with the available setup, the evaluation is simplified somewhat at this point and only predefined systematic deviations are included in the calculation of the total measurement uncertainty.

2.3. Modulation transfer function

There are several approaches to determine the modulation transfer function (MTF).
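The training program implements the uncertainty relations of equations (1) to (5) in MATLAB; purely as an illustration, the same computations can be sketched in Python (function names are ours, not from the paper):

```python
import math

def combined_uncertainty(u_mech, u_opt):
    # Eq. (1): quadrature sum of the mechanical and optical contributions
    return math.sqrt(u_mech**2 + u_opt**2)

def random_uncertainty(values):
    # Eqs. (4) and (5): sample standard deviation of n repeated
    # measurements, expanded to the 95.4 % level (k = 2) for the mean
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return 2 * sigma / math.sqrt(n)
```

Equations (2) and (3) follow the same quadrature pattern as `combined_uncertainty`, applied to the systematic and random components of each assembly.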
The main approaches are taught in the lectures, in parallel to the designed training program. A simple and practically comprehensible method is described in [4] and [5]: the intensity differences (contrasts, $I_{\mathrm{max}} - I_{\mathrm{min}}$) along a search ray orthogonal to a rectangular test pattern are evaluated. If a sinusoidal signal is assumed instead of the rectangular signal, the contrast transfer function $CT(f)$ passes directly into the modulation transfer function $MTF(f)$. Basically, the contrast ratio of the pattern in the image,

$K(f) = \frac{I_{\mathrm{max}} - I_{\mathrm{min}}}{I_{\mathrm{max}} + I_{\mathrm{min}}}$ , (6)

has to be put into relation to the contrast in the object space,

$K'(f) = \frac{I'_{\mathrm{max}} - I'_{\mathrm{min}}}{I'_{\mathrm{max}} + I'_{\mathrm{min}}}$ . (7)

Plotted over the sampling points of the spatial frequency f, taken as the reciprocals of the line spacings defined by the object, the MTF can thus be approximated:

$CT(f) = \frac{K(f)}{K'(f)} \approx MTF(f)$ . (8)

For the calculations according to (6), (7) and (8), all values can be determined from the measurements, which is why these formulas must also be implemented by the students in the practical part. Complex examples of modulation transfer functions of spectral sensors are given in [6].

3. Measurement system description

3.1. Measurement system

The system used for the experiment is shown in Figure 2. It consists of a monochromatic camera, a telecentric objective, a light table for measurement using the transmitted-light method, a halogen-based light generator, and a stand construction in column design.

3.2. Measurement targets

Two samples are used for the training program. The first sample is shown in Figure 3 (left); in it, the metric dimensions are represented by circles. These circles are moved to the 5 positions described, and the diameter of the corresponding circular ring is measured at each. The second sample is a U.S. Air Force (USAF) test chart, shown in Figure 3 (right), on the basis of which the modulation transfer function is determined.

4. Training program concept

The training program is shown as a flow chart in Figure 4 and is described in detail below.

4.1. Preparation and general aim

The overall aim is to teach and consolidate the theoretical aspects. The theoretical basics are acquired by the students in self-study as preparation. At the beginning of the training program, the supervisor checks the essential basics necessary for understanding the subject matter. Particular attention must be paid by the supervisor to recognizing gaps in understanding and, possibly through further questions, to ascertaining the actual state of knowledge of each individual participant. Misunderstandings or misinterpretations of the facts often occur and must be corrected. Interdisciplinary action should also be taken here; basic mathematics or mechanics should be addressed as well. Essentially, the following topics should have been learned by the end of the program:

Figure 2. Measurement setup.
Figure 3. Measurement targets: circular rings on chrome glass (left), USAF chart (right).

• knowing and understanding the terms [1]
• understanding the necessity of characterization with respect to engineering tasks
• understanding the measurement setup (incl. optics)
• understanding the measurement procedure
• reflecting on and deducing the causes of uncertainties in the measurement system
• evaluating and visualizing the data
• encouraging the understanding of algorithms.

It is also important that the supervision always refers to the application and integration of the systems and components in the practical engineering task. In this way, the important practical relevance is continuously maintained. After verification of qualification and the elimination of misunderstandings, instruction is given in the measurement setup and the operation of the software.

4.2. Measurement process

The measurement is performed according to the instructions given in [2].
Within the training program, the rationale of this procedure has already been discussed in the basics, so this step in itself provides no new knowledge for the student; at this point, only the practical data are acquired, with which the evaluation and interpretation can be carried out afterwards. In the first step, the 10 single measurements of the circular rings are carried out at the 5 positions, which are reached by shifting the measuring object in the object field by means of the x-y stage. The software required for image acquisition is provided to the student and can be operated intuitively; the circle diameters are also determined in this software. After determination, these values are saved in the background to a file, which can later be loaded into the evaluation software. The second measurement is the capture of the USAF test chart, which is acquired as a single image and then saved as an image file.

4.3. Evaluating the measurements

The evaluation of the measurement data is generally the most important and most insightful part of the training program. Here, the students learn how to evaluate data in an environment that is frequently encountered in practice. Both the evaluation of the circle diameter data and the development of the MTF are implemented in the MATLAB environment. In general, the necessary program text is provided as a cloze (fill-in-the-gap) text and must be completed by the students at the essential points. This also promotes the algorithmic understanding of a sequential process. The first insight is the differentiation of data types: the measurement data of the circle diameters are purely vectorial and one-dimensional, while the image data form a two-dimensional array. In addition, the image data are available as 8-bit integers, whereas the measurement data are of the data type "double".
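In the MATLAB cloze scripts, the students implement the contrast relations of equations (6) to (8) themselves; a compact sketch of the same computation in Python (helper names are ours) might look like this:

```python
def contrast(profile):
    # Michelson contrast (I_max - I_min) / (I_max + I_min) of an
    # intensity profile sampled along a search ray, cf. eqs. (6)/(7)
    i_max, i_min = max(profile), min(profile)
    return (i_max - i_min) / (i_max + i_min)

def mtf_point(image_profile, object_profile):
    # One sampling point of the MTF approximation of eq. (8): the
    # image contrast referred to the object-space contrast
    return contrast(image_profile) / contrast(object_profile)
```

Evaluating `mtf_point` for search rays across pattern groups of decreasing line spacing yields the sampling points over which the MTF is plotted.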
For the evaluation of the circle data, a script is available that only has to be supplemented, at some commented places, with the function for the computation of the standard deviation. The essential standard functions, together with their syntax and the arguments to be passed, must be researched by the students themselves via the built-in help and implemented in the main program. Experience shows that it is exactly at this point of independent problem solving that most students have considerable methodical deficits.

Figure 4. Measurement target 1.

4.3.1. Determination of the measurement uncertainty

The data obtained in the measurement serve as the basis for determining the measurement uncertainty. They are available in a CSV file (comma-separated values) stored in the background by the image processing software and can be transferred to MATLAB with a read-in function. Afterwards, the data have to be validated. Unfortunately, problems occur again and again due to the comma/dot separation of the decimal point when exchanging data with German-language software, which should be explicitly pointed out here. In the last step, the calculation of the measurement uncertainty via standard deviations etc. takes place; here, the corresponding equations are to be completed by the students in the program text.

4.3.2. Determination of the MTF

More detailed understanding is required to calculate the MTF. It should be noted that within the training program only the MTF of the entire system is to be determined. Of course, this would also be possible for each individual component in the beam path; however, the goal is a pragmatic overview of the resolving power of the entire system, and the time required should also be kept within reasonable limits. The necessary data are available as a single image, which must first be read in.
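The decimal-comma pitfall mentioned above can be handled explicitly; here is a minimal, hypothetical sketch in Python (the course itself uses MATLAB's import functions):

```python
import csv
import io

def read_decimal_comma_csv(text, delimiter=";"):
    # German-locale CSV exports typically separate fields with ';'
    # and use ',' as the decimal mark; swap ',' for '.' in each cell
    # before converting to float.
    rows = []
    for row in csv.reader(io.StringIO(text), delimiter=delimiter):
        rows.append([float(cell.replace(",", ".")) for cell in row])
    return rows
```

Validating the parsed values (plausible ranges, expected row counts) before any statistics are computed is the point being made to the students here.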
In the next step, the relevant signals must be extracted from the image data set as one-dimensional signals. This is done manually by defining the start and end points of the search rays. Afterwards, the students display these signals as a simple plot for illustration purposes. Writing the necessary functions would go beyond the scope of the training program, so these functions are predefined. After extracting the vectorial data, the students calculate the MTF according to the relationships presented in the basics. Here it is also important to understand the signal properties of the overall image (contrast ratios) and to compute them from the array with appropriate functions. Here, too, the necessary program text is provided as a cloze text. The calculations for visualizing the search rays in the image etc. have been predefined. As a result, the MTF is plotted graphically.

4.4. Reflection and interpretation of the data

The main focus of the training program is the graphical representation of the evaluated data. For each result, a plot is to be created that illustrates the data. Figure 5 shows an exemplary graphical representation of the statistical values of 10 single measurements at the 5 relevant positions. The graph shows the histogram and the distribution function fitted to it (red curve). From the compression of the curve, the meaning of the standard deviation as a scattering parameter of the data can be read off immediately. The knowledge gained is thus finally consolidated by the graphical linkage of the data and is to be summarized in a protocol. All results are finally discussed with the supervisor on the basis of the graphs. The learning result is thus secured by renewed repetition and targeted discussion of the essential qualitative characteristics, particularly in the curves. Experience shows that qualitative progressions can be memorized very lastingly through graphically illustrated relations.

5. Integration into teaching

The present concept will be integrated into teaching as a fixed element from the time of publication. Initial test sessions with students for the evaluation of the concept were, however, performed beforehand. These preliminary tests with a total of 4 groups had three main purposes. First of all, it was to be determined whether the test could be completed at all by the students in the allotted time. All 4 groups managed to complete the task within the set time frame of 3 hours. The second purpose was to estimate the suitability of the program. The students were asked in detail about their assessment after completing it. All groups confirmed that they had understood the usefulness of the course and its relation to real practice. Furthermore, it was confirmed that the students were neither overstrained nor understrained at any time. The first group, which had already conducted an experiment on a similar topic in a different form some time ago, represented a special case: this group above all confirmed the increase in clarity and stated that they had become better acquainted with the topic thanks to the many integrated qualitative graphical representations of the results. The third purpose of the preliminary tests was to test the stability of the system, meaning both the stability of the hardware and the stability of the software. None of the groups experienced any problems in this regard: the hardware ran stably without dropouts, and the software did not produce any errors.

Figure 5. Measurements at 5 positions (target 1).

6. Conclusion

The present work represents a complete methodological treatise on the development of a training concept for engineering students in the field of optical coordinate metrology and the adjacent field of image processing. Current and relevant problems are integrated; basic problems of image processing are simulated, demonstrated and solved.
Ensuring expert supervision, while intervening only when necessary, underlines the pedagogical concept. The clearly identified problems are solved using current tools. The focus is also always on creating a graphical link between the data, their evaluations and the calculations, on the basis of which the results are easily discussed and the facts are easily memorized by the students through graphical symbolization. During the creation of the program, special attention was paid to typical, recurring deficits and problems of the students, which is why these were specifically included and problems are intentionally built in during the execution of the program.

Acknowledgement

We thank the Technische Universität Ilmenau for the financial support to realize this practical course.

References

[1] ISO 10360-7: Geometrical product specifications (GPS) - Acceptance and re-verification tests for coordinate measuring machines (CMM) - Part 7: CMMs equipped with imaging probing systems, International Organization for Standardization, Geneva, 2011.
[2] Verein Deutscher Ingenieure, VDI/VDE 2617 Blatt 6.2: Accuracy of coordinate measuring machines - Characteristics and their testing - Guideline for the application of DIN EN ISO 10360-8 to coordinate measuring machines with optical distance sensors, Beuth Verlag GmbH, Berlin, February 2021.
[3] J. Beyerer, F. Puente León (eds.), Automated Visual Inspection and Machine Vision III: 27 June 2019, Munich, Germany, SPIE, Bellingham, Washington, USA, 2019.
[4] T. Luhmann, S. Robson, Close-Range Photogrammetry and 3D Imaging, 3rd edn., De Gruyter, Berlin, 2019.
[5] W. G. Rees, Physical Principles of Remote Sensing, 2nd edn., Cambridge Univ. Press, Cambridge, 2003.
[6] M. Rosenberger, P.-G. Dittrich, R. Illmann, R. Horn, A. Golomoz, G. Notni, S. Eiser, O. Hirst, N. Jahn, Multispectral imaging system for short wave infrared applications, Proceedings of SPIE, 2022, vol. 12094, id. 120940Z, 14 pp.
DOI: 10.1117/12.2619350

The importance of physiological data variability in wearable devices for digital health applications

Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, pp. 1-8

Gloria Cosoli1, Angelica Poli2, Susanna Spinsante2, Lorenzo Scalise1
1 Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, v. Brecce Bianche, 60131 Ancona, Italy
2 Department of Information Engineering, Università Politecnica delle Marche, v. Brecce Bianche, 60131 Ancona, Italy

Section: Research paper
Keywords: wearable devices; physiological measurements; data variability; physiological monitoring
Citation: Gloria Cosoli, Angelica Poli, Susanna Spinsante, Lorenzo Scalise, The importance of physiological data variability in wearable devices for digital health applications, Acta IMEKO, vol. 11, no. 2, article 25, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-25
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received July 13, 2021; in final form March 21, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Gloria Cosoli, e-mail: g.cosoli@staff.univpm.it

1.
Introduction

The use of wearable devices is constantly spreading all over the world, thanks to their wide accessibility and high ease of use [1] (even if further actions and improvements are still needed to overcome barriers to a larger adoption by older adults [2]). Nowadays a continuously growing number of people wear a smartwatch monitoring a plethora of physiological parameters: heart rate (HR) [3], energy expenditure (EE) [4], blood volume pulse signal (BVP) [5], electrodermal activity (EDA) [6], acceleration signal [7], sleep quality [8], respiration rate [9], stress-related indices [10], etc. These measurements can be useful for different purposes, from cardiovascular monitoring [11] to sleep tracking [12], through activity assessment [13], fitness-oriented applications [14] and blood pressure observation [15], just to cite some. Furthermore, in recent months wearable devices have extended their application to the possible detection of early symptoms related to the SARS-CoV-2 pandemic [16], since this virus has stressed the importance of remote monitoring, both to limit contagion and for "testing, tracking and tracing" strategies [17]. However, there are also critical aspects that should be thoroughly considered, pertaining to health-related data privacy and to the measurement accuracy of these innovative wearable instruments [5], both of which undoubtedly play important roles in the era of personalized medicine and digital health [18], [19].
Abstract: This paper aims at characterizing the variability of physiological data collected through a wearable device (Empatica E4), given that both intra- and inter-subject variability play a pivotal role in digital health applications, where artificial intelligence (AI) techniques have become popular. Inter-beat intervals (IBIs), electrodermal activity (EDA) and skin temperature (SKT) signals have been considered, and variability has been evaluated in terms of general statistics (mean and standard deviation) and coefficient of variation. Results show that both intra- and inter-subject variability values are significant, especially when considering those parameters describing how the signals vary over time. Moreover, EDA seems to be the signal characterized by the highest variability, followed by IBIs, contrary to SKT, which proves more stable. This variability could affect AI algorithms in classifying signals according to particular discriminants (e.g. emotions, daily activities, etc.), taking into account the dual role of variability: hindering a clear-cut distinction between classes, but also making algorithms more robust for deep learning purposes thanks to the consideration of a wide test population. Indeed, it is worth noting that variability plays a fundamental role in the whole measurement chain, characterizing data reliability and impacting the accuracy of the final results and consequently the decision-making processes.

Physiological signals can be collected through wearable devices 24 hours a day, 7 days a week, producing big amounts of data, which are analysed through artificial intelligence (AI) algorithms more and more frequently, in order to provide useful information for the so-called decision-making processes [20], [21], thus supporting human choices in different fields, from Industry 4.0 [22], [23] to eHealth [24].
The purposes can be different: emotion classification [25], activity recognition [26], hypertension management [27], fall detection [28], smart living environments and well-being assessment [29], and so on. In order to develop robust models, capable of providing reliable information, data quality is fundamental [30]; in this perspective, not only hardware and acquisition options (e.g. sampling frequency, signal-to-noise ratio (SNR), resolution, etc.) have a big impact, but also data variability, linked both to the different sources used to collect data [31] and to the physiological variability itself. Indeed, the classification performance of AI algorithms surely depends on the variability observed in the data collected on the test population: if it is true that (physiological) variability somehow hinders a perfect discrimination among classes, on the other hand it is necessary to test a wide population in order to include its variability and avoid overfitting issues. These aspects should be thoroughly considered when developing AI algorithms for digital health applications, which cannot neglect the physiological variability characterising the involved population, and consequently the measured data. The study reported in this manuscript aims at evaluating the intra- and inter-subject variability of different physiological signals collected through a wearable wrist-worn device (Empatica E4). In particular, the authors have analysed cardiac-related parameters (i.e. heart rate variability (HRV) parameters computed on the BVP signal measured through a photoplethysmographic (PPG) sensor), features computed on the EDA signal, and skin temperature (SKT) values. Mean, standard deviation and coefficient of variation have been computed for each extracted parameter, considering the repeated tests on the same subject to evaluate intra-subject variability, and the whole acquired data set for inter-subject variability.
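The three variability statistics just mentioned (mean, standard deviation and coefficient of variation) are straightforward to compute; as an illustration only (the study's processing was done in MATLAB), a Python sketch:

```python
import math

def variability_stats(values):
    # Mean, sample standard deviation and coefficient of variation
    # (c_v = sigma / mu) of a list of feature values
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in values) / (n - 1))
    return mu, sigma, sigma / mu
```

Applied per subject over repeated tests, this yields intra-subject variability; applied over the pooled data, inter-subject variability.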
The rest of the paper is organized as follows: Section 2 describes the materials and methods employed for the data acquisitions and for the evaluation of data variability, Section 3 reports the intra- and inter-subject variability results, and finally in Section 4 the authors provide their considerations and conclusions.

2. Materials and methods

2.1. Participants

The study was conducted on 10 healthy volunteers: 3 males, 7 females; age of (33 ± 16) years, with a range of 15 to 59 years; height of (169.78 ± 8.83) cm; weight of (66.55 ± 12.00) kg; BMI of (22.92 ± 2.14) kg/m²; data are reported as mean ± standard deviation. They declared that they did not take any medication in the 24 hours preceding the tests, nor had particular clinical histories possibly influencing the results. Before starting the tests, each participant was informed about the test purpose and procedure and signed an informed consent according to the European Regulation 2016/679, i.e., the General Data Protection Regulation (GDPR), to obtain permission for processing personal data.

2.2. Data collection

In order to assess the inter-subject and intra-subject variability of the physiological parameters, each subject repeated the acquisition six times, for a total of 60 recordings, each lasting 5 minutes. Ambient temperature and relative humidity were kept at (20 ± 2) °C and (50 ± 5) %, respectively, so as to be perceived as comfortable by most of the involved individuals. The participants (with a skin colour classification of type II on the Fitzpatrick scale), lying comfortably in a supine position (i.e., at rest) in a quiet room, were instructed to relax as much as possible, breathe normally, and not talk during the recordings, in order to minimize movement artifacts. As shown in Figure 1, the physiological signals were simultaneously collected through a multisensory wearable device, namely the Empatica E4 [32], placed on the dominant wrist.
This acquisition device was chosen because it provides the raw data, making it particularly suitable for research purposes. Firstly, the participants were allowed to adjust the device positioning to increase their comfort. Then, the device placement was verified to ensure optimal skin contact (not worn too tightly or too loosely), and consequently to guarantee the optimal conditions for reliable PPG sensor acquisition [33] and, therefore, the highest possible data quality.

2.3. Data acquisition device

The individual physiological signals were recorded with the multimodal device Empatica E4 (a class IIa medical device according to the 93/42/EEC directive), firmware version FW 3.1.0.7124. This device captures the inter-beat interval (IBI), BVP, EDA, human SKT, and 3-axis accelerometer signals. In particular, the BVP and IBI signals, both sampled at 64 Hz with a resolution of 0.9 nW/digit, are derived from the PPG sensor. On the bottom of the wristband there are two green light-emitting diodes (LEDs) enabling the measurement of blood volume changes and heartbeats, and two red LEDs for reducing motion artifacts. Additionally, two units of photodiodes (total 14 mm² sensitive area) measure the reflected light. On the bracelet band of the Empatica E4, two Ag/AgCl electrodes allow a small alternating current to be passed (frequency 8 Hz, with a maximum peak-to-peak value of 100 µA) for measuring the skin conductance in µS, sampled at 4 Hz with a resolution of 900 pS in the range of [0.01, 100] µS. At the same sampling frequency (4 Hz), an infrared thermopile, placed on the back of the case, records the SKT data in °C with an accuracy of ± 0.20 °C (within the range 36 °C to 39 °C) and a resolution of 0.02 °C. Calibration is valid in the range [-40, 115] °C. The last sensor is a 3-axial MEMS accelerometer used to collect the acceleration along the three dimensions x, y, z with a 32 Hz sampling frequency and a default measurement range of ± 2 g.
In this case, the resolution of the output signal is 0.015 g (8 bit). A dedicated mobile application (E4 realtime) was used to stream and view the data in real time on a mobile device connected to the Empatica E4 via Bluetooth Low Energy (BLE). Following each measurement session, the data were automatically transferred to a cloud repository (Empatica Connect) to view, manage, and download the raw data in .csv format in the post-processing phase of the study.

Figure 1. Measurement setup.

2.4. Data analysis

As mentioned above, in this study the data variability analysis was conducted on the HRV (or, more precisely, pulse rate variability [34], [35]), EDA and SKT signals, previously processed in the MATLAB environment in order to extract the relevant features. Regarding the HRV evaluation, after applying a previously developed artifact correction method [36], the analysis was performed on the IBI signals using the Kubios toolbox [37]. Seven meaningful HRV-related parameters were extracted from the corrected IBI signals in the time domain (Table 1), namely: mean and standard deviation of IBIs; mean, standard deviation, minimum and maximum values of HR; and the root mean square of successive RR interval differences (RMSSD). Frequency-domain parameters were not considered in the present work, to limit the number of parameters extracted from the same signal, and also because parameters in the frequency domain can be strongly affected by spurious components linked to movement artifacts, to which wrist-worn wearable devices are prone [38], even more so during intense physical activities [39]. Concerning the EDA data, the Bio-SP toolbox [40] was used to pre-process the signals and to extract all the features that the toolbox permits to compute.
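The seven time-domain HRV parameters listed above were computed in the study with the Kubios toolbox; for illustration only, the same features can be sketched in Python from a series of inter-beat intervals:

```python
import math

def hrv_time_domain(ibis_ms):
    # Time-domain HRV features from a series of inter-beat intervals (ms):
    # mean/std of IBIs, mean/std/min/max of instantaneous HR, and RMSSD
    n = len(ibis_ms)
    rr_mean = sum(ibis_ms) / n
    rr_std = math.sqrt(sum((x - rr_mean) ** 2 for x in ibis_ms) / (n - 1))
    hr = [60000.0 / ibi for ibi in ibis_ms]  # instantaneous HR in bpm
    hr_mean = sum(hr) / n
    hr_std = math.sqrt(sum((h - hr_mean) ** 2 for h in hr) / (n - 1))
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"RR_mean": rr_mean, "RR_std": rr_std,
            "HR_mean": hr_mean, "HR_std": hr_std,
            "HR_min": min(hr), "HR_max": max(hr),
            "RMSSD": rmssd}
```

This assumes an already artifact-corrected IBI series, as produced by the correction step described in [36].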
Indeed, the EDA signal is composed of the superimposition of two components, namely the skin conductance response (SCR) and the skin conductance level (SCL), related to the fast response to external stimulus events and to the slow changes in baseline level, respectively. This means that the SCL depends on individual characteristics (e.g. skin condition) and can differ markedly between individuals. Consequently, under rest conditions with no external stimuli, the SCL has a higher impact than the SCR component on both the EDA signal trend and its amplitude. According to the literature [41], a Gaussian low-pass filter, with a 40-point window and a sigma of 400 ms, was applied to reduce noise and motion artifacts due to potential movements of the subject's wrist. In order to characterize the EDA signal, the following five features were computed within the Bio-SP toolbox in the time domain (Table 1): SCR mean duration, SCR mean amplitude, SCR mean rise time, EDA mean signal, and number of SCRs. Finally, since an inspection of the SKT data revealed only slight and slow temperature changes at rest, no filters were applied. Therefore, from the raw SKT signal the following parameters were extracted (Table 1): mean, standard deviation, minimum and maximum of skin temperature. Once the whole set of features had been computed and extracted from the considered signals, both intra- and inter-subject variability were evaluated for each metric. More specifically, data variability was estimated by computing the mean (μ), standard deviation (σ) and coefficient of variation (c_v = σ/μ) for all the extracted features. Furthermore, the normality of the parameter distributions was verified by means of the Shapiro-Wilk test [42] (null hypothesis: the test population is normally distributed; p-value ≤ 0.05 considered statistically significant).

3. Results

In this section, the results are reported by grouping them according to data type: cardiac-related parameters (i.e.
HRV analysis parameters, Subsection 3.1), EDA-related parameters (Subsection 3.2) and skin temperature parameters (Subsection 3.3). Results are reported in the tables as μ ± σ (cv); some examples of mean distributions are also shown using histograms.

3.1. HRV parameters
The authors analysed the variability of the HRV signal at the parameter level, focusing on the features extracted in the time domain. The Shapiro-Wilk test showed that RR_mean, HR_mean, HR_min and HR_max can be considered normally distributed (p-value ≥ 0.05). An example of such a distribution is reported in the histogram (Figure 2) related to the RR_mean parameter. For the others (i.e. RR_std and RMSSD), the null hypothesis of normality was rejected; the reason could lie in the limited size of the test population (60 recordings from 10 subjects). Similarly, HR_std was found to be non-normally distributed, probably also due to the presence of one outlier subject (i.e. subject no. 6, see Table 2).

Table 1. Time-domain features extracted from the physiological signals acquired in the tests.
Signal | Feature | Unit | Description
HRV | RR_mean | ms | mean value of inter-beat intervals
HRV | RR_std | ms | standard deviation of inter-beat intervals
HRV | HR_mean | bpm | mean value of heart rate
HRV | HR_std | bpm | standard deviation of heart rate
HRV | HR_min | bpm | minimum value of heart rate
HRV | HR_max | bpm | maximum value of heart rate
HRV | RMSSD | ms | root mean square of successive inter-beat interval differences
EDA | SCR_d_mean | s | mean duration of skin conductance responses
EDA | SCR_a_mean | µS | mean amplitude of skin conductance responses
EDA | SCR_rt_mean | s | mean rise time of skin conductance responses
EDA | EDA_mean | µS | mean value of the EDA signal
EDA | SCR_n | - | number of skin conductance response peaks
SKT | SKT_mean | °C | mean value of skin temperature
SKT | SKT_std | °C | standard deviation of skin temperature
SKT | SKT_min | °C | minimum value of skin temperature
SKT | SKT_max | °C | maximum value of skin temperature

Observing the variability results in Table 2, it is possible to notice a very high variability, in particular for the parameters describing how the measurement oscillates around its mean value, i.e. the standard deviation values of RR, HR and RMSSD, which report inter-subject variabilities of 55.8 %, 126.9 % and 65.7 %, respectively. This seems to underline the physiological variability; hence, a subject's condition of interest cannot be described (and classified) without properly accounting for such data variability. A particular remark should be made on the extremely high inter-subject variability of the HR_std parameter; indeed, this could be linked to subject no. 6, who reported an extremely high intra-subject variability (i.e., 125.1 %), as already mentioned above. Indeed, a visual inspection of the data collected on subject no. 6 revealed that, among the tests conducted, one measurement was particularly noisy, hindering a reliable HRV analysis despite the use of proper artifact correction methods in the pre-processing phase. However, if this test is discarded from the variability analysis, the intra-subject variability of HR_std drops from (12 ± 15) bpm (125.1 %) to (6 ± 3) bpm (50.0 %), while the remaining parameters do not vary substantially; in this case, the inter-subject variability of the HR_std parameter would be (4 ± 3) bpm (81.1 %). The observed noise, which quite often characterises signals acquired through the PPG sensors of wearable devices, could be an effect of the subjects' wrist movements [38]. Intra-subject variability shows similar results, evidencing a very high variability, especially for the standard deviation parameters, which describe the variations over time. On the other hand, mean value parameters show a quite low intra-subject variability, with values often lower than 10 % (e.g.
for the RR_mean parameter, lower than 10 % with the exception of 3 subjects out of 10).

3.2. EDA parameters
As stated above, at rest the SCL is the predominant component of the EDA signal; this can result in very low-intensity signals related to the SCR component (more closely linked to possible stimuli), and consequently the EDA_mean parameter values are expected to be low. In fact, in Table 3, the EDA_mean parameters show very low values, down to 0.0005 μS for subject no. 9. Such very low mean values, together with a high signal variability (i.e. a high standard deviation), result in extremely high coefficients of variation (see, for example, subjects no. 3 and no. 9, where cv is extremely high because the mean value of the signal is an order of magnitude lower than its standard deviation). More generally, the parameters related to the EDA signal show a very high variability, with coefficient of variation values related to inter-subject variability often above 100 %. Intra-subject variability also appears extremely high, evidencing that the EDA signal is not stable over time; hence, it should be considered in its long-term evolution, rather than only through descriptive statistics. Such a high variability could be attributable to the fact that EDA measurements at complete rest, with no external stimuli, are quite challenging, especially when performed by means of wearable devices. In fact, there are multiple subjective factors influencing the measurement results. Furthermore, it should be considered that wrist EDA turns out to be quite different from standard finger EDA [43]. Regarding the type of distribution, none of the features extracted from the EDA signal can be considered normally distributed. The reason could again be attributed to the restricted test population. An example of distribution is reported in the histogram (Figure 3) for the EDA_mean parameter.

3.3.
Skin temperature parameters
In contrast to the previously reported parameters, skin temperature (Table 4) shows measures that vary slowly over time (with the exception of the standard deviation value, which evidences a higher variability, up to 87.1 % in the intra-subject results), hence providing a more precise footprint of a subject in a given condition. On the other hand, this could mean that the wrist skin temperature has a slow dynamic; thus, it might not be suitable to rapidly mirror possible changes in the subject's psycho-physical conditions.

Table 2. Variability of HRV parameters in the time domain. Results are reported as µ ± σ (cv).
Subject | RR_mean in ms | RR_std in ms | HR_mean in bpm | HR_std in bpm | HR_min in bpm | HR_max in bpm | RMSSD in ms
1 | 1044 ± 66 (6.3 %) | 70 ± 43 (61.0 %) | 58 ± 4 (6.5 %) | 4 ± 2 (49.9 %) | 50 ± 4 (8.8 %) | 65 ± 6 (8.7 %) | 94 ± 64 (68.6 %)
2 | 1152 ± 30 (2.6 %) | 36 ± 12 (33.7 %) | 52 ± 1 (2.5 %) | 2 ± 1 (48.5 %) | 49 ± 2 (3.3 %) | 58 ± 4 (6.1 %) | 46 ± 16 (33.7 %)
3 | 934 ± 44 (4.7 %) | 40 ± 13 (33.6 %) | 64 ± 3 (4.8 %) | 3 ± 1 (39.9 %) | 59 ± 5 (7.8 %) | 76 ± 5 (6.1 %) | 49 ± 18 (36.8 %)
4 | 1008 ± 80 (8.0 %) | 83 ± 51 (61.1 %) | 60 ± 5 (8.1 %) | 6 ± 5 (87.3 %) | 49 ± 7 (13.6 %) | 70 ± 5 (7.8 %) | 114 ± 69 (60.1 %)
5 | 1027 ± 80 (7.8 %) | 39 ± 20 (51.9 %) | 59 ± 5 (8.0 %) | 3 ± 2 (72.0 %) | 53 ± 3 (5.0 %) | 64 ± 6 (9.7 %) | 54 ± 30 (56.1 %)
6 | 991 ± 133 (13.5 %) | 72 ± 26 (35.5 %) | 62 ± 9 (14.2 %) | 12 ± 15 (125.1 %)* | 52 ± 8 (15.1 %) | 74 ± 12 (16.5 %) | 97 ± 42 (43.2 %)
7 | 938 ± 31 (3.4 %) | 68 ± 28 (40.5 %) | 64 ± 2 (3.4 %) | 6 ± 5 (81.3 %) | 55 ± 5 (9.1 %) | 74 ± 4 (4.8 %) | 90 ± 44 (48.9 %)
8 | 873 ± 44 (5.0 %) | 48 ± 2 (4.9 %) | 69 ± 3 (5.0 %) | 4 ± 1 (10.4 %) | 61 ± 3 (4.2 %) | 79 ± 6 (6.9 %) | 45 ± 2 (4.7 %)
9 | 1083 ± 139 (12.9 %) | 31 ± 7 (23.7 %) | 56 ± 7 (12.8 %) | 2 ± 1 (24.8 %) | 53 ± 7 (12.8 %) | 61 ± 7 (11.5 %) | 39 ± 7 (18.6 %)
10 | 885 ± 130 (14.7 %) | 49 ± 19 (39.6 %) | 69 ± 10 (14.7 %) | 4 ± 1 (16.9 %) | 62 ± 9 (15.3 %) | 83 ± 7 (8.7 %) | 43 ± 66 (39.0 %)
Tot. | 993 ± 117 (11.8 %) | 54 ± 30 (55.8 %) | 61 ± 7 (12.0 %) | 4 ± 6 (126.9 %)* | 54 ± 7 (12.9 %) | 70 ± 10 (14.2 %) | 67 ± 4 (65.7 %)
* Results affected by a particularly noisy test performed on subject no. 6.

Figure 2. Histogram related to the RR_mean parameter (HRV signal).

None of the parameters extracted from the SKT signal can be considered normally distributed, according to the Shapiro-Wilk test. The reason could be the same as indicated for the other signals (i.e. the narrow test population). An example is reported in the histogram (Figure 4) for SKT_mean (where the distribution skewness is markedly negative).

4. Discussion and conclusions
The use of wearable devices in a growing number of application fields emphasizes the need to consider the metrological aspects determining the reliability of measurement results. In recent years, AI algorithms have seen unprecedented developments, providing extremely powerful tools to support decision-making processes and thus prevent serious health issues in a variety of digital health applications, including affective state classification, e-health, smart living environments and ambient assisted living. To obtain good performance from AI algorithms, data accuracy and data quality are of the utmost importance, along with data variability, which undoubtedly represents a key factor in this scenario. Furthermore, only a part of the variability can be minimised (e.g. by correcting the sensor positioning in the data acquisition phase), while another part is inevitable and uncontrollable, given that there is a physiological variability whose values cannot be disregarded. It is a matter of fact that all the steps of the measurement chain influence the final results of AI algorithms: from the sensor uncertainties to the data variability and accuracy, all influencing the reliability of the output information.
In a real-life context, this can lead to a corrupted output with poor information quality, which could then be used for different final purposes (e.g. to support decision-making processes in digital health scenarios) [29]. The results obtained in this study have highlighted the physiological data variability both among different subjects and within the same subject, considering data acquired by means of a wearable device. In particular, the HRV and EDA signals were analysed first, observing that the time-domain HRV parameters exhibit a higher inter-subject variability for the measures describing their variations over time (i.e. the standard deviation values) than for the average values, which seem more stable. Furthermore, EDA signals appear to be extremely changeable even within the same subject, evidencing the intrinsically variable nature of this type of data. Indeed, this type of signal refers to wrist skin conductance rather than finger skin conductance, the site generally used for standard measurements. Previous studies underlined that the measurement is not reliable if compared to finger/palm skin conductivity [44]; in fact, thermoregulatory processes would affect the results more than psychophysiological phenomena, which, on the contrary, are more influential at the standard measurement sites [45]. On the other hand, other types of physiological data, such as SKT, can show a quite limited variability, proving more stable than HRV and EDA. However, the slow changes could be problematic when following, for example, the subject's reactions to external stimuli.

Table 3. Variability of EDA parameters. Results are reported as µ ± σ (cv).
Subject | SCR_d_mean in s | SCR_a_mean in μS | SCR_rt_mean in s | EDA_mean in μS | SCR_n
1 | 10.2 ± 5.0 (49.4 %) | 0.0097 ± 0.0034 (34.8 %) | 5.3 ± 2.6 (48.7 %) | 0.0022 ± 0.0032 (144.5 %) | 23 ± 7 (30.5 %)
2 | 26.1 ± 26.1 (99.8 %) | 0.0138 ± 0.0056 (72.7 %) | 13.1 ± 10.6 (80.3 %) | 0.0030 ± 0.0052 (177.1 %) | 14 ± 10 (71.2 %)
3 | 8.3 ± 5.4 (65.0 %) | 0.0095 ± 0.0056 (59.3 %) | 4.3 ± 2.9 (68.6 %) | 0.0010 ± 0.019 (1980.1 %) | 12 ± 9 (77.7 %)
4 | 12.4 ± 17.5 (140.9 %) | 0.0098 ± 0.0079 (80.8 %) | 9.4 ± 15.8 (167.9 %) | 0.0078 ± 0.010 (133.0 %) | 21 ± 16 (76.6 %)
5 | 18.9 ± 19.2 (101.7 %) | 0.0115 ± 0.0044 (38.1 %) | 5.2 ± 3.4 (65.5 %) | 0.0027 ± 0.0044 (131.4 %) | 22 ± 13 (59.5 %)
6 | 15.2 ± 10.0 (65.6 %) | 0.0112 ± 0.0051 (45.6 %) | 9.3 ± 7.4 (79.5 %) | 0.0043 ± 0.0051 (50.1 %) | 18 ± 7 (41.0 %)
7 | 14.5 ± 14.4 (99.2 %) | 0.0105 ± 0.0090 (85.7 %) | 10.6 ± 12.2 (115.1 %) | 0.0066 ± 0.0090 (97.4 %) | 16 ± 14 (85.5 %)
8 | 5.7 ± 1.3 (22.5 %) | 0.0257 ± 0.0160 (62.7 %) | 3.1 ± 0.7 (23.1 %) | 0.0029 ± 0.0160 (126.9 %) | 23 ± 9 (38.0 %)
9 | 4.9 ± 0.9 (18.1 %) | 0.0116 ± 0.0061 (52.9 %) | 2.7 ± 0.5 (18.4 %) | 0.0005 ± 0.0061 (862.7 %) | 30 ± 9 (31.3 %)
10 | 5.6 ± 1.2 (20.5 %) | 0.0144 ± 0.0065 (45.6 %) | 2.9 ± 0.6 (19.3 %) | 0.0029 ± 0.0065 (137.9 %) | 30 ± 8 (27.9 %)
Tot. | 12.2 ± 13.7 (112.3 %) | 0.0128 ± 0.0089 (69.3 %) | 6.6 ± 7.9 (120.0 %) | 0.0034 ± 0.0076 (224.3 %) | 21 ± 11 (54.6 %)

Figure 3. Histogram related to the EDA_mean parameter (EDA signal).
Figure 4. Histogram related to the SKT_mean parameter (SKT signal).

The observed variability can represent a double-edged sword: on one hand, the subjective diversity can hinder a clear-cut classification by means of AI algorithms; on the other hand, a test population sufficiently wide to include all the characteristic variability is required to develop robust AI algorithms that do not suffer from overfitting issues.
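Two of the effects discussed above, the sensitivity of σ-based metrics to a single noisy recording and the divergence of cv when the mean approaches zero, can be reproduced with a short numerical sketch. The HR_std values below are hypothetical (not the study's data); the EDA figures are the rounded Table 3 summary for subject no. 3, whose 1980.1 % was computed on the unrounded data:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (sigma / mu) expressed as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# One noisy recording is enough to inflate a sigma-based metric
# (hypothetical HR_std values in bpm):
clean = [4.0, 5.0, 6.0, 3.0, 5.0]
print(round(cv_percent(clean), 1))            # moderate variability
print(round(cv_percent(clean + [40.0]), 1))   # a single outlier dominates sigma and cv

# cv = sigma / mu diverges when the mean approaches zero, as for the
# EDA_mean values in Table 3 (rounded summary of subject no. 3, in microsiemens):
print(round(100.0 * 0.019 / 0.0010, 1))
```

This is why discarding the one noisy test on subject no. 6 nearly halved the intra-subject variability of HR_std, and why near-zero EDA_mean values yield cv values in the hundreds or thousands of percent.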
It is worth underlining that the test population of this study is quite limited (10 subjects); therefore, the normality condition could be non-optimally satisfied (verified through the Shapiro-Wilk test). It would be interesting to repeat this kind of analysis on some publicly available large-scale databases (e.g. WESAD [46], K-EmoCon [47], TILES [48], etc.), in order to examine the data variability results on wider populations (possibly including different age groups), also considering longer acquisition intervals and different measuring devices and acquisition conditions (e.g. free-living conditions, which would probably accentuate variability). Additionally, future studies may include one or more AI algorithms to compare the performance achieved on two datasets with different variabilities, in order to demonstrate the high impact of data variability on the outputs of AI algorithms, which can consequently affect decision-making processes.

Acknowledgement
A. P. and S. S. gratefully acknowledge the support of the Italian Ministry for Economic Development (MISE) in implementation of the financial programme "Research and Development Projects for the Implementation of the National Smart Specialization Strategy - DM MISE 5 marzo 2018", project "CHAALENGE", proposal no. 493, project nr. f/180016/0105/x43.

References
[1] g. cosoli, s. spinsante, l. scalise, wrist-worn and chest-strap wearable devices: systematic review on accuracy and metrological characteristics, measurement, apr. 2020, p. 107789. doi: 10.1016/j.measurement.2020.107789
[2] s. farivar, m. abouzahra, m. ghasemaghaei, wearable device adoption among older adults: a mixed-methods study, int. j. inf. manage., vol. 55, 2020, p. 102209. doi: 10.1016/j.ijinfomgt.2020.102209
[3] n. morresi, s. casaccia, m. sorcinelli, m. arnesano, g. m.
revel, analysing performances of heart rate variability measurement through a smartwatch, 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june-1 july 2020, pp. 1–6. doi: 10.1109/memea49120.2020.9137211
[4] s. levikari, a. immonen, m. kuisma, h. peltonen, m. silvennoinen, h. kyröläinen, p. silventoinen, improving energy expenditure estimation in wrist-worn wearables by augmenting heart rate data with heat flux measurement, ieee trans. instrum. meas., vol. 70, 2021, 8 pp. doi: 10.1109/tim.2021.3053070
[5] g. cosoli, g. iadarola, a. poli, s. spinsante, learning classifiers for analysis of blood volume pulse signals in iot-enabled systems, in ieee metroind4.0&iot, rome, italy, 7-9 june 2021, pp. 307-312. doi: 10.1109/metroind4.0iot51437.2021.9488497
[6] s. cecchi, a. piersanti, a. poli, s. spinsante, physical stimuli and emotions: eda features analysis from a wrist-worn measurement sensor, ieee int. workshop on computer aided modeling and design of communication links and networks, camad, pisa, italy, 14-16 september 2020, pp. 1-6. doi: 10.1109/camad50429.2020.9209307
[7] c. john, z. mueller, l. prayaga, k. devulapalli, a neural network model to identify relative movements from wearable devices, proc. of ieee southeastcon, raleigh, nc, usa, 28-29 march 2020, vol. 2, pp. 1-4. doi: 10.1109/southeastcon44009.2020.9368261
[8] n. mahadevan, y. christakis, j. di, j. bruno, y. zhang, e. ray dorsey, w. r. pigeon, l. a. beck, k. thomas, y. liu, m. wicker, c. brooks, n. shaafi kabiri, j. bhangu, c. northcott, s. patel, development of digital measures for nighttime scratch and sleep using wrist-worn wearable devices, npj digit. med., vol. 4, no. 1, 2021, pp. 1–10. doi: 10.1038/s41746-021-00402-x
[9] r. dai, c. lu, m. avidan, t. kannampallil, respwatch: robust measurement of respiratory rate on smartwatches with photoplethysmography, proc.
of the international conference on internet-of-things design and implementation, charlottesville, va, usa, 18-21 may 2021, pp. 208-220. doi: 10.1145/3450268.3453531
[10] j. chen, m. abbod, j. s. shieh, pain and stress detection using wearable sensors and devices—a review, sensors (switzerland), vol. 21, no. 4, 2021, mdpi ag, pp. 1–18. doi: 10.3390/s21041030
[11] k. bayoumy, m. gaber, a. elshafeey, o. mhaimeed, e. h. dineen, f. a. marvel, s. s. martin, e. d. muse, m. p. turakhia, kh. g. tarakji, m. b. elshazly, smart wearable devices in cardiovascular care: where we are and how to move forward, nat. rev. cardiol., 18 (2021), pp. 581–599. doi: 10.1038/s41569-021-00522-7
[12] s. cajigal, as consumer sleep trackers gain in popularity, sleep neurologists seek more data to assess how to use them in practice, neurol. today, vol. 21, no. 9, 2021, pp. 8-14. doi: 10.1097/01.nt.0000752872.13869.cc

Table 4. Variability of SKT parameters. Results are reported as µ ± σ (cv).
Subject | SKT_mean in °C | SKT_std in °C | SKT_min in °C | SKT_max in °C
1 | 33.20 ± 1.58 (4.8 %) | 0.17 ± 0.12 (71.7 %) | 32.81 ± 1.87 (5.7 %) | 33.46 ± 1.53 (4.6 %)
2 | 31.99 ± 2.20 (6.7 %) | 0.21 ± 0.15 (68.8 %) | 31.58 ± 2.13 (6.8 %) | 32.37 ± 2.29 (7.1 %)
3 | 34.02 ± 1.25 (3.7 %) | 0.07 ± 0.04 (56.5 %) | 33.90 ± 1.28 (3.8 %) | 34.20 ± 1.25 (3.6 %)
4 | 32.67 ± 1.62 (4.9 %) | 0.13 ± 0.07 (56.3 %) | 32.37 ± 1.70 (5.3 %) | 32.89 ± 1.55 (4.7 %)
5 | 33.05 ± 1.02 (3.1 %) | 0.12 ± 0.07 (60.5 %) | 32.80 ± 1.15 (3.5 %) | 33.30 ± 0.98 (3.0 %)
6 | 32.14 ± 2.04 (6.4 %) | 0.09 ± 0.04 (46.0 %) | 31.88 ± 2.17 (6.8 %) | 32.27 ± 2.03 (6.3 %)
7 | 32.02 ± 2.07 (6.5 %) | 0.10 ± 0.09 (87.1 %) | 31.83 ± 2.05 (6.4 %) | 32.22 ± 2.21 (6.9 %)
8 | 35.22 ± 0.24 (0.7 %) | 0.06 ± 0.04 (55.6 %) | 35.09 ± 0.24 (0.7 %) | 35.34 ± 0.24 (0.7 %)
9 | 35.32 ± 0.24 (0.7 %) | 0.05 ± 0.03 (76.5 %) | 35.23 ± 0.26 (0.7 %) | 35.43 ± 0.27 (0.8 %)
10 | 33.95 ± 1.09 (3.2 %) | 0.05 ± 0.02 (39.6 %) | 33.86 ± 1.08 (3.2 %) | 34.07 ± 1.14 (3.4 %)
Tot. | 33.36 ± 1.82 (5.4 %) | 0.11 ± 0.09 (83.7 %) | 33.14 ± 1.91 (5.8 %) | 33.55 ± 1.80 (5.4 %)

[13] c. p. wen, j. p. m. wai, c. h. chen, w. gao, can weight loss be accelerated if we exercise smarter with wearable devices by subscribing to personal activity intelligence (pai)?, lancet reg. heal. eur., vol. 5, 2021, p. 100133 (8 pp.). doi: 10.1016/j.lanepe.2021.100133
[14] l. scalise, g. cosoli, wearables for health and fitness: measurement characteristics and accuracy, proc. of the 2018 ieee international instrumentation and measurement technology conference i2mtc: discovering new horizons in instrumentation and measurement, houston, tx, usa, 14-17 may 2018, pp. 1-6. doi: 10.1109/i2mtc.2018.8409635
[15] j. ringrose, r. padwal, wearable technology to detect stress-induced blood pressure changes: the next chapter in ambulatory blood pressure monitoring?, american journal of hypertension, vol. 34, no. 4, nlm (medline), 2021, pp. 330–331. doi: 10.1093/ajh/hpaa158
[16] g. quer, j. m. radin, m. gadaleta, k. baca-motes, l. ariniello, e. ramos, v. kheterpal, e. j. topol, s. r. steinhubl, wearable sensor data and self-reported symptoms for covid-19 detection, nat. med., vol. 27, no. 1, 2021, pp. 73–77. doi: 10.1038/s41591-020-1123-x
[17] j. budd, b. s. miller, e. m. manning, v. lampos, m. zhuang, m. edelstein, g. rees, v. c.
emery, m. m. stevens, n. keegan, m. j. short, d. pillay, e. manley, i. j. cox, d. heymann, a. m. johnson, r. a. mckendry, digital technologies in the public-health response to covid-19, nat. med., vol. 26, no. 8, aug. 2020, pp. 1183–1192. doi: 10.1038/s41591-020-1011-4
[18] m. l. millenson, j. l. baldwin, l. zipperer, h. singh, beyond dr. google: the evidence on consumer-facing digital tools for diagnosis, diagnosis, vol. 5, no. 3, 2018, pp. 95–105. doi: 10.1515/dx-2018-0009
[19] g. cosoli, s. spinsante, l. scalise, wearable devices and diagnostic apps: beyond the borders of traditional medicine, but what about their accuracy and reliability?, instrum. meas. mag., vol. 24, no. 6, september 2020, pp. 89-94. doi: 10.1109/mim.2021.9513636
[20] m. cukurova, c. kent, r. luckin, artificial intelligence and multimodal data in the service of human decision-making: a case study in debate tutoring, br. j. educ. technol., vol. 50, no. 6, 2019, pp. 3032–3046. doi: 10.1111/bjet.12829
[21] a. chang, the role of artificial intelligence in digital health, springer, cham, 2020, pp. 71–81. doi: 10.1007/978-3-030-12719-0_7
[22] e. b. hansen, s. bøgh, artificial intelligence and internet of things in small and medium-sized enterprises: a survey, j. manuf. syst., vol. 58, 2021, pp. 362–372. doi: 10.1016/j.jmsy.2020.08.009
[23] m. borghetti, p. bellitti, n. f. lopomo, m. serpelloni, e. sardini, f. bonavolonta, validation of a modular and wearable system for tracking fingers movements, acta imeko, vol. 9, no. 4, 2020, pp. 157–164. doi: 10.21014/acta_imeko.v9i4.752
[24] a. razzaque, a. hamdan, artificial intelligence based multinational corporate model for ehr interoperability on an ehealth platform, studies in computational intelligence, vol. 912, springer, 2021, pp. 71–81. doi: 10.1007/978-3-030-51920-9_5
[25] t. zhang, a. el ali, c. wang, a. hanjalic, p. cesar, corrnet: fine-grained emotion recognition for video watching using wearable physiological sensors, sensors (switzerland), vol.
21, no. 1, 2021, pp. 1–25. doi: 10.3390/s21010052
[26] s. mekruksavanich, a. jitpattanakul, biometric user identification based on human activity recognition using wearable sensors: an experiment using deep learning models, electronics, vol. 10, no. 3, 2021, pp. 1-21. doi: 10.3390/electronics10030308
[27] k. tsoi, k. yiu, h. lee, h.-m. cheng, t.-d. wang, j.-c. tay, b. w. teo, y. turana, a. a. soenarta, g. p. sogunuru, s. siddique, y.-c. chia, j. shin, c.-h. chen, j.-g. wang, k. kario, the hope asia network, applications of artificial intelligence for hypertension management, journal of clinical hypertension, vol. 23, no. 3, blackwell publishing inc., 2021, pp. 568–574. doi: 10.1111/jch.14180
[28] e. anceschi, g. bonifazi, m. c. de donato, e. corradini, d. ursino, l. virgili, savemenow.ai: a machine learning based wearable device for fall detection in a workplace, studies in computational intelligence, vol. 911, springer science and business media deutschland gmbh, 2021, pp. 493–514. doi: 10.1007/978-3-030-52067-0_22
[29] s. casaccia, g. m. revel, g. cosoli, l. scalise, assessment of domestic well-being: from perception to measurement, instrum. meas. mag., vol. 24, no. 6, 2021, pp. 58-67. doi: 10.1109/mim.2021.9513641
[30] a. poli, g. cosoli, l. scalise, s. spinsante, impact of wearable measurement properties and data quality on adls classification accuracy, ieee sens. j., vol. 21, no. 13, july 2021, pp. 14221-14231. doi: 10.1109/jsen.2020.3009368
[31] c. sáez, n. romero, j. a. conejero, j. m. garcía-gómez, potential limitations in covid-19 machine learning due to data source variability: a case study in the ncov2019 dataset, j. am. med. informatics assoc., vol. 28, no. 2, 2021, pp. 360–364. doi: 10.1093/jamia/ocaa258
[32] m. garbarino, m. lai, d. bender, r. w. picard, s. tognetti, empatica e3 — a wearable wireless multi-sensor device for real-time computerized biofeedback and data acquisition, proc.
of the 4th int. conference on wireless mobile communication and healthcare: transforming healthcare through innovations in mobile and wireless technologies (mobihealth), athens, greece, 3-5 november 2014, pp. 39–42. doi: 10.1109/mobihealth.2014.7015904
[33] f. scardulla, l. d'acquisto, r. colombarini, s. hu, s. pasta, d. bellavia, a study on the effect of contact pressure during physical activity on photoplethysmographic heart rate measurements, sensors (switzerland), vol. 20, no. 18, 2020, pp. 1–15. doi: 10.3390/s20185052
[34] e. yuda, m. shibata, y. ogata, n. ueda, t. yambe, m. yoshizawa, j. hayano, pulse rate variability: a new biomarker, not a surrogate for heart rate variability, j. physiol. anthropol. (2020), pp. 1-4. doi: 10.1186/s40101-020-00233-x
[35] n. pinheiro, r. couceiro, j. henriques, j. muehlsteff, i. quintal, l. goncalves, p. carvalho, can ppg be used for hrv analysis?, proc. annu. int. conf. ieee eng. med. biol. soc. embs, orlando, fl, usa, 16-20 august 2016, pp. 2945–2949. doi: 10.1109/embc.2016.7591347
[36] g. cosoli, a. poli, l. scalise, s. spinsante, heart rate variability analysis with wearable devices: influence of artifact correction method on classification accuracy for emotion recognition, proc. of the 2021 ieee int. instrumentation and measurement technology conference i2mtc: discovering new horizons in instrumentation and measurement, glasgow, united kingdom, 17-20 may 2021, pp. 1-6. doi: 10.1109/i2mtc50364.2021.9459828
[37] m. p. tarvainen, j.-p. niskanen, j. a. lipponen, p. o. ranta-aho, p. a. karjalainen, kubios hrv – heart rate variability analysis software, comput. methods programs biomed., vol. 113, no. 1, 2014, pp. 210–220. doi: 10.1016/j.cmpb.2013.07.024
[38] j. lee, m. kim, h. k. park, i. y. kim, motion artifact reduction in wearable photoplethysmography based on multi-channel sensors with multiple wavelengths, sensors (switzerland), vol. 20, no.
5, 2020, 1493, pp. 1-14. doi: 10.3390/s20051493
[39] h. lee, h. chung, j. w. kim, j. lee, motion artifact identification and removal from wearable reflectance photoplethysmography using piezoelectric transducer, ieee sens. j., vol. 19, no. 10, 2019, pp. 3861–3870. doi: 10.1109/jsen.2019.2894640
[40] m. nabian, y. yin, j. wormwood, k. s. quigley, l. f. barrett, s. ostadabbas, an open-source feature extraction tool for the analysis of peripheral physiological data, ieee j. transl. eng. heal. med., vol. 6, 2018, pp. 1-11. doi: 10.1109/jtehm.2018.2878000
[41] a. greco, g. valenza, e. p. scilingo, electrodermal phenomena and recording techniques, advances in electrodermal activity processing with applications for mental health, springer international publishing, 2016, pp. 1–17. doi: 10.1007/978-3-319-46705-4_1
[42] s. s. shapiro, m. b.
wilk, an analysis of variance test for normality (complete samples), biometrika, vol. 52, no. 3/4, 1965, pp. 591-611. doi: 10.2307/2333709
[43] k. kasos, z. kekecs, l. csirmaz, s. zimonyi, f. vikor, e. kasos, a. veres, e. kotyuk, a. szekely, bilateral comparison of traditional and alternate electrodermal measurement sites, psychophysiology, vol. 57, no. 11, 2020, e13645, pp. 1-15. doi: 10.1111/psyp.13645
[44] n. milstein, i. gordon, validating measures of electrodermal activity and heart rate variability derived from the empatica e4 utilized in research settings that involve interactive dyadic states, front. behav. neurosci., vol. 14, 2020, 13 pp. doi: 10.3389/fnbeh.2020.00148
[45] l. menghini, e. gianfranchi, n. cellini, e. patron, m. tagliabue, m. sarlo, stressing the accuracy: wrist-worn wearable sensor validation over different conditions, psychophysiology, vol. 56, no. 11, 2019, e13441, 15 pp. doi: 10.1111/psyp.13441
[46] p. schmidt, a. reiss, r. duerichen, c. marberger, k. van laerhoven, introducing wesad, a multimodal dataset for wearable stress and affect detection, proc. of the 20th acm international conference on multimodal interaction, boulder, co, usa, 16-20 october 2018, pp. 400–408. doi: 10.1145/3242969.3242985
[47] c. y. park, n. cha, s. kang, a. kim, a. habib khandoker, l. hadjileontiadis, a. oh, y. jeong, u. lee, k-emocon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations, sci. data, vol. 7, 2020, no. 1, 293, pp. 1–16. doi: 10.1038/s41597-020-00630-y
[48] k. mundnich, b. m. booth, m. l'hommedieu, t. feng, b. girault, j. l'hommedieu, m. wildman, s. skaaden, a. nadarajan, j. l. villatte, t. h. falk, k. lerman, e. ferrara, s. narayanan, tiles-2018, a longitudinal physiologic and behavioral data set of hospital workers, sci. data, vol. 7, no. 1, 2020, p. 354.
doi: 10.1038/s41597-020-00655-3

Digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system
acta imeko, issn: 2221-870x, december 2022, volume 11, number 4, 1-8
acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 1

Annalisa Liccardo1, Francesco Bonavolontà1, Rosario Schiano Lo Moriello2, Francesco Lamonaca3, Luca De Vito4, Antonio Gloria2, Enzo Caputo2, Giorgio De Alteriis2
1 Dipartimento di Ingegneria Elettrica, Università di Napoli Federico II, Naples, Italy
2 Dipartimento di Ingegneria Industriale, Università di Napoli Federico II, Naples, Italy
3 Dipartimento di Ingegneria Informatica, Modellistica, Elettronica e Sistemistica, Università della Calabria, Italy
4 Dipartimento di Ingegneria, Università degli Studi del Sannio, Italy

Section: Research paper
Keywords: cyber-physical systems; digital twin; remote control; augmented reality; measurement instrumentation
Citation: Annalisa Liccardo, Francesco Bonavolontà, Rosario Schiano Lo Moriello, Francesco Lamonaca, Luca De Vito, Antonio Gloria, Enzo Caputo, Giorgio De Alteriis, Digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system, acta imeko, vol. 11, no.
4, article 15, december 2022, identifier: imeko-acta-11 (2022)-04-15 section editor: leonardo iannucci, politecnico di torino, italy received november 14, 2022; in final form november 26, 2022; published december 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: rosario schiano lo moriello, e-mail: rschiano@unina.it 1. introduction the concept of cyber-physical systems (cps) was first presented in 2006, introduced by helen gill of the national science foundation [1], in the united states, to denote a plane of "local sensation and manipulation of the physical world" correlated with a virtual plane of "real-time control and observability". the concept is presented as an evolution of embedded systems in which computational capability and its effects descend deep into every physical component of the system and even within the materials [2]. from this initial vision, the concept of cps has broadened over time in a way that does not facilitate its unambiguous and definitive conceptualization or representation. the most common definition of cps, as an integration of computational resources and physical processes, now seems too simplistic, as other elements, particularly large-scale network connectivity, have rightfully entered the perimeter of cps. in its current most common definition, a cps is considered an integration of systems of different natures whose main purpose is the control of a physical process and, through feedback, its adaptation in real time to new operating conditions. this is achieved by the fusion of physical objects and processes, computational platforms, and telecommunications networks [3], [4].
abstract – the recent growth of the internet of things and industry 4.0 has led researchers to investigate innovative technologies that can support these emerging topics in different areas of application. in particular, the current trend is to close the gap between the physical and digital worlds, thus originating the so-called cyber-physical system (cps). a relevant feature of the cps is the digital twin, i.e., a digital replica of a process/product with which the user can interact to operate on the real world. in this paper, the authors propose an innovative approach that exploits an augmented reality solution as the digital twin of a measurement instrument, to obtain a tight connection between measurement (the physical world) and the internet of things (the digital world). in fact, by means of a 3d scanning strategy, augmented reality software, and the development of a suitable connection between the instrument and the digital world, a cyber-physical system has been realized as an iot platform that collects data from and controls the real measurement instrument and makes it available in augmented reality. an application example involving a digital storage oscilloscope is finally presented to highlight the efficacy of the proposed approach. the term physical refers to actual objects as they are "perceivable" by human senses, while the term cyber refers to the abstract virtual image in which actual objects are represented and enriched with one or more additional layers of information. the relationship between physical objects and their virtual "interpretation" has been referred to by some authors with the effective term "social network of objects" [5].
cpss, therefore, are based on related objects that, through sensors, actuators, and network connections, generate and acquire data of various kinds, thus reducing distances and information asymmetries between all elements of the system [6]. with the help of widespread sensors, the cps can autonomously determine its current operational state within its environment and the distance between its component objects. actuators perform planned actions and execute corrective decisions, optimizing a process or solving a problem [7]. decisions are made by an intelligence that evaluates information internal to the cps and, in some scenarios, also information from other cpss [8]-[10]. in this context, a very important aspect is the interaction with measurement systems (in terms of both sensors or sensor networks and actual measurement instruments). in this case, the aim is to make the measurement system an integral and priority part of the cps; realizing a digital twin of the measurement system that users can "touch with their hands" is, therefore, a desirable condition [11]. remote control of instruments is a research activity that saw its first examples in the 1990s; however, such activities were only aimed at enabling measurements to be made through remote programming of instruments. since then, the point of view, both educational and industrial, has changed, requiring a faithful replication of the system to be controlled, with direct interaction of the operator (whether student or worker) on the instrument. for this reason, the authors propose an augmented reality-based approach to create a digital twin of the measuring instrument that can be controlled and operated remotely; for this purpose, several enabling technologies inherent in the industry 4.0 and internet of things paradigms are used, such as augmented reality and the mqtt communication protocol. the goal is to enable users to operate the remote instruments as if they were physically present [12].
the paper is organized as follows: in section 2, a literature review on cps and ar-based applications is presented, while the proposed method is described in detail in section 3. an application example is given in section 4 before drawing the conclusions in section 5. 2. related work providing an exhaustive review of the exploitation of ar in cyber-physical systems is a difficult challenge due to the wide variety of configurations and application fields involved, such as additive manufacturing [13]-[16], industry 4.0 [17]-[22] and autonomous vehicles [23]-[25]. as an example, an application of cps in the manufacturing environment is proposed in [7], where real-time data are sent into cyberspace through different types of networks to build a digital twin of the machine tool. finally, ar is exploited as an interface between the human and the cps, mainly to retrieve information about the ongoing processes. in [26], the authors present a closed-loop cooperative human cps for the mining industry that exploits the information obtained from ar and virtual reality (vr) systems. the main goal of this research is to allow human interaction with the mining operation by means of a deep integration of ar and vr; this integration makes visual information available to the operator, who is thus supported in making correct decisions, conducting inspections and interacting with the equipment. a compelling cps for autonomous vehicle applications has been proposed in [9]. the authors have developed an ar indoor environment for testing and debugging autonomous vehicles that realize dynamic missions in laboratory environments for planned mission testing and validation; the proposed solution is realized by exploiting ground and aerial systems, motion cameras, ground projectors, and network communications to obtain a standard testing and prototyping environment for cpss.
finally, the real-time performance of perception, planning, and learning algorithms has been evaluated in a field test. furthermore, in [27], context-aware guidance of cyber-physical processes has been proposed for the press line maintenance process. in particular, by means of a suitable context graph, it was possible to manage and structure the cps sensor data. in addition, an ar application is adopted to support the interaction of users with the cps processes thanks to the integration of position and marker sensors in the proposed solution; object detection is improved by means of digital data that ensure improved guidance in the process execution. the adoption of ar with cps makes it possible to support end users in manual tasks, where they are guided and monitored during the operations. an interesting application is evaluated in [28], where a suitable ar navigation system for industrial applications has been proposed. the authors have developed a prototype of an autonomous vehicle that interacts with a robot arm. in particular, the robot arm interacts with the vehicle when the correct position is reached, and the vehicle navigation is obtained by exploiting an ar solution based on marker recognition; these markers are used as system reference position points. finally, the research carried out in [29] has highlighted that the integration of iot platforms and ar software as a digital twin is the most suitable technology to close the gap between the physical and digital worlds, by means of the definition of a digital twin architecture model, the introduction of a digital twin service and the investigation of the key elements of industry 4.0 required for the realization of a digital twin.
differently from the solutions mentioned so far, which are mainly intended to exploit ar as a fundamental information layer for the interaction with the cps, the authors propose a method to realize a cps for measurement instruments that acts as an interface between the actual and the ar world. instrument control and the related measurements are not carried out as simulations: the ar interaction between the user and the rendered instruments corresponds to real physical state changes in the real world. to accomplish the considered target, a general framework for the definition, implementation, and assessment of an example of digital twin application will be presented in the following by referring to a typical measuring instrument. 3. proposed method the method adopted to create a digital twin of the measurement instrument exploits an augmented reality approach based on the framework shown in figure 1. the first step is dedicated to generating the 3d geometric model of the desired instrument; operating approaches typical of reverse engineering can be applied to the purpose. in particular, the reverse engineering approach uses (i) suitable tools to acquire 3d spatial data of objects and (ii) dedicated software environments to manipulate and convert them into useful information. faro's non-contact laser scanarm [30] (figure 2) was chosen to acquire the image and the main dimensional information about the instruments; the output of the scan operation consists of clouds of points in 3d space. thanks to computer-aided design (cad) systems, it has been possible to define mathematical representations allowing the reconstruction of the instrument geometry, thus obtaining a 3d cad model of the desired object.
in order to move from point clouds to the extraction of both surface and geometrical characteristics, suitable software as well as reverse engineering techniques were combined to provide a very efficient and robust solution. in particular, the point clouds were first handled using the geomagic wrap software [31] to transform those data into polygonal meshes, and then the 3d image reconstruction was performed. from the reconstruction of the case, front panel, back panel and support systems, the 3d model of the instrument has been obtained; buttons and knobs have been separately reconstructed and placed one by one in their well-defined positions. the scanning phase produces a file with an obj extension containing an "empty" container, i.e. a simple 3d view of the instrument (figure 3). the successive step aimed at transforming the obtained 3d object into an augmented interactive object; the 3d graphics development platform unity has been used for the purpose. every interaction that the user performs on the augmented reality object is translated into the corresponding operation executed on the actual instrument. to this aim, a typical iot protocol called message queue telemetry transport (mqtt) [32] is used to communicate with the laboratory where the instruments are placed. in particular, the lab is equipped with a personal computer connected on one side to the ethernet network and on the other side to the instruments. the pc converts received mqtt messages into messages compliant with the protocol used by the instruments (ieee 488), in order to forward to the instruments the commands and requests corresponding to the ar user's operations. figure 1. proposed solution workflow: from the physical world to the digital twin. figure 2. 3d scanning strategy of the instrument exploiting a non-contact laser scanarm by faro.
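the gateway behaviour described above (mqtt message in, ieee 488/scpi command out) can be sketched as follows; the topic layout and the scpi command strings are illustrative assumptions, not the actual protocol used by the authors:

```python
# sketch of the pc-side gateway: translate an incoming mqtt message
# (topic + payload) into the command string forwarded to the instrument
# over ieee 488. topic structure and scpi commands are hypothetical.

# hypothetical map from (panel element, action) to a scpi template
ACTION_TO_SCPI = {
    ("ch1", "vdiv"): "CH1:SCALE {value}",
    ("ch1", "coupling"): "CH1:COUPLING {value}",
    ("horizontal", "sdiv"): "HORIZONTAL:SCALE {value}",
}

def mqtt_to_scpi(topic: str, payload: str) -> str:
    """map a topic such as 'lab/scope1/ch1/vdiv' with payload '0.5'
    to the corresponding instrument command."""
    levels = topic.split("/")
    key = (levels[-2], levels[-1])        # e.g. ("ch1", "vdiv")
    return ACTION_TO_SCPI[key].format(value=payload)
```

in the same way, an inverse table would translate instrument responses back into mqtt messages for the ar application.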
a suitable software module on the pc implements an mqtt client, which has the role of receiving all messages from the augmented instrument and sending them to the actual instrument and vice versa. communication among different mqtt clients is assured by a third entity, the so-called broker, mandated to dispatch messages to the clients subscribed to specific information arguments, referred to as topics [33]. to assure reliable operation and continuous service, as well as to maintain complete control over the exchanged data, it was decided to exploit a private broker. to realize an augmented reality application, unity exploits a software development plug-in, namely vuforia, which allows it to recognize images chosen as targets. in particular, vuforia is capable of assessing the quality of an image target and thus providing feedback regarding the possible usage of that image as a target. images with a more complex pattern proved to be the best candidates as targets in terms of 3d reconstruction location and stability; the one adopted for the realised application is shown in figure 4 and produced a satisfying evaluation from the vuforia tool. in an augmented reality application, the image target has an important role, since it allows a 3d object to be located in a scene; when the ar application starts and the camera mounted on the device frames the considered image target, digital replicas of the objects, in this case the instruments, are superimposed on the image itself. as said before, the obj file contains a 3d replica of the instrument. therefore, it was necessary to import the obj file into the unity environment and make the appearance of this object similar to the actual one. regarding the oscilloscope shown in figure 3, it has been necessary to add the labels above each key, the model of the instrument in the top left-hand corner, as well as additional markings and symbols that are present on the real instrument.
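the broker's dispatching rule mentioned above is based on topic filters; a minimal sketch of standard mqtt filter matching (single-level '+' and multi-level '#' wildcards, as defined in the mqtt 3.1.1 specification [33]) is:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """standard mqtt matching: '+' matches exactly one topic level,
    '#' (allowed only as the last level) matches all remaining levels."""
    f_levels, t_levels = filter_.split("/"), topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":                  # multi-level wildcard
            return True
        if i >= len(t_levels):            # topic ran out of levels
            return False
        if level not in ("+", t_levels[i]):
            return False
    return len(f_levels) == len(t_levels)
```

the broker applies such a check against every subscription to decide which clients receive a published message.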
in addition, compared to the real instrument, '+' and '-' symbols were added on each knob to allow the user to rotate them with step values similar to those of the actual instrument. the result of this operation is shown in figure 5. the final step regards the communication between the ar and the actual instrument; to this aim, a software module in the c# language has been implemented to recognize button pressure on the virtual object and perform the corresponding operation by sending the command to the actual instrumentation (figure 6). as said before, the protocol used to communicate with the real instrumentation is mqtt, so an mqtt client was created within the application running on the user device (e.g. smartphone, tablet or smart glasses) to send commands to the actual instrumentation and to receive the corresponding responses from it when necessary, as in the case of the so-called queries [34]. in particular, the developed c# module can recognize whether the sequence of buttons pressed, and consequently of operations, is correct and, in this case, sends the command to the real instrument; as previously said, a pc in the laboratory runs suitable software that receives these messages and sends them to the associated instruments. if the command requires a response from the instrumentation, the c# module reads these responses, which are properly managed to be shown on the display of the 3d ar instrument. in this way, the user has the feeling of interfacing with the actual instrument even from a different location. figure 3. oscilloscope 3d model obtained from the 3d scanning strategy. figure 4. example of image target exploited for rendering and spatially locating ar instruments. figure 5. comparison between the actual instrument and its digital twin: instrument view in the ar software environment (a), actual instrument (b).
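the sequence check performed by the c# module can be illustrated with a simple state-transition table; the button names and allowed transitions below are hypothetical, chosen only to show the idea (and the sketch is in python rather than the authors' c#):

```python
# hypothetical table of which button presses are allowed after each one;
# an operation reaches the real instrument only if every step is valid.
ALLOWED_NEXT = {
    "power": {"ch1_menu", "trigger_menu", "cursor"},
    "ch1_menu": {"coupling", "vdiv_plus", "vdiv_minus", "cursor"},
    "cursor": {"cursor_up", "cursor_down", "ch1_menu"},
}

def sequence_valid(presses) -> bool:
    """True when each press is an allowed follower of the previous one."""
    return all(cur in ALLOWED_NEXT.get(prev, set())
               for prev, cur in zip(presses, presses[1:]))
```

an invalid sequence is simply rejected on the user device, so no malformed command is ever forwarded over mqtt.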
finally, to make the instrument's behaviour as close as possible to reality, secondary effects such as pressure emulation, knob movement or button backlighting are also reproduced. 4. application example of the implemented ar-based cyber-physical system this section aims to present a case study of the proposed approach by considering the typical operations that students have to perform on a digital oscilloscope during basic metrology courses. to this aim, a proper mobile application has been realized to assess the reliability of the method. when the student interacts with the rendered 3d instrument, the first operation he/she must perform, as with the actual instrument, is to switch it on. in fact, when the oscilloscope power button is pressed in the ar application, a query is sent to the actual instrument to retrieve the waveforms currently present on the instrument's display as well as its configuration parameters (for example, the horizontal, s/div, and vertical, v/div, resolutions; figure 7). this way, the student can gain awareness and knowledge of the signal currently acquired and displayed on the actual oscilloscope and, consequently, change the parameters, depending on the operations to be performed, by acting on the corresponding v/div and s/div knobs. in addition, the student can obtain information about the signal coupling. as on the actual instrument, the "ch1 menu" button must be clicked, and the information appears on the right-hand side of the display (figure 8). in the example shown in figure 8, the waveform coupling is "dc". figure 6. example of the implemented c# module for the interaction between user, ar reconstruction and actual instrumentation. figure 7. waveform and resolution information on the ar display after turning on. figure 8. ch1 menu selection on the ar instrument. figure 9. signal level.
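the '+'/'-' taps on the ar knobs step the v/div and s/div settings through discrete values; a sketch assuming the common 1-2-5 sequence of oscilloscope attenuators (the step table of the real instrument may differ):

```python
import math

STEPS = [1, 2, 5]  # classic 1-2-5 sequence of scope v/div and s/div knobs

def next_step(value: float, direction: int) -> float:
    """return the adjacent 1-2-5 step above (direction=+1) or below
    (direction=-1) the current setting, e.g. 0.5 v/div -> 1 v/div."""
    decade = math.floor(math.log10(value))
    mantissa = round(value / 10 ** decade)     # 1, 2 or 5
    idx = STEPS.index(mantissa) + direction
    return STEPS[idx % 3] * 10 ** (decade + idx // 3)
```

each tap thus maps to one well-defined setting, which is then sent to the real instrument instead of a continuous rotation angle.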
by turning the specific knob located on the "ch1 menu" button it is possible to change the signal level (figure 9), so that the signal is moved up or down with respect to the axis origin. this is important if the student has both signals on the instrument display and does not want to display them superimposed. moreover, it is possible to change the trigger level, as on the actual instrument, and to perform operations with the cursors to measure the period and amplitude of the signal. figure 10.a shows how the cursors were used to evaluate the peak-to-peak amplitude of the signal, while in figure 10.b the period was measured. it is worth noting the close agreement between the display of the instrument rendered within the ar application and that of the actual instrument in the laboratory. as a further case study, a typical student exercise is presented, aimed at measuring the variation in amplitude and phase of the output signal of an rc filter circuit; in particular, an rc filter with a cut-off frequency of 1.7 khz was used. figure 11 shows the measurement setup present in the actual laboratory, where in this case the device under test (dut) is the rc filter. as can be appreciated in figure 11, the input signal of the filter is connected to channel 1 of the oscilloscope, while channel 2 shows the output signal of the filter; figure 12 shows the corresponding results as the frequency of the input signal rises. in particular, figure 12.a presents the acquired (and almost superimposed) waveforms of the two channels when an input sine wave with a frequency of 200 hz is applied. when the frequency of the generated signal increases up to 1 khz (figure 12.b), the output waveform begins to be attenuated and delayed with respect to the input one.
as the frequency is increased to 5 khz (figure 12.c), the filtering effect of the considered dut proves to be evident, with a significant attenuation and displacement of the output waveform. as in the actual lab experience, the gain and phase shift of the filter can be measured by means of the cursors as in figure 10; for the sake of brevity, the procedure is not shown in the paper. 5. conclusions the purpose of the current research study was to evaluate the capabilities of cps in the measurement framework. in particular, a suitable method to develop a digital twin has been proposed, and a real implementation referring to an oscilloscope has been presented. in fact, starting from a 3d scanning strategy, the actual instrument is reconstructed in augmented reality, each front panel element is associated with an instruction of the actual instrument, and then an ar application is developed by means of the vuforia and unity environments. moreover, a suitable communication channel, based on the mqtt protocol, is adopted between the actual instrument and its 3d reconstruction. to this aim, a personal computer is exploited to realize (i) the physical connection with the actual instrument, (ii) the internet or local connection with the mqtt client, and (iii) an interpreter for the commands sent to the instrument and the responses returned from the instrument to the ar application. as an example, a typical application is evaluated in which a filtered signal has been correctly displayed and measured in terms of frequency in the ar application. thus, it was proved that the oscilloscope can be used in an ar framework, where it is remotely controlled and sends the actual measurements (acquired in the physical world) to the augmented reality world. it has been demonstrated that a cps for measuring instruments can be realized, highlighting that the instrument's digital twin acts in the same way as the instrument in the real world. figure 10. evaluation of a) the peak-to-peak amplitude and b) the period of the signal.
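the measured behaviour matches what a first-order rc low-pass predicts; a short check of the expected gain and phase at the three test frequencies, assuming an ideal filter with the stated 1.7 khz cut-off:

```python
import math

def rc_lowpass(f_hz: float, fc_hz: float = 1.7e3):
    """ideal first-order rc low-pass: |H| = 1/sqrt(1 + (f/fc)^2),
    phase = -atan(f/fc) in radians."""
    r = f_hz / fc_hz
    return 1.0 / math.sqrt(1.0 + r * r), -math.atan(r)

# gain ~0.99 at 200 hz (waveforms almost superimposed), ~0.86 at 1 khz,
# and ~0.32 at 5 khz (the strong attenuation visible in figure 12.c)
for f in (200.0, 1e3, 5e3):
    gain, phase = rc_lowpass(f)
    print(f"{f:6.0f} hz: gain {gain:.3f}, phase {math.degrees(phase):6.1f} deg")
```

these are the values a student should roughly recover with the cursor measurements of figure 10.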
figure 11. laboratory setup in the experiments with the rc filter. figure 12. evolution of the rc filter output signal: a) input signal frequency equal to 200 hz, b) equal to 1 khz, c) equal to 5 khz. references [1] r. bahati, h. gill, cyber-physical systems. the impact of control technology, open j. soc. sci. sci. res. publ, vol. 5, 2011, pp. 161–166. [2] m. bashendy, a. tantawy, a. erradi, intrusion response systems for cyber-physical systems: a comprehensive survey, comput. secur., 2022, p. 102984. doi: 10.1016/j.cose.2022.102984 [3] s. zanero, cyber-physical systems, computer, vol. 50, no. 4, 2017, pp. 14–16. doi: 10.1109/mc.2017.105 [4] w. wolf, cyber-physical systems, computer, vol. 42, no. 03, 2009, pp. 88–89. [5] f. cicirelli, a. guerrieri, g. spezzano, a. vinci, edge computing and social internet of things for large-scale smart environments development, ieee internet things j., vol. 5, no. 4, 2017, pp. 2557–2571. doi: 10.1109/jiot.2017.2775739 [6] m. prist, a. monteriù, e. pallotta, p. cicconi, a. freddi, f. giuggioloni, e. caizer, c. verdini, s. longhi, cyber-physical manufacturing systems: an architecture for sensor integration, production line simulation and cloud services, acta imeko, vol. 9, no. 4, 2020, pp. 39–52. doi: 10.21014/acta_imeko.v9i4.731 [7] f. hu, cyber-physical systems. taylor & francis group llc, 2014. [8] j. shi, j. wan, h. yan, h. suo, a survey of cyber-physical systems, int. conference on wireless communications and signal processing (wcsp), nanjing, china, 9-11 november 2011, pp. 1–6. doi: 10.1109/wcsp.2011.6096958 [9] b. scherer, hardware-in-the-loop test based non-intrusive diagnostics of cyber-physical systems using microcontroller debug ports, acta imeko, vol. 7, no. 1, 2018, pp. 27–35. doi: 10.21014/acta_imeko.v7i1.513 [10] p. s. matharu, a. a. ghadge, y. almubarak, y.
tadesse, jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring, acta imeko, vol. 11, no. 3, 2022, pp. 1-7. doi: 10.21014/acta_imeko.v11i3.1255 [11] f. gabellone, digital twin: a new perspective for cultural heritage management and fruition, acta imeko, vol. 11, no. 1, 2022, 7 pp. doi: 10.21014/acta_imeko.v11i1.1085 [12] m. y. serebryakov, i. s. moiseev, current trends in the development of cyber-physical interfaces linking virtual reality and physical system, conference of russian young researchers in electrical and electronic engineering (elconrus), saint petersburg, russian federation, 25-28 january 2022, pp. 419–424. doi: 10.1109/elconrus54750.2022.9755545 [13] h. lhachemi, a. malik, r. shorten, augmented reality, cyber-physical systems, and feedback control for additive manufacturing: a review, ieee access, vol. 7, 2019, pp. 50119–50135. doi: 10.1109/access.2019.2907287 [14] a. t. silvestri, v. bottino, e. caputo, f. bonavolontà, r. schiano lo moriello, a. squillace, d. accardo, innovative fusion strategy for mems redundant-imu exploiting custom 3d components, ieee 9th int. workshop on metrology for aerospace (metroaerospace), 2022, pp. 644–648. doi: 10.1109/metroaerospace54187.2022.9856222 [15] a. t. silvestri, m. perini, p. bosetti, a. squillace, exploring potentialities of direct laser deposition: thin-walled structures, key engineering materials, vol. 926, 2022, pp. 206–212. doi: 10.4028/p-82vyug [16] a. t. silvestri, i. papa, f. rubino, a. squillace, on the critical technological issues of cff: enhancing the bearing strength, materials and manufacturing processes, 2021. doi: 10.1080/10426914.2021.1954195 [17] y. lu, cyber physical system (cps)-based industry 4.0: a survey, journal of industrial integration and management, vol. 2, no. 03, 2017, p. 1750014. doi: 10.1142/s2424862217500142 [18] b. dafflon, n. moalla, y.
ouzrout, the challenges, approaches, and used techniques of cps for manufacturing in industry 4.0: a literature review, the international journal of advanced manufacturing technology, vol. 113, no. 7, 2021, pp. 2395–2412. doi: 10.1007/s00170-020-06572-4 [19] g. de alteriis, v. bottino, c. conte, g. rufino, r. s. lo moriello, accurate attitude inizialization procedure based on mems imu and magnetometer integration, ieee 8th int. workshop on metrology for aerospace (metroaerospace), 2021, pp. 1–6. doi: 10.1109/metroaerospace51421.2021.9511679 [20] s. surdo, a. zunino, a. diaspro, m. duocastella, acoustically shaped laser: a machining tool for industry 4.0, acta imeko, vol. 9, no. 4, 2020, pp. 60-66. doi: 10.21014/acta_imeko.v9i4.740 [21] g. mariniello, t. pastore, a. bilotta, d. asprone, e. cosenza, seismic pre-dimensioning of irregular concrete frame structures: mathematical formulation and implementation of a learn-heuristic algorithm, journal of building engineering, vol. 46, 2022. doi: 10.1016/j.jobe.2021.103733 [22] g. mariniello, t. pastore, d. asprone, e. cosenza, layout-aware extreme learning machine to detect tendon malfunctions in prestressed concrete bridges using stress data, autom. constr., vol. 132, 2021. doi: 10.1016/j.autcon.2021.103976 [23] g. raja, s. senthilkumar, s. ganesan, r. edhayachandran, g. vijayaraghavan, a. k. bashir, av-cps: audio visual cognitive processing system for critical intervention in autonomous vehicles, ieee int. conference on communications workshops (icc workshops), 2021, pp. 1–6. doi: 10.1109/iccworkshops50388.2021.9473647 [24] j. wang, z. cai, j. yu, achieving personalized k-anonymity-based content privacy for autonomous vehicles in cps, ieee trans. industr. inform., vol. 16, no. 6, 2020, pp. 4242–4251. doi: 10.1109/tii.2019.2950057 [25] c. conte, g. de alteriis, g. rufino, d. accardo, an innovative process-based mission management system for unmanned vehicles, ieee int.
workshop on metrology for aerospace (metroaerospace), 2020, pp. 377–381. doi: 10.1109/metroaerospace48742.2020.9160121 [26] j. xie, s. liu, x. wang, framework for a closed-loop cooperative human cyber-physical system for the mining industry driven by vr and ar: mhcps, comput. ind. eng., vol. 168, 2022, p. 108050. [27] k. kammerer, r. pryss, k. sommer, m. reichert, towards context-aware process guidance in cyber-physical systems with augmented reality, 4th int. workshop on requirements engineering for self-adaptive, collaborative, and cyber physical systems (resacs), 2018, pp. 44–51. [28] t. i. erdei, z. molnár, n. c. obinna, g. husi, a novel design of an augmented reality based navigation system & its industrial applications, acta imeko, vol. 7, no. 1, 2018, pp. 57–62. doi: 10.21014/acta_imeko.v7i1.528 [29] s. aheleroff, x. xu, r. y. zhong, y. lu, digital twin as a service (dtaas) in industry 4.0: an architecture reference model, advanced engineering informatics, vol. 47, 2021, p. 101225. doi: 10.1016/j.aei.2020.101225 [30] faro, technical specification sheet. online [accessed 28 november 2022] https://it-knowledge.faro.com/hardware/faroarm_and_scanarm/faroarm_and_scanarm/technical_specification_sheet_for_the_edge_faroarm_and_scanarm [31] artec3d, geomagic wrap.
online [accessed 28 november 2022] https://www.artec3d.com/3d-software/geomagic-wrap
[32] iso/iec 20922:2016, information technology – message queuing telemetry transport (mqtt) v3.1.1, 2016. online [accessed 28 november 2022] https://www.iso.org/standard/69466.html [33] oasis, mqtt version 3.1.1, 2015. online [accessed 28 november 2022] http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/ [34] scpi consortium, standard commands for programmable instruments (scpi), volume 1: syntax and style, usa, may 1999.
state-of-the-art and perspectives of underwater optical wireless communications
acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, pp. 25 - 35

acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 25

fabio leccese1, giuseppe schirripa spagnolo2
1 dipartimento di scienze, università degli studi “roma tre”, via della vasca navale n. 84, 00146 roma, italy
2 dipartimento di matematica e fisica, università degli studi “roma tre”, via della vasca navale n. 84, 00146 roma, italy

section: research paper

keywords: underwater communication; visible light communications; optical wireless communication; bidirectional communication; led; photo detector

citation: fabio leccese, giuseppe schirripa spagnolo, state-of-the art and perspectives of underwater optical wireless communications, acta imeko, vol. 10, no.
4, article 8, december 2021, identifier: imeko-acta-10 (2021)-04-08

section editor: silvio del pizzo, university of naples 'parthenope', italy

received march 7, 2021; in final form june 17, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: giuseppe schirripa spagnolo, e-mail: giuseppe.schirripaspagnolo@uniroma3.it

1. introduction

underwater wireless communication (uwc) has many potential applications in the military, industrial and scientific research fields, but practical applications require significant data bandwidth [1]-[3]. generally, underwater wireless communication takes place via acoustic waves because of their relatively low attenuation, and they are the normal choice in almost all commercially available submarine transmission systems. unfortunately, acoustic systems have low bandwidth and high latency, so they are not suitable for bandwidth-hungry underwater applications such as image and real-time video transmission. however, as acoustic transmission is the only technology capable of supporting large transmission distances, extensive studies are being conducted to improve the performance of acoustic communication channels [4]-[8]. moreover, acoustic underwater communication is susceptible to malicious attacks [9]. consequently, a complementary technology capable of achieving secure broadband underwater communications is required.

wireless communication via radio frequency (rf) waves is the most widespread technology in terrestrial communications. sadly, this technology is not suitable for underwater applications: in water, radio frequency waves are strongly attenuated, especially in seawater, where the propagation medium is highly conductive [10].
for short distance communications, underwater wireless optical communication (uowc) can be a viable alternative to that achievable via acoustic waves. this technology, even with all its limitations, can be of great use in specific applications. although it is not yet widely used, this article provides the state of the art in wireless underwater optical communication. table 1 shows a comparison of acoustic vs. optical underwater wireless communication technologies.

abstract
in scientific, military, and industrial sectors, the development of robust and efficient submarine wireless communication links is of enormous interest. underwater wireless communications can be carried out through acoustic, radio frequency (rf), and optical waves. underwater optical communication is not a new idea, but it has recently been reconsidered because seawater exhibits a window of reduced absorption both in the visible spectrum and in long-wavelength uv light (uv-a). compared to its bandwidth-limited acoustic counterpart, underwater optical wireless communications (uowcs) can support higher data rates at low latency levels. underwater wireless communication networks are important in ocean exploration, military tactical operations, and environmental and water pollution monitoring. given the rapid development of uowc technology, documents are still needed showing the state of the art and the progress made by the most current research. this paper aims to examine current technologies, and those potentially available soon, for underwater optical wireless communication and to propose a new perspective using uv-a radiation.

optical communication is defined as communication at a distance using light to carry information. an optical fibre is the most common type of channel for optical communications, as well as the only medium that can meet the need for enormous bandwidth in the present information age. replacing the channel
from an optical fibre to free space, we achieve free-space optical communications [11].

visible light communication (vlc) is a communication technology in which the visible spectrum is modulated to transmit data. given the limited propagation distance of light emitting diodes (leds), vlc is a technology for short-range communication. pang et al. [12] first introduced the concept of using leds for wireless communication. vlc technology was developed to provide both lighting and data transfer with the same infrastructure [13]-[16]. vlc techniques transmit information wirelessly by rapidly pulsing visible light using leds; generally, the information is overlaid on the led light without introducing flickering. the exhaustion of the low-frequency bands in coping with the exponential growth of high-speed wireless access is another reason for exploring new technologies. the visible light spectrum is unlicensed and the hardware readily available, so it can be used for data transmission. furthermore, the exponential improvement in high-power light emitting diodes is an enabler for high data rate vlc networks.

as well as vlc, underwater optical wireless communication (uowc) systems are currently being studied [17]-[21]. in uowc systems, the light sources are leds or laser diodes (lds). both are extremely interesting: lds for their higher modulation bandwidth with respect to leds; leds, compared to lds, for their higher energy efficiency, lower cost, and longer life. leds seem more suitable for applications where a medium transmission bit rate is required. compared to acoustic communication, uowc has great potential: it enables communications with high bit rate and very low latency. currently, however, the performance of uowc systems limits them to short-range applications [22].
submarine optical communication systems are starting to be commercially available [23]-[25]. in the literature, numerous studies have addressed the problem of optical transmission in water through experiments. unfortunately, there are objective difficulties in carrying out experiments in a real underwater environment, so most of the experimental work is done within a controlled laboratory setup. in such configurations, sunlight, which induces noise and, in some cases, saturates the light detectors, is neglected. in-depth studies are therefore still necessary to create systems that can be used in real operational scenarios, and further research is needed to allow submarine optical transmission even over medium distances (greater than 500 m).

table 1 shows the performance features (benefits, limitations, and requirements) of acoustic and optical underwater communication [26], while figure 1 compares the performance of acoustic and uowc systems in terms of transmission range and data rate (bandwidth) [27].

to provide a basic overview, this paper highlights the perspectives of uowc technologies; the focus is on current technologies, and those potentially available in the next few years, for uowc. the military sector is one in which underwater optical wireless communication finds important applications, thanks to its intrinsic security and the availability of higher bandwidth. one possible application is communication between divers. during military incursions with divers, it is very important for the command to have secure communications that are difficult to locate; underwater acoustic communications are generally easily detectable. in this scenario, uowc is an excellent technology, with the advantage of being much more difficult to intercept. this application does not require long range or high bandwidth. figure 2 shows a typical uowc military application scenario. another scenario is the one shown in figure 3.
it is a dynamic positioning buoy [28], capable of communicating with a satellite and/or a terrestrial station and with an optical surveillance station positioned on the seabed. the surveillance station can be powered by nuclear batteries [29] and can check in real time, through digital optical correlation [30]-[32], whether something intrudes into the monitored area. in case of a suspect object (e.g., a submarine), the image and a related alert are sent back to the buoy and, from it, to the coastal ground station via satellite link. this application can grant very accurate underwater video surveillance. in addition, by using uv light for underwater optical wireless communication, the intruder has a hard time realising that he has been detected.

figure 1. comparison of the performance of acoustic and uowc systems, based on transmission range and data speed (bandwidth).

table 1. comparison of underwater wireless communication technologies.

parameter     acoustic            optical
attenuation   0.1 - 4 db/km       0.39 db/m (ocean); 11 db/m (turbid)
speed         1500 m/s            2.3 × 10^8 m/s
data rate     kbps                gbps
latency       high                low
distance      > 100 km            ≤ 500 m
bandwidth     1 khz - 100 khz     150 mhz
frequency     10 - 15 khz         5 × 10^14 hz
power         10 w                mw - w

figure 2. typical military application scenarios of uowc.

in uowc, the link between transmitter and receiver can be mainly of two types [20],[26]:
• point-to-point line-of-sight (point-to-point los);
• diffuse line-of-sight (diffuse los) configuration.
the point-to-point los configuration, shown in figure 4 (a), uses "collimated" light sources. in this arrangement, the receiver is positioned so as to detect the light beam pointed directly in the direction fixed by the transmitter. in contrast, the diffuse los configuration uses light sources with a large divergence angle, which allows greater flexibility in the reciprocal positioning of the transmitter and receiver, see figure 4 (b).
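the latency gap in table 1 follows directly from the propagation speeds of the two carriers. a minimal sketch of the arithmetic, using the speeds listed in table 1 (the 100 m link length is an illustrative choice, not a value from the paper):

```python
# One-way propagation delay for acoustic vs. optical carriers,
# using the propagation speeds listed in Table 1.
SPEED_ACOUSTIC = 1.5e3   # m/s, speed of sound in seawater (Table 1)
SPEED_OPTICAL = 2.3e8    # m/s, speed of light in seawater (Table 1)

def one_way_delay(distance_m: float, speed_m_s: float) -> float:
    """Return the propagation delay in seconds over `distance_m`."""
    return distance_m / speed_m_s

d = 100.0  # metres, illustrative link length
print(f"acoustic: {one_way_delay(d, SPEED_ACOUSTIC) * 1e3:.2f} ms")
print(f"optical:  {one_way_delay(d, SPEED_OPTICAL) * 1e9:.2f} ns")
```

over the same 100 m path the optical carrier arrives roughly five orders of magnitude sooner, which is why table 1 labels the acoustic latency "high" and the optical latency "low".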
especially in military applications, where it is necessary to communicate between moving units, the diffuse line-of-sight (diffuse los) configuration must be used.

theoretically, in a uowc system we could use any light source as a transmitter [33]. however, the limitations of power, size and switching speed imposed by the practical use of the system restrict the selection to two possible choices: laser diodes (lds) and light emitting diodes (leds).

laser diodes make it possible to develop uowc systems with a high modulation bandwidth and a high transmission power density. they generally have small angles of divergence and therefore a strong directionality, and they are used in point-to-point los links. in most underwater communications between moving objects, it is not easy to achieve perfect alignment between transmitter and receiver; in this scenario, a realistic application of laser diodes requires beam expansion or active alignment systems. this greatly complicates the design of the system, and such systems are not very economical and often not very reliable.

nowadays, high-brightness leds are available, and they represent a valid alternative to laser diodes. the use of leds as light sources for uowc systems offers many advantages, such as long life and low energy consumption. in addition, leds with large divergence angles make alignment problems less stringent; generally, leds are used in the diffuse los configuration. by means of leds, it is possible to create simple and compact uowc systems. unfortunately, due to the large divergence angles and low modulation bandwidth, leds are only applicable for short-range transmissions and for applications where relatively low transmission speeds are required.

as receivers, a variety of sensors is potentially usable in uowc [34]-[36]: photodiodes, pin photodiodes, avalanche photodiodes, and silicon photomultipliers.

2.
optical transmission in the aquatic medium

beer's law is commonly used to relate the absorption of diffuse light to the properties of the medium through which the light is travelling. when applied to a liquid medium, it states that the irradiance e decreases exponentially as a function of wavelength λ and distance r [37],[38]. mathematically, we can write

e(λ, r) = e0 · exp[−kd(λ) · r] ,   (1)

where e0 is the initial irradiance (in watts per square metre). in a medium with attenuation coefficient kd(λ), after travelling a distance r, the residual irradiance is e(λ, r). in (1), we assume that kd is constant along r.

the aquatic medium contains many different elements, dissolved or suspended, and these components cause the spectral attenuation of the radiation. in particular, the concentration of chlorophyll is a very significant parameter for the use of optical radiation in submarine communications [39]-[41]; for this reason, a relationship between the attenuation coefficient kd and the chlorophyll concentration has been determined. underwater, light shows the least attenuation in the blue/green wavelength range. however, although light attenuation in seawater is minimal in the blue-green region, the optimal wavelength for an underwater optical link is conditioned by the inherent optical properties of the water, which can vary largely between geographic places. generally, coastal and oceanic waters are classified according to the jerlov water types [42]-[45].

figure 3. underwater video surveillance scenario.

figure 4. examples of different underwater optical wireless link configurations.

for jerlov coastal water types 1c, 3c, 5c, 7c, 9c and oceanic water type iii, diffuse attenuation coefficients are shown in figure 5. observing figure 5, we see that optical signals are absorbed in water; however, seawater exhibits relatively little absorption in the blue/green region of the visible spectrum.
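equation (1) can be turned directly into code. a minimal sketch, in which the kd values are illustrative round numbers for clear vs. turbid water rather than figures taken from the jerlov tables:

```python
import math

def residual_irradiance(e0: float, kd: float, r: float) -> float:
    """Beer's law, eq. (1): irradiance left after a path of length r.

    e0 -- initial irradiance (W/m^2)
    kd -- diffuse attenuation coefficient (1/m), assumed constant along r
    r  -- path length (m)
    """
    return e0 * math.exp(-kd * r)

# Illustrative Kd values (1/m) in the blue/green band; rough orders of
# magnitude for clear vs. turbid water, not values from a specific table.
for label, kd in [("clear ocean", 0.05), ("turbid harbour", 2.0)]:
    e = residual_irradiance(100.0, kd, 10.0)  # 10 m path, 100 W/m^2 start
    print(f"{label}: {e:.3g} W/m^2 after 10 m")
```

the exponential makes the contrast stark: over the same 10 m path, the clear-water case keeps most of the irradiance while the turbid case loses essentially all of it, which is why the water type dominates the link design.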
therefore, using wavelengths in this spectral region, high-speed connections can be attained, depending on the type of water. the minimum attenuation is centred near 460 nm in clear ocean waters and shifts to longer wavelengths for coastal waters. the seawater light transmission model is shown in figure 6. the optical power reaching the receiver can be written as [47]-[50]:

p_rx = p_tx · η_tx · η_rx · exp[−kd(λ) · z / cos θ] · (a_rx · cos θ) / (2π · z² · (1 − cos θ0)) ,   (2)

where p_tx is the transmitted power, η_tx and η_rx are the optical efficiencies of the tx and rx correspondingly, kd(λ) is the attenuation coefficient, z is the perpendicular distance between the tx plane and the rx plane, θ0 is the tx beam divergence angle, θ is the angle between the perpendicular to the rx plane and the tx-rx trajectory, and a_rx is the receiver aperture area.

the transmitted power is limited by the energy available to the transmitter apparatus, and it must be as small as possible; in this way, it is possible to have a low-power supply, which is very useful in underwater applications. equation (2) shows that, for the same energy used by the transmitter, increasing the transmission distance requires, among other things, improving the efficiency of the transmitter and of the receiver. obviously, the transmission distance can also be increased by using reception systems capable of capturing, theoretically, even a single photon.

as for light sources, technology offers increasingly efficient and reliable devices: current light sources (laser diodes and leds) have excellent efficiency, high reliability, low power consumption and low cost. on the contrary, as far as the receiver is concerned, there is still a lot of research work to be done. generally, the light detected by the receiver is weak and disturbed by noise, especially if the transmission is not over a very short distance.
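the link budget of equation (2) can be sketched as a small function. all numeric values in the example call are illustrative assumptions (a 1 w source, 90 % optics, kd = 0.15 1/m, a 20 m on-axis link with a 1 cm² aperture and 30° divergence half-angle), not figures from the paper:

```python
import math

def received_power(p_tx, eta_tx, eta_rx, kd, z, theta, theta0, a_rx):
    """Received optical power per the line-of-sight budget of eq. (2).

    p_tx           -- transmitted optical power (W)
    eta_tx, eta_rx -- transmitter / receiver optical efficiencies
    kd             -- diffuse attenuation coefficient (1/m)
    z              -- perpendicular Tx-plane to Rx-plane distance (m)
    theta          -- angle between the Rx normal and the Tx-Rx path (rad)
    theta0         -- Tx beam divergence angle (rad)
    a_rx           -- receiver aperture area (m^2)
    """
    path_loss = math.exp(-kd * z / math.cos(theta))
    geometry = (a_rx * math.cos(theta)) / (
        2 * math.pi * z ** 2 * (1 - math.cos(theta0)))
    return p_tx * eta_tx * eta_rx * path_loss * geometry

# Illustrative, assumed numbers: 1 W LED, 90 % optics on each side,
# Kd = 0.15 1/m, 20 m range, on-axis receiver (theta = 0) with a
# 1 cm^2 aperture and a 30-degree divergence half-angle.
p = received_power(1.0, 0.9, 0.9, 0.15, 20.0, 0.0, math.radians(30), 1e-4)
print(f"received power: {p:.3e} W")
```

two effects visible in the code match the discussion above: the exponential path loss dominates at long range, and the geometric term shows the diffuse-los trade-off, since a larger divergence angle θ0 spreads the same power over a larger solid angle and lowers p_rx.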
for this reason, new error-corrected modulation systems that are relatively immune to noise must be studied, especially if we want to use submarine optical transmission at high bit rates.

3. basic components of uowc

a uowc link can be schematised in three parts: the transmitter unit (tx), the underwater channel and the receiver module (rx). the schematic in figure 7 shows the components of a typical system.

3.1. the transmitter (tx)

in uowc systems, the function of the transmitter is to transform the electrical signal into an optical one, projecting carefully aimed light pulses into the water. as already mentioned, the optical light sources are led- or ld-based [51]-[58]. the transmitter consists of four principal components: a modulator and pulse-shaping circuit; a driver circuit, which converts the electrical signal into an optical signal suitable for transmission; a light source; and a lens to realise the optical link configuration.

a critical parameter in optical transmission is the modulation scheme used. different modulation schemes can be used in uowc systems, each varying in complexity, implementation cost, bandwidth, power consumption, noise robustness, and bit error rate (ber). typical rf modulation schemes are not applicable in vlc. recent uowc studies have tried to characterise the performance of communication systems using different modulation techniques to increase both the data transmission rate and the link distance [59]-[66]. table 2 summarises the main modulation schemes that can be used in uowc [67].

3.2. the receiver (rx)

the receiver has the task of capturing the transmitted optical signal and transforming it into an electrical signal. in many applications, it is important to select the specific wavelength that impacts on the light detector [86].

figure 5. diffuse light attenuation coefficient (kd) vs. wavelength for jerlov water types. data from tables xxvi and xxvii in ref. [46].

the light
coming on the receiver should have no noise introduced by sunlight or by the presence of other light sources [87]. to mitigate this problem, the transmitted wavelength band is selected by using a narrow optical band-pass filter [88],[89]. when the receiver receives the transmitted optical signal, it transforms it into an electrical signal by using photodetectors. there are many different types of photodetectors in common use, e.g., photodiodes. these devices, thanks to their small size, suitable materials, high sensitivity, and fast response time, are commonly used in optical communication applications. there are two types of photodiodes: the pin photodiode and the avalanche photodiode (apd). unfortunately, due to the high detection threshold and the high noise intensity linked to the transimpedance amplifier, which limit their practical application, photodiodes are not advisable for long-distance uowc systems. with traditional detection devices and methods, due to the exponential attenuation of the water, the optical communication distance is less than 100 m [43],[90]. this constraint severely limits the performance of uowc systems, especially for the management of autonomous underwater vehicles (auvs) and remotely operated vehicles (rovs) [91]-[95].

figure 6. seawater light transmission model.

figure 7. schematic of a typical uowc link. the transmitter (tx) is composed of a modulator, optical driver, light source, and projection lens. the receiver (rx) is made of an optical bandpass filter, photodetector, low-noise electronics and demodulator.

recent research is focused on the possible application of single-photon avalanche diode (spad) technology to uowc systems. avalanche photodiodes have a structure similar to that of pin photodiodes but operate at a much higher reverse bias. this physical characteristic allows a single photon to produce a significant avalanche of electrons.
this mode of operation is called single-photon avalanche mode, or geiger mode [96]-[98]. the great advantage of spads is that they do not need a transimpedance amplifier. this intrinsically means that optical communications realised with this kind of diode can provide high detection capability, high accuracy, and low-noise measurements [99]-[108].

4. underwater communications by uv-a radiation

in the literature, almost all studies do not consider the presence of sunlight, yet it is inevitable that uowc systems are exposed to it. furthermore, it should be noted that the minimum of the optical absorption spectrum of seawater aligns with the maximum amplitude of the solar spectrum, see figure 8 [109]. generally, solar intensity decreases with depth. by examining how light is absorbed in water, see figure 5, we see that the best wavelengths to use in uowc are 450-500 nm for clear waters and 570-600 nm for coastal waters. this same attenuation behaviour also holds for the solar spectrum; figure 9 shows how sunlight penetrates seawater. in the presence of sunlight, the receivers see very strong white noise and can often go into saturation. this problem is particularly important with spads. of course, in real applications, the viewing direction of the photosensor is also important: an upward-facing detector is exposed to sunlight a few orders of magnitude stronger than when facing downward or to the side. all this, in many practical applications, makes it difficult to use the visible light spectrum. for this reason, submarine communication systems that use uv-a band communication channels are extremely interesting.

we must also observe two other important characteristics of optical communication that uses the near ultraviolet: (1) this communication channel is hard to identify and difficult to intercept, which makes it particularly attractive for military applications; (2) using uv radiation makes it easier to maintain alignment between transmitter and receiver [111],[112].
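geiger-mode (spad) reception is essentially photon counting, and the sunlight problem above is then a background-count problem. a minimal sketch of on-off-keyed detection by thresholding poisson-distributed counts per bit slot; the mean signal and background rates are illustrative assumptions, not measured spad figures:

```python
import math
import random

random.seed(1)

def poisson(lam: float) -> int:
    """Draw a Poisson-distributed photon count (Knuth's method;
    adequate for the small means used here)."""
    l = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def spad_counts(mean_signal: float, mean_background: float, bit: int) -> int:
    """Counts in one bit slot: signal photons only when a '1' is sent,
    plus background counts (sunlight / dark counts) either way."""
    return poisson(mean_background + (mean_signal if bit else 0.0))

bits = [random.randint(0, 1) for _ in range(1000)]
# Decide '1' when the slot count exceeds a fixed threshold of 5 counts.
decoded = [1 if spad_counts(20.0, 1.0, b) > 5 else 0 for b in bits]
errors = sum(d != b for d, b in zip(decoded, bits))
print(f"bit errors: {errors}/1000")
```

with a weak background (mean 1 count per slot) the threshold separates the two hypotheses almost perfectly; raising the background rate toward the signal rate, as direct sunlight does, collapses that separation, which is the quantitative reason for moving the carrier into the uv-a band where little sunlight penetrates.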
an important application of uowc is underwater diver-to-diver communication. there are commercially available audio interphones that work quite well.

table 2. summary of uowc modulation schemes.

modulation | drawbacks | advantages | ref.
ook-nrz | low energy efficiency | simple and low cost | [68]-[70]
ppm | high requirements on timing; low bandwidth utilisation rate; more complex transceivers | high power efficiency | [71]-[75]
pwm | low power efficiency | very good noise immunity | [20], [58]
dpim | error spread in demodulation; complex modulation devices | high bandwidth efficiency | [58], [76], [77]
psk | high implementation complexity; high cost | high receiver sensitivity | [78]-[80]
qam | high implementation complexity; high cost | high system spectral efficiency; better noise rejection | [80]-[82]
polsk | short transmission distance; low data rate | high tolerance to underwater turbulence | [83], [84]
sim | complex modulation/demodulation devices; poor average power efficiency | increased system capacity; low cost | [85], [86]

figure 8. solar irradiance and oceanic water diffuse light attenuation coefficient. the curves show that the minimum of the optical absorption of water is aligned with the maximum of solar radiation. since sunlight is an important source of noise, the best ratio of propagated signal to solar radiation is obtained using radiation centred around 385 nm.

however, in military tactical applications (such as raids by military divers), these systems have the drawback of being easily identifiable. in order to understand whether a communication system between divers that is not easily identifiable is feasible, some preliminary studies have been carried out. in particular, we tried to understand whether it is possible to create a compact uowc system that requires few energy resources.
for this purpose, we tried to verify the feasibility of an optical communication realised through a led-to-led link.

due to the growing demand for high-power uv leds for commercial applications, cost-effective and efficient leds that emit in the near ultraviolet (uv-a) are currently available. leds with emission in the range from 350 to 400 nm (uv-a) are light sources that allow one to work just beyond visible light, i.e. in that region of the spectrum where most sunlight does not penetrate the water. this part of the spectrum is still quite close to the attenuation minimum in water and is therefore usable in uowc systems. in addition to being excellent light sources, leds can be used both as temperature sensors [113] and as light detectors [114],[115]; as detectors, leds sense a narrow band of wavelengths close to the one they emit when used as a source.

in our experiment, we used a bivar uv5tz-385-30 led as a transmitter and a bivar uv5tz-390-30 led as a receiver [116]. the light intensities vs. wavelength of the leds used are shown in figure 10. the two leds were inserted in a tank filled with real seawater (water taken from the tyrrhenian coast, anzio, italy), placed at a distance of 50 cm and facing each other. the led used as a transmitter was driven by the circuit shown in figure 11. this led driver has a restricted baud rate, mainly because of the limited switching speed of silicon devices; a maximum data rate of 100 kbps can be achieved with it. in any case, this data transmission speed is more than enough to implement an excellent audio connection. obviously, if the driver is made with transistors in gan technology, data transmission speeds higher than 1 mbps can be obtained [118]. figure 12 shows the rx led driver circuit, and figure 13 shows the signal received by the led used as light detector.
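the claim that 100 kbps is more than enough for an audio connection can be checked with a quick throughput budget. the sample rate and resolution below are standard telephony figures assumed for illustration, not parameters of the experiment:

```python
def audio_bitrate(sample_rate_hz: int, bits_per_sample: int,
                  channels: int = 1) -> int:
    """Raw (uncompressed) PCM bit rate in bit/s."""
    return sample_rate_hz * bits_per_sample * channels

LINK_RATE = 100_000  # bit/s, the driver's stated maximum data rate

# Telephone-quality mono PCM: 8 kHz sampling, 8 bits per sample
# (standard telephony figures, assumed here for illustration).
needed = audio_bitrate(8_000, 8)
print(f"audio needs {needed} bit/s of the {LINK_RATE} bit/s link "
      f"({100 * needed / LINK_RATE:.0f} % utilisation)")
assert needed <= LINK_RATE
```

telephone-quality speech fits in the link with margin to spare for framing and error correction, which supports the statement above; speech codecs would lower the required rate further still.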
the transmitter led is biased at 25 ma and switched at a frequency of approximately 80 khz.

figure 9. (a) spectral irradiance of sunlight at sea level; (b) light penetration in the open ocean; (c) light penetration in coastal water [110].

figure 10. spectral intensity of the leds used as tx and rx.

figure 11. tx led driver circuit using the mic3289 pwm boost switching regulator [117].

the received signal, after traversing 50 cm of seawater, is "good". obviously, further studies are needed to implement and characterise an underwater uv-a communication link realised using a led-to-led connection.

5. conclusions

recently, many studies have been conducted on using uowc technology to transmit information securely at high data rates in the underwater environment. today, uowc systems usable in real operating conditions (with some exceptions) are not yet available, so a lot of research in this area has yet to be done. in particular:
• misalignment between transmitter and receiver is currently an inevitable phenomenon for a uowc link. to limit its impact, research is underway on the development of smart transceivers; however, the need to develop robust and reliable uowc transceivers that do not require rigorous alignment is urgent.
• innovative modulation and coding schemes must be designed that can adapt to the characteristics of the underwater environment.
• since most uowc systems are integrated into a battery-powered platform, energy efficiency is important; the systems must be designed for high energy efficiency.
• differently coloured light sources could be used at the same time to increase the data transfer speed and/or allow simultaneous use by multiple users.
• new underwater communication channel models must be developed. when environmental conditions deviate from ideality, the light signal degrades rapidly.
it is essential to study the propagation of the light beam with models that simulate real conditions as much as possible (even in "difficult" environments), in order to allow the optimisation of transmission and reception techniques, both in terms of the transmitter and of the sensor used as receiver.

furthermore, we have presented a preliminary study to verify the feasibility of a simple, economical and reliable communication system that uses uv-a radiation. the possibility of using near-ultraviolet radiation should favour the development of uowc systems that can also be used in the presence of solar radiation.

finally, almost all the studies available in the literature are conducted by simulation or by laboratory experiments; studies in a real marine environment are needed. the interest in uowc lies mainly outside the academic field. in fact, the possibility of using uowc is based on future military applications for secure underwater telephones (uwts), necessary for allowing secure communications between vessels and submarines, considering the possibility of using both direct and diffuse light channels. in addition, the use of point-to-point optical communications can allow a better use of torpedoes, not specifically for their guidance, but for reporting sonar information back to the base station at a high rate, even in the case of non-wire-guided solutions.

references

[1] i. f. akyildiz, d. pompili, t. melodia, underwater acoustic sensor networks: research challenges, ad hoc networks 3(3) (2005), pp. 257-279. doi: 10.1016/j.adhoc.2005.01.004
[2] c. m. g. gussen, p. s. r. diniz, m. l. r. campos, w. a. martins, f. m. costa, j. n. gois, a survey of underwater wireless communication technologies, j. of commun. and info. sys. 31(1) (2016), pp. 242-255. doi: 10.14209/jcis.2016.22
[3] m. f. ali, d. n. k. jayakody, y. a. chursin, s. affes, s. dmitry, recent advances and future directions on underwater wireless communications, archives of computational methods in engineering, pp. 1-34,
2019. doi: 10.1007/s11831-019-09354-8
[4] e. demirors, g. sklivanitis, g. e. santagati, t. melodia, s. n. batalama, high-rate software-defined underwater acoustic modem with real-time adaptation capabilities, ieee access 6 (2018), pp. 18602-18615. doi: 10.1109/access.2018.2815026
[5] d. centelles, a. soriano-asensi, j. v. martí, r. marín, p. j. sanz, underwater wireless communications for cooperative robotics with uwsim-net, appl. sci. 9 (2019), 3526. doi: 10.3390/app9173526
[6] m. j. bocus, a. doufexi, d. agrafiotis, performance of ofdm-based massive mimo otfs systems for underwater acoustic communication, iet communications 14(4) (2020), pp. 588-593. doi: 10.1049/iet-com.2019.0376
[7] e. demirors, g. sklivanitis, g. e. santagati, t. melodia, s. n. batalama, high-rate software-defined underwater acoustic modem with real-time adaptation capabilities, ieee access 6 (2018), pp. 18602-18615. doi: 10.1109/access.2018.2815026

figure 12. rx led driver circuit using the ltc1050 operational amplifier [119].

figure 13. receiver output signal. rx implemented according to the circuit of figure 12.

acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 32
[8] d. centelles, a. soriano-asensi, j. v. martí, r. marín, p. j. sanz, underwater wireless communications for cooperative robotics with uwsim-net, appl. sci. 9(3526) (2019). doi: 10.3390/app9173526
[9] m. c. domingo, securing underwater wireless communication networks, ieee wireless communications 18(1) (2011), pp. 22-28. doi: 10.1109/mwc.2011.5714022
[10] x. che, i. wells, g. dickers, p. kear, x.
gong, re-evaluation of rf electromagnetic communication in underwater sensor networks, ieee commun. mag. 48, no 12, 2010, pp. 143–151. doi: 10.1109/mcom.2010.5673085 [11] m. a. khalighi, m. uysal, survey on free space optical communication: a communication theory perspective, ieee communications surveys & tutorials 16(4) (2014), pp. 2231-2258. doi: 10.1109/comst.2014.2329501 [12] g. pang, t. kwan, chi-ho chan, h. liu, led traffic light as a communications device, in proceedings 1999 ieee/ieej/jsai international conference on intelligent transportation systems (cat. no.99th8383), 1999, pp. 788-793. doi: 10.1109/itsc.1999.821161 [13] z. ghassemlooy, l. n. alves, s. zvanovec, m. a. khalighi, visible light communications: theory and applications, crc press, boca raton, fl, usa, 2017. doi: 10.1201/9781315367330-3 [14] a. al-kinani, c. x. wang, l. zhou, w. zhang, optical wireless communication channel measurements and models, ieee commun. surv. tutor. 2018, 20, 2018, pp. 1939–1962. doi: 10.1109/comst.2018.2838096 [15] s. u. rehman, s. ullah, p. h. j. chong, s. yongchareon, d. komosny, visible light communication: a system perspective overview and challenges, sensors vol.19, no. 5, 2019, 1153. doi: 10.3390/s19051153 [16] g. schirripa spagnolo, l. cozzella, f. leccese, s. sangiovanni, l. podestà, e. piuzzi, optical wireless communication and li-fi: a new infrastructure for wireless communication in saving energy era, 2020 ieee international workshop on metrology for industry 4.0 & iot, roma, italy, 2020, pp. 674-678. doi: 10.1109/metroind4.0iot48571.2020.9138180 [17] g. schirripa spagnolo, l. cozzella, f. leccese, underwater optical wireless communications: overview, sensors, 20, 2261, 2020. doi: 10.3390/s20082261 [18] g. schirripa spagnolo, l. cozella, f. leccese, a brief survey on underwater optical wireless communications, 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, october 5-7, 2020, pp. 79-84. 
online [accessed 01 september 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-15.pdf [19] h.m. oubei, c. shen, a. kammoun, e. zedini, k.h. park, x. sun, g. liu, c.h. kang, t.k. ng, n.s. alouini, light based underwater wireless communications, japanese journal of applied physics, 57, no. 8s2, 08pa06, 2018. doi: 10.7567/jjap.57.08pa06 [20] z. zeng, s. fu, h. zang, a survey of underwater optical wireless communications, ieee commun. surv. tutorials, 19, no. 1, pp. 204–238, 2017. doi: 10.1109/comst.2016.2618841 [21] b. cochenour, k. dunn, a. laux, l. mullen, experimental measurements of the magnitude and phase response of high-frequency modulated light underwater, appl. opt. 56(14) (2017), pp. 4019-4024. doi: 10.1364/ao.56.004019 [22] d. anguita, d. brizzolara, g. parodi, building an underwater wireless sensor network based on optical communication: research challenges and current results, 2009 third international conference on sensor technologies and applications, athens, greece, 2009, pp. 476-479. doi: 10.1109/sensorcomm.2009.79 [23] bluecomm underwater optical communication. online [accessed 01 september 2021] https://www.sonardyne.com/product/bluecomm-underwater-optical-communication-system/ [24] sa photonics neptune™ fso. online [accessed 01 september 2021] https://www.saphotonics.com/communications-sensing/optical-communications/ [25] shimadzu, mc100 underwater optical wireless communication modem. online [accessed 01 september 2021] https://www.shimadzu.com/news/g16mjizzgbhz3--y.html [26] h. kaushal, g. kaddoum, underwater optical wireless communication, ieee access 4 (2016), pp. 1518–1547. doi: 10.1109/access.2016.2552538 [27] hamamatsu, underwater optical communications. online [accessed 01 september 2021] https://www.hamamatsu.com/eu/en/applications/underwater-optical-communication/index.html [28] introduction to dynamic positioning (dp) systems.
online [accessed 01 september 2021] https://safety4sea.com/wp-content/uploads/2019/12/uscg-introduction-to-dynamic-position-systems-2019_12.pdf?__cf_chl_jschl_tk__=pmd_r1lflkph3zckqcoptpdg.khcd0hz2ggcbgdxtkvgyge-1630499591-0-gqntzgznajujcnbszqi9 [29] i. hamilton, n. patel, nuclear batteries for maritime applications, marine technology society journal 53(4) (2019), pp. 26-28. doi: 10.4031/mtsj.53.4.5 [30] h. ma, y. liu, correlation based video processing in video sensor networks, in international conference on wireless networks, communications and mobile computing 2, pp. 987-992, maui, hi, usa, 3-16 june 2005. doi: 10.1109/wirles.2005.1549547 [31] g. schirripa spagnolo, l. cozzella, f. leccese, phase correlation functions: fft vs. fht, acta imeko 8(1) (2019), pp. 87-92. doi: 10.21014/acta_imeko.v8i1.604 [32] m. darwiesh, a. f. el-sherif, h. s. ayoub, y. h. el-sharkawy, m. f. hassan, y. h. elbashar, hyperspectral laser imaging of underwater targets, j. opt 47, 2018, 553. doi: 10.1007/s12596-018-0493-7 [33] m. kong, y. chen, r. sarwar, b. sun, z. xu, j. han, j. chen, h. qin, j. xu, underwater wireless optical communication using an arrayed transmitter/receiver and optical superimposition-based pam-4 signal, opt. express 26, no. 3, 2018, pp. 3087-3097. doi: 10.1364/oe.26.003087 [34] s. donati, photodetectors: devices, circuits and applications, wiley-ieee press; 2nd edition (january 7, 2021) [35] s. gundacker, a. heering, the silicon photomultiplier: fundamentals and applications of a modern solid-state photon detector, phys. med. biol. 65, 2020, 17tr01. doi: 10.1088/1361-6560/ab7b2d [36] m. a. khalighi, h. akhouayri, s. hranilovic, silicon-photomultiplier-based underwater wireless optical communication using pulse-amplitude modulation, journal of oceanic engineering 45(4), 2020, pp. 1611-1621. doi: 10.1109/joe.2019.2923501 [37] h.r.
gordon, can the lambert-beer law be applied to the diffuse attenuation coefficient of ocean water?, limnology and oceanography, 34(8) (1989), pp. 1389-1409. doi: 10.4319/lo.1989.34.8.1389 [38] f. campagnaro, m. calore, p. casari, v. sanjuan calzado, g. cupertino, c. moriconi, m. zorzi, measurement-based simulation of underwater optical networks, oceans 2017 aberdeen, aberdeen, 2017, pp. 1-7. doi: 10.1109/oceanse.2017.8084671 [39] c.f. bohren, d.r. huffman, absorption and scattering of light by small particles, wiley: new york, ny (usa), 1988. doi: 10.1002/9783527618156 [40] f. a. tahir, b. das, m. f. l. abdullah, m. s. m. gismalla, design and analysis of variation in chlorophyll and depth for open
ocean underwater optical communication, wireless personal communications, pp. 1-19, 2020. doi: 10.1007/s11277-020-07275-5 [41] s. k. sahu, p. shanmugam, a study on the effect of scattering properties of marine particles on underwater optical wireless communication channel characteristics, in oceans 2017: aberdeen uk. doi: 10.1109/oceanse.2017.8084720 [42] n. g.
jerlov, irradiance, ch. 10 in optical oceanography (elsevier, 1968), pp. 115–132. doi: 10.1016/s0422-9894(08)70929-2 [43] j. w. giles, i. n. bankman, underwater optical communications systems. part 2: basic design considerations, milcom 2005 - 2005 ieee military communications conference, atlantic city, nj, usa, 2005, vol. 3, pp. 1700-1705. doi: 10.1109/milcom.2005.1605919 [44] j. sticklus, p. a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters, ieee journal of oceanic engineering, 44, no. 2, pp. 535-547, april 2019. doi: 10.1109/joe.2018.2816838 [45] m.g. solonenko, c.d. mobley, inherent optical properties of jerlov water types, applied optics, 54, no. 17, 2015, pp. 5392-5401. doi: 10.1364/ao.54.005392 [46] n. g. jerlov, marine optics, elsevier oceanography series, 276, 1976. doi: 10.1016/s0422-9894(08)70795-5 [47] l. k. gkoura, g. d. roumelas, h. e. nistazakis, h. g. sandalidis, a. vavoulas, a. d. tsigopoulos, g. s. tombras, underwater optical wireless communication systems: a concise review, in: turbulence modelling approaches - current state, development prospects, applications, konstantin volkov (ed.), intechopen, 26 july 2017. doi: 10.5772/67915 [48] r. a. khalil, m. i. babar, n. saeed, t. jan, h. s. cho, effect of link misalignment in the optical-internet of underwater things, electronics 9(4) (2020), 646. doi: 10.3390/electronics9040646 [49] s. arnon, d. kedar, non-line-of-sight underwater optical wireless communication network, j. opt. soc. am. a 26(3) (2009), pp. 530–539. doi: 10.1364/josaa.26.000530 [50] s. arnon, underwater optical wireless communication network, optical engineering, 49, no. 1, 015001, january 2010. doi: 10.1117/1.3280288 [51] t. wiener, s. karp, the role of blue/green laser systems in strategic submarine communications, ieee trans. commun. 28, 1980, pp. 1602–1607. doi: 10.1109/tcom.1980.1094858 [52] c. shen, y. guo, h. m. oubei, t. k. ng, g. liu, k. h. park, k. t. ho, m. s. alouini, b.s.
ooi, 20-meter underwater wireless optical communication link with 1.5 gbps data rate, opt. express, 24, 2016, pp. 25502–25509. doi: 10.1364/oe.24.025502 [53] t. wu, y. chi, h. wang, c. tsai, g. lin, blue laser diode enables underwater communication at 12.4 gbps, sci. rep., 7, 2017, 40480. doi: 10.1038/srep40480 [54] p. tian, x. liu, s. yi, y. huang, s. zhang, x. zhou, l.; hu, l. zheng, r. liu, high-speed underwater optical wireless communication using a blue gan-based micro-led, opt. express 25 (2017), 1193. doi: 10.1364/oe.25.001193 [55] j. sticklus, p.a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters, ieee j. ocean. eng., 44, 2018, pp. 535–547. doi: 10.1109/joe.2018.2816838 [56] l. grobe, a. paraskevopoulos, j. hilt, d. schulz, f. lassak, f. hartlieb, c. kottke, v. jungnickel, k.d. langer, high-speed visible light communication systems, ieee commun. mag., 51, 2013, pp. 60–66. doi: 10.1109/mcom.2013.6685758 [57] k. suzuki, k. asahi, a. watanabe, basic study on receiving light signal by led for bidirectional visible light communications, electron. commun. jpn., 98, 2015, pp. 1–9. doi: 10.1002/ecj.11608 [58] c. gabriel, m. a. khalighi, s. bourennane, p. léon, v. rigaud, investigation of suitable modulation techniques for underwater wireless optical communication, in proceedings of the international workshop on optical wireless communications, pisa, italy, 22 october 2012; pp. 1–3. doi: 10.1109/iwow.2012.6349691 [59] h. m. oubei, c. shen, a. kammoun, e. zedini, k. h. park, x. sun, g. liu, c. h. kang, t. k. ng, n. s. alouini, light based underwater wireless communications, jpn. j. appl. phys., 57, 2018, 08pa06. doi: 10.7567/jjap.57.08pa06 [60] j. xu, m. kong, a. lin, y. song, x. yu, f. qu, n. deng, ofdmbased broadband underwater wireless optical communication system using a compact blue led, opt. comm., 369, 2016, pp.100–105. doi: 10.1016/j.optcom.2016.02.044 [61] c. lu, j. wang, s. li, z. 
xu, 60m/2.5gbps underwater optical wireless communication with nrz-ook modulation and digital nonlinear equalization, in proceedings of the conference on lasers and electro-optics (cleo), san jose, ca, usa, 5–10 may 2019; pp. 1–2. doi: 10.1364/cleo_si.2019.sm2g.6 [62] 802.15.7-2011—ieee standard for local and metropolitan area networks--part 15.7: short-range wireless optical communication using visible light. ieee 2011. doi: 10.1109/ieeestd.2011.6016195 [63] n. suzuki, h. miura, k. matsuda, r. matsumoto, k. motoshima, 100 gb/s to 1 tb/s based coherent passive optical network technology, j. lightwave technol., 36, 2018, pp. 1485–1491. doi: 10.1109/jlt.2017.2785341 [64] h. ma, l. lampe, s. hranilovic, integration of indoor visible light and power line communication systems, in proceedings of the ieee 17th international symposium on power line communications and its applications, johannesburg, south africa, 24–27 march 2013; pp. 291–296. doi: 10.1109/isplc.2013.6525866 [65] s. dimitrov, h. haas, information rate of ofdm-based optical wireless communication systems with nonlinear distortion, j. lightwave technol., 31, 2012, pp. 918–929. doi: 10.1109/jlt.2012.2236642 [66] m. a. khalighi, m. uysal, survey on free space optical communication: a communication theory perspective. ieee commun. surv. tutor., 16, 2014, pp. 2231–2258. doi: 10.1109/comst.2014.2329501 [67] g. napoli, j. v. m. avilés, r. m. prades, p. j. s. valero, survey and preliminary results on the design of a visual light communication system for radioactive and underwater scenarios, in proceedings of the 17th international conference on informatics in control, automation and robotics (icinco 2020), pp. 529-536. doi: 10.5220/0009889805290536 [68] s. jaruwatanadilok, underwater wireless optical communication channel modeling and performance evaluation using vector radiative transfer theory, ieee journal on selected areas in communications 26(9) (2008), pp. 1620–1627. doi: 10.1109/jsac.2008.081202 [69] f. 
akhoundi, j. a. salehi, a. tashakori, cellular underwater wireless optical cdma network: performance analysis and implementation concepts, ieee transactions on communications 63(3) (2015), pp. 882–891. doi: 10.1109/tcomm.2015.2400441 [70] z. wang, y. dong, x. zhang, s. tang, adaptive modulation schemes for underwater wireless optical communication systems. wuwnet '12: proceedings of the seventh acm international conference on underwater networks and systems, november 2012 article no. 40 pp. 1-2. doi: 10.1145/2398936.2398985 [71] x. he, j. yan, study on performance of m-ary ppm underwater optical communication systems using vector radiative transfer theory, isape2012, 2012, pp. 566-570.
doi: 10.1109/isape.2012.6408834 [72] s. tang, y. dong, x. zhang, receiver design for underwater wireless optical communication link based on apd, 7th international conference on communications and networking in china, 2012, pp. 301-305. doi: 10.1109/chinacom.2012.6417495 [73] p. swathi, s. prince, designing issues in design of underwater wireless optical communication system, 2014 international conference on communication and signal processing, 2014, pp. 1440-1445. doi: 10.1109/iccsp.2014.6950087 [74] m. chen, s. zhou, t. li, the implementation of ppm in underwater laser communication system, 2006 international conference on communications, circuits and systems, 2006, pp. 1901-1903. doi: 10.1109/icccas.2006.285044 [75] s. zhu, x. chen, x. liu, g. zhang, p. tian, recent progress in and perspectives of underwater wireless optical communication, progress in quantum electronics, 73, 2020, 100274. doi: 10.1016/j.pquantelec.2020.100274 [76] x. mi, y. dong, polarized digital pulse interval modulation for underwater wireless optical communications, oceans 2016 shanghai, 2016, pp. 1-4. doi: 10.1109/oceansap.2016.7485450 [77] m. doniec, d. rus, bidirectional optical communication with aquaoptical ii, 2010 ieee international conference on communication systems, 2010, pp. 390-394. doi: 10.1109/iccs.2010.5686513 [78] w. c. cox, j. a. simpson, j. f. muth, underwater optical communication using software defined radio over led and laser based links, 2011 milcom 2011 military communications conference, baltimore, md, usa, 2011, pp. 2057-2062. doi: 10.1109/milcom.2011.612762 [79] m. sui, x. yu, f. zhang, the evaluation of modulation techniques for underwater wireless optical communications, 2009 international conference on communication software and networks, 2009, pp. 138-142. doi: 10.1109/iccsn.2009.97 [80] b. cochenour, l. mullen, a. laux, phase coherent digital communications for wireless optical links in turbid underwater environments, oceans 2007, 2007, pp. 1-5. 
doi: 10.1109/oceans.2007.4449173 [81] y. zhao, a. wang, l. zhu, w. lv, j. xu, s. li, j. wang, performance evaluation of underwater optical communications using spatial modes subjected to bubbles and obstructions, optics letters 42(22) (2017), pp. 4699-4702. doi: 10.1364/ol.42.004699 [82] n. saeed, a. celik, t. y. al-naffouri, m. s. alouini, underwater optical wireless communications, networking, and localization: a survey, ad hoc netw., 94, 2019, 101935. doi: 10.1016/j.adhoc.2019.101935 [83] w. c. cox, b. l. hughes, j. f. muth, a polarization shift-keying system for underwater optical communications, oceans 2009, biloxi, ms, usa, 2009, pp. 1-4. doi: 10.23919/oceans.2009.5422258 [84] x. zhang, y. dong, s. tang, polarization differential pulse position modulation. in proceedings of uwnet '12: seventh acm international conference on underwater networks and systems november 2012 article no.: 41, pp. 1–2 doi: 10.1145/2398936.2398986 [85] g. cossu, r. corsini, a. m. khalid, s. balestrino, a. coppelli, a. caiti, e. ciaramella, experimental demonstration of high speed underwater visible light communications, 2013 2nd international workshop on optical wireless communications (iwow), 2013, pp. 11-15. doi: 10.1109/iwow.2013.6777767 [86] g. cossu, a. sturniolo, a. messa, d. scaradozzi, e. ciaramella, full-fledged 10base-t ethernet underwater optical wireless communication system, ieee journal on selected areas in communications, 36, no. 1, pp. 194-202, 2018. doi: 10.1109/jsac.2017.2774702 [87] g. schirripa spagnolo, d. papalillo, c. malta, s. vinzani, led railway signal vs full compliance with colorimetric specification, int. j. transp. dev. integr., 1, no. 3, pp. 568–577. 2017. doi: 10.2495/tdi-v1-n3-568-577 [88] t. hamza, m. a. khalighi, s. bourennane, p. léon, j. opderbecke, investigation of solar noise impact on the performance of underwater wireless optical communication links. opt. express 24(22) (2016), pp. 25832-25845. doi: 10.1364/oe.24.025832 [89] j. sticklus, m. 
hieronymi, p. a. hoeher, effects and constraints of optical filtering on ambient light suppression in led-based underwater communications, sensors 18(11) (2018), art. no. 3710. doi: 10.3390/s18113710 [90] t. j. petzold, volume scattering functions for selected ocean waters (no. sio-ref-72-78), scripps institution of oceanography, la jolla ca visibility lab, 1972. online [accessed 1 september 2021] https://apps.dtic.mil/dtic/tr/fulltext/u2/753474.pdf [91] e. petritoli, f. leccese, m. cagnetti, high accuracy buoyancy for underwater gliders: the uncertainty in the depth control, sensors 19(8) (2019), art. no. 1831. doi: 10.3390/s19081831 [92] e. petritoli, f. leccese, high accuracy attitude and navigation system for an autonomous underwater vehicle (auv), acta imeko 7 (2018) 2, pp. 3-9. doi: 10.21014/acta_imeko.v7i2.535 [93] f. leccese, m. cagnetti, s. giarnetti, e. petritoli, i. luisetto, s. tuti, r. durovic-pejcev, t. dordevic, a. tomašević, v. bursić, v. arenella, p. gabriele, a. pecora, l. maiolo, e. de francesco, g. schirripa spagnolo, r. quadarella, l. bozzi, c. formisano, a simple takagi-sugeno fuzzy modelling case study for an underwater glider control system, 2018 ieee international workshop on metrology for the sea; learning to measure sea health parameters (metrosea), bari, italy, 2018, pp. 262-267. doi: 10.1109/metrosea.2018.8657877 [94] m. tabacchiera, s. betti, s. persia, underwater optical communications for swarm unmanned vehicle network, 2014 fotonica aeit italian conference on photonics technologies, naples, italy, 2014, pp. 1-3. doi: 10.1109/fotonica.2014.6843839 [95] c. lodovisi, p. loreti, l. bracciale, s. betti, performance analysis of hybrid optical–acoustic auv swarms for marine monitoring, future internet, 10, no. 7, 2018, 65. doi: 10.3390/fi10070065 [96] f. zappa, s. tisa, a. tosi, s. cova, principles and features of single-photon avalanche diode arrays, sens. actuators a phys., 140, 2007, pp. 103–112.
doi: 10.1016/j.sna.2007.06.021 [97] j. kirdoda, d.c.s. dumas, k. kuzmenko, p. vines, z.m. greener, r.w. millar, m.m. mirza, g.s. buller, d.j. paul, geiger mode ge-on-si single-photon avalanche diode detectors, in proceedings of the 2019 ieee 16th international conference on group iv photonics (gfp), singapore, 28–30 august 2019. doi: 10.1109/group4.2019.8853918 [98] s. donati, t. tambosso, single-photon detectors: from traditional pmt to solid-state spad-based technology, ieee j. sel. top. quantum electron. 20(2014), pp. 204–211. doi: 10.1109/jstqe.2014.2350836 [99] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. qaraqe, performance analysis of single-photon avalanche diode underwater vlc system using arq, ieee photonics j. 9 (2017),
pp. 1–11. doi: 10.1109/jphot.2017.2743007 [100] r. hadfield, single-photon detectors for optical quantum information applications, nat. photon 3 (2009), pp. 696–705. doi: 10.1038/nphoton.2009.230 [101] d. chitnis, s. collins, a spad-based photon detecting system for optical communications, j. lightwave technol., 32, 2014, pp. 2028–2034. doi: 10.1109/jlt.2014.2316972 [102] e. sarbazi, m. safari, h. haas, statistical modeling of single-photon avalanche diode receivers for optical wireless communications, ieee trans. commun., 66, 2018, pp. 4043–4058. doi: 10.1109/tcomm.2018.2822815 [103] m. a. khalighi, t. hamza, s. bourennane, p. léon, j. opderbecke, underwater wireless optical communications using silicon photo-multipliers, ieee photonics j., 9, 2017, pp. 1–10. doi: 10.1109/jphot.2017.2726565 [104] z. hong, q. yan, z. li, t. zhan, y. wang, photon-counting underwater optical wireless communication for reliable video transmission using joint source-channel coding based on distributed compressive sensing, sensors, 19, 2019, 1042. doi: 10.3390/s19051042 [105] s. pan, l. wang, w. wang, s. zhao, an effective way for simulating oceanic turbulence channel on the beam carrying orbital angular momentum, sci. rep. (9) (2019), pp. 1–8. doi: 10.1038/s41598-019-50465-w [106] m. sait, x. sun, o. alkhazragi, n. alfaraj, m. kong, t.k. ng, b.s. ooi, the effect of turbulence on nlos underwater wireless optical communication channels, chinese opt. lett. 17 (2019), art. no. 100013. doi: 10.3788/col201917.100013 [107] l. zhang, d. chitnis, h. chun, s. rajbhandari, g. faulkner, d. o'brien, s. collins, a comparison of apd- and spad-based receivers for visible light communications, j. lightwave technol. 36 (2018), pp. 2435–2442. doi: 10.1109/jlt.2018.2811180 [108] c. wang, h. y. yu, y. j. zhu, t. wang, y. w.
ji, multi-led parallel transmission for long distance underwater vlc system with one spad receiver, opt. commun., 410, 2018, pp. 889–895. doi: 10.1016/j.optcom.2017.11.069 [109] n. e. farr, c. t. pontbriand, j. d. ware, l.-p. a. pelletier, non-visible light underwater optical communications, ieee third underwater communications and networking conference (ucomms), lerici, italy, 2016, pp. 1-4. doi: 10.1109/ucomms.2016.7583454 [110] j. marshall, vision and lack of vision in the ocean, current biology, volume 27(11), 2017, pp. r494-r502. doi: 10.1016/j.cub.2017.03.012 [111] xiaobin sun, wenqi cai, omar alkhazragi, ee-ning ooi, hongsen he, anas chaaban, chao shen, hassan makine oubei, mohammed zahed mustafa khan, tien khee ng, mohamed-slim alouini, boon s. ooi, 375-nm ultraviolet-laser based non-line-of-sight underwater optical communication, optics express, 26(10) (2018), pp. 12870-12877. doi: 10.1364/oe.26.012870 [112] xiaobin sun, meiwei kong, omar alkhazragi, chao shen, ee-ning ooi, xinyu zhang, ulrich buttner, tien khee ng, boon s. ooi, non-line-of-sight methodology for high-speed wireless optical communication in highly turbid water, optics communications, 461, 2020, 125264. doi: 10.1016/j.optcom.2020.125264 [113] g. schirripa spagnolo, f. leccese, led rail signals: full hardware realization of apparatus with independent intensity by temperature changes, electronics 10 (2021), art. no. 1291. doi: 10.3390/electronics10111291 [114] r. filippo, e. taralli, m. rajteri, leds: sources and intrinsically bandwidth-limited detectors, sensors, 17, 2017, 1673. doi: 10.3390/s17071673 [115] g. schirripa spagnolo, f. leccese, m. leccisi, led as transmitter and receiver of light: a simple tool to demonstration photoelectric effect, crystals, 9, 2019, 531. doi: 10.3390/cryst9100531 [116] bivar uv5tz leds datasheet. online [accessed 01 september 2021] https://www.mouser.it/datasheet/2/50/biva_s_a0002780821_1-2262009.pdf [117] mic3289 datasheet.
online [accessed 01 september 2021] https://ww1.microchip.com/downloads/en/devicedoc/mic3289.pdf [118] c. s. a. gong, y. c. lee, j. l. lai, c. h. yu, l. r. huang, c. y. yang, the high-efficiency led driver for visible light communication applications, scientific reports, 6, no. 1, 2016, pp. 1-7. doi: 10.1038/srep30991 [119] ltc1050 data sheet. online [accessed 01 september 2021] https://www.analog.com/media/en/technical-documentation/data-sheets/1050fb.pdf

Accuracy – Review of the Concept and Proposal for a Revised Definition

acta imeko, issn: 2221-870x, december 2020, volume 9, number 5, pp. 414 - 418

C.
Müller-Schöll1

1 Mettler-Toledo International Inc., Greifensee, Switzerland, christian.mueller-schoell@mt.com

Abstract: Accuracy as a concept is widely used in metrology. It has undergone a historical development and is defined and used differently in different documents, communities and usage situations. The definitions that exist at present are sometimes not clear, not unambiguous and sometimes not even useful. This paper explains the difficulties with the present definitions (sections 3 and 4), throws a spotlight on present-day use of the concept (section 5), clarifies the question of whether accuracy is a qualitative concept (section 6) and finally presents conceptual ideas and a proposal for a new wording of a sustainable definition of accuracy (section 7). This paper is intended to support the highly esteemed work of JCGM WG2.

Keywords: accuracy; terminology; metrology

1. Introduction, motivation

Language is subject to ongoing change, due to its usage and due to its users. The term "accuracy" in the context of metrology has undergone changes in its definition, in its use and in its understanding and application in the past. There is still uncertainty today about its proper use and its proper meaning, and there is more than one normative document defining accuracy, using different words. It might be time to reflect on the language, and it might be time for a clarification resulting in a clear and (for the field of metrology) universally applicable definition. It turns out that two distinguishable concepts are in use: one is taken from the International Vocabulary of Metrological Terms and Concepts [1], henceforth referred to as "the VIM"; the other one is more prevalent in standard documents published by the International Organization for Standardization (ISO)1.

2. Review of the VIM definition2

2.1.
Text review

¹ The VIM is not considered an "ISO document" here, although it is also published as "ISO Guide 99" under an ISO name.

The definition of accuracy according to [1] reads:

"2.13 (3.5) measurement accuracy; accuracy of measurement; accuracy
closeness of agreement between a measured quantity value and a true quantity value of a measurand
NOTE 1 The concept 'measurement accuracy' is not a quantity and is not given a numerical quantity value. A measurement is said to be more accurate when it offers a smaller measurement error.
NOTE 2 The term "measurement accuracy" should not be used for measurement trueness and the term "measurement precision" should not be used for 'measurement accuracy', which, however, is related to both these concepts.
NOTE 3 'Measurement accuracy' is sometimes understood as closeness of agreement between measured quantity values that are being attributed to the measurand."

Additionally, an "annotation" can be found in the internet version of the VIM, reading:

"Annotation (informative) [9 June 2016] Historically, the term "measurement accuracy" has been used in related but slightly different ways. Sometimes a single measured value is considered to be accurate (as in the VIM3 definition), when the measurement error is assumed to be small. In other cases, a set of measured values is considered to be accurate when both the measurement trueness and the measurement precision are assumed to be good. Sometimes a measuring instrument or measuring system is considered to be accurate, in the sense that it provides accurate indications. Care must therefore be taken in explaining in which sense the term "measurement accuracy" is being used. In no case is there an established methodology for assigning a numerical value to measurement accuracy."

2.2. Text analysis

The VIM definition uses the concept "closeness of agreement"; however, the term "closeness" is not defined and is therefore subject to interpretation.
The definition speaks of the closeness of agreement of "a" measured quantity value, so it applies to a single measured value. Note 2 mentions a "relation" between accuracy and precision and between accuracy and trueness, but leaves open what kind of relation that is. Although accuracy is "not a quantity" [1], a comparative statement like "more accurate" is possible according to Note 1. It is left open how two things can be judged "more" or "less" accurate when accuracy is not a quantity.

² In the following paragraphs we investigate some definitions of the term accuracy as examples. Other definitions might exist.

A relation between "measurement accuracy" and "measurement error" is mentioned in Note 1. "Measurement error" in turn is given a relation to "quantity value" in its VIM definition (2.16), meaning that both "measurement error" and "quantity value" are of the same kind. "Quantity value" in turn is used as the first component of a "measurement result" (VIM definition 2.9), where a measurement result consists of a first component, the "value", and a second component, the "uncertainty associated with it". Consequently, measurement error is clearly a one-dimensional property (it is a one-dimensional quantity value) and not a two-dimensional vector like "measurement result" (with its two components "value" and "uncertainty"). The "annotation" (2016) that can be found in the internet version of the VIM clearly points at ambiguities of interpretation of the definition, calling them "historical". But the text of the annotation gives no hint that either of the two interpretations given there is considered outdated, so both still exist.
The two interpretations
a) either relate "a" measured value to its error (note: "a" value, not "values"), so "smaller error = more accurate",
b) or relate "a set of measured values" to both its trueness and its precision (where both the trueness and the precision need to be "good" for the set to be considered "accurate").

But interpretation b) is not consistent in itself, because even a single value can be attributed both a trueness and a precision (!) if the distribution is known from which this single value is drawn (Type A uncertainty of a single measured value). So the relation of "a" measured value to its trueness and its precision is missing here. The NPL has published interpretations referring to the VIM definition in which accuracy is independent of precision [2] and in which a set of measurements (!) is said to have "high accuracy" while having "low precision" at the same time. The consistency with case b) mentioned above appears unclear or not given.

2.3. Conclusion regarding the VIM

With regard to the VIM definition, it remains unclear how accuracy is defined and how accuracy can be expressed. It is said to be "not a quantity", but the VIM clearly does not state that it is "qualitative". Accuracy can be judged "more accurate [than]", but accuracy cannot be assigned a value; it is unclear how these go together. Accuracy is stated to relate to "error" (smaller error = more accurate); however, an additional property like dispersion, variance or uncertainty is not an explicitly mentioned part of this definition (only of the notes). On the other hand, accuracy is said to be "related to both" trueness and precision (Note 2), where "precision" (VIM 2.15) is clearly not related to error but to dispersion. In the view of the author, this is a contradiction within the VIM.
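The set-of-values reading (interpretation b) can be made concrete with a minimal sketch, assuming trueness is judged from the error of the mean with respect to a reference value and precision from the sample standard deviation; the reference value and readings below are hypothetical illustration data, not taken from any of the cited documents:

```python
import statistics

def trueness_and_precision(values, reference):
    """Illustrative only: trueness as the (signed) error of the mean
    with respect to a reference value, precision as the sample
    standard deviation of the set of values."""
    mean = statistics.mean(values)
    trueness_error = mean - reference      # smaller magnitude = "truer"
    precision = statistics.stdev(values)   # smaller spread = "more precise"
    return trueness_error, precision

# Five hypothetical repeated indications of a 100.00-unit reference:
readings = [100.02, 100.05, 99.98, 100.04, 100.01]
err, prec = trueness_and_precision(readings, 100.00)
```

Note that the sketch needs a set of values: applied to a single reading it has no precision to report, which is exactly the gap in interpretation b) discussed above.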
Since the idea of accuracy being applied to more than one value and the idea of dispersion of values are not visible in the VIM definition text (not considering the non-normative annotation), the commonly used dartboard model with dispersing hits cannot be considered an appropriate illustration of the definition of accuracy according to the VIM. Accuracy and precision are on the same hierarchical level in this concept (they are like apples and pears). Within this concept it is possible for a result to be characterised as, e.g., "accurate but not precise", which can be found in publications (e.g. [2]).

3. Review of the "ISO definition"

3.1. Text review ISO (ISO 5725-1:1994)

The title of the standard ISO 5725-1 [3] is "Accuracy (trueness and precision) of measurement methods and results". This standard gives definitional phrases in two different locations: one rather explanatory (yet definitional in character) in Section 0.1, and a definition in Section 3.6 (in Chapter 3, named "Definitions").

Section 0.1 of ISO 5725-1 uses the two terms "trueness" and "precision" to describe the accuracy of a measurement method. "Trueness" refers to the "closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value". "Precision" refers to the "closeness of agreement between test results". Additionally, we find a so-declared "definition" in the same standard:

"3.6 accuracy: the closeness of agreement between a test result and the accepted reference value.
NOTE 2 The term accuracy, when applied to a set of test results, involves a combination of random components and a common systematic error or bias component." [emphasis by the author]

3.2. Text analysis – ISO 5725-1:1994

Section 0.1 of ISO 5725-1:1994

According to Section 0.1 of [3], it is both "trueness" and "precision" that are used to form accuracy.
The text of 0.1 refers to accuracy as a property of a measurement method (and not of a single result, nor of a set of results). This is a clear, yet understandable, difference from the VIM concept, since the focus of [3] is methods. Both trueness and precision are explained in [3] using the words "a large number of" results, or at least "results" (in plural form). According to this, accuracy as explained here cannot be applied to a single measurement result, since the language does not cover a single result (again in contradiction to the VIM).

Precision is not the same as uncertainty. So accuracy – according to this concept – is not related to what is usually perceived as the quality of a "measurement result" (VIM 2.9), which consists (VIM 2.9, Note 2) of "a measured quantity value and a measurement uncertainty" [emphasis by the author] and is thus a two-dimensional statement. The two dimensions of "measurement result" introduced in this definition are:
• the first dimension, either the measured quantity value or the measurement error (with reference to a reference value) calculated from it, and
• a second dimension characterising the dispersion, which is the uncertainty associated with the first component.

The wording of 0.1 of [3] is in contradiction with the title of the standard, which announces the accuracy of both "methods and results". The accuracy of results is neither explained nor even mentioned here.

Section 3.6 of ISO 5725-1:1994

In Section 3.6 of the same standard, a definition is given for accuracy. This definition is the same as the VIM definition in that it refers to "a" test result (singular form). Methods are not mentioned in the definition. However, a note (Note 2) has been added, broadening the scope to "a set of … results". This circumstance in the note is inconsistent with the definition.
The note is not in alignment with the definition, and a definition that can be applied to the accuracy of a method is missing in the whole standard, despite its title. Additionally, the restriction "when applied to a set of results" is misleading (or maybe even wrong), since a single value can also be assigned a dispersion originating from the population this single value was drawn from (as explained above). According to 3.6 of [3], accuracy embeds both trueness and precision. This defines a hierarchy: trueness and precision are on the same level, and accuracy combines both, defining a parent hierarchical level. (Accuracy is like fruit, while trueness and precision are like apples and pears.) A two-dimensional dartboard model is frequently used to visualise the concept, depicting trueness and precision with a set of hits (by means of the centring of the mean value and the spread of the hits).

3.3. Conclusion regarding ISO 5725-1:1994

It seems that in the definition of accuracy given in [3], the authors intended to take both a "deviation component" and a "spread component" into account. However, instead of measurement uncertainty, precision was chosen to describe what was missing when accuracy "at one time" (quoted from Section 0.6) contained only what we today call trueness. Nevertheless, the different statements within the standard are hard to match (if not impossible to match). Note 2 broadens the scope of the definition to "a set of results", which is not consistent with the mere text of the definition.

3.4. Text review ISO IWA 15

ISO IWA 15 [4] reproduces the VIM definition of accuracy. However, in a specific definition of "uncertainty" (Section 3.1.3), we find the following phrase in a note: "Uncertainty is inversely related to accuracy, and is a quantity value." This is one of the rare occasions on which, in the defining literature, uncertainty explicitly comes into play in relation to the concept of accuracy.
Nevertheless, it is neither detailed what this "relation" looks like, nor how a relation would lead from a non-quantitative accuracy to a quantitative uncertainty, nor if and how "inversely" is to be understood mathematically. Chapter 5.1 of [4] reads: "Accuracy may be improved by improving precision and trueness." This hints again at the concept of [3], where accuracy is (whatever kind of) a combination of trueness and precision and accuracy is on a higher hierarchical level than trueness and precision (note that at this point, no connection to "uncertainty" is made). However, immediately afterwards we read: "Accuracy, precision and trueness are conceptual terms. Quantitative expressions of these concepts are given in terms of uncertainty, random error and systematic error, respectively." This is clarifying for accuracy: accuracy is said to be quantified by uncertainty, which in turn is said to be a combination of systematic and random errors. The latter, of course, is wrong, since uncertainty according to the concept of the GUM [5] is not just a combination of random and systematic "errors". (In order to clearly separate these conceptual areas, the GUM has introduced "Type A" and "Type B" uncertainties and does not support the wording quoted from [4] above.) Later in [4], accuracy is even expressed as a quantity value (e.g. in Table 2, column name "typical accuracy", whose content is values). This is in clear contradiction to the definition given in the same document. Also later in the standard (Section B.6.1.4), "accuracy" and "precision" are mentioned on the same hierarchical level, which again is an internal contradiction within the document, and in Section B.7.1 systematic error is equated to accuracy, which also contradicts the document internally.

3.5. Conclusion regarding ISO IWA 15

[4] does not clarify the matter either. It adds ideas of uncertainty to the concept of accuracy; however, it also appears not to be fully internally consistent.

4.
Current use of the term accuracy

A recent high-level metrology publication adds an interesting view to the discussion: the paper titled "Evaluation of the accuracy, consistency, and stability of measurements of the Planck constant…" by A. Possolo, S. Schlamminger et al. [6]. In Section 3 of the paper (a section named "Accuracy"!), the authors detail the "accuracy requirements" of the CCM regarding the redefinition of the kilogram. In fact, the requirements made by the CCM are not given this specific name ("accuracy requirements") in the original CCM document [7]. In the publication, these requirements and the values that are compared against them are solely expressed in terms of uncertainties. (It is obvious that "errors" are not a subject in this field of work.) The only accuracy measure (quantification) the authors take into consideration (and relate to the title of the publication) is uncertainty. This is an encouraging indicator that, according to recognised present-day metrologists, uncertainty must be at least part of the concept of accuracy.

5. Qualitative or quantitative?

Note 1 to the VIM definition of accuracy states: "The concept 'measurement accuracy' is not a quantity and is not given a numerical quantity value." This statement (being "not a quantity") obviously tempts a number of authors to make statements of accuracy being "qualitative", e.g. [2] and [8]. However, this is not what is said in the referenced VIM text, and it is probably also not the intention of the wording of the VIM. There is a fine yet significant difference between being "not a quantity" and being "qualitative": accuracy is said to be "not a quantity" [1] and not to be given "a quantity value".
This, according to the understanding of the author, is meant to express that – in simple words – there is no globally accepted number scale and no unit to express accuracy that would make accuracy a metrologically traceable or metrologically comparable property (= a quantity). This note statement is probably intended to separate the concept of accuracy from concepts like "quantity" or "uncertainty", for which there is a common, global understanding of how to quantify them. However, there is clearly no statement and no hint anywhere in the VIM definition that accuracy is qualitative! Common usage of "accuracy" requests comparative statements like "more" or "less" accurate, which indicates the necessity to give accuracy "some kind of number" (a value), at least in a given situation or for a given purpose. This gives room for a user to apply a purpose-oriented algorithm to obtain and assign an indicative accuracy number which allows comparative statements in a given situation.

6. Synthesis, proposal for a solution

It appears that the concept "accuracy" as we find it today in the VIM and in ISO documents is a leftover of an ongoing, not yet completed development. It is stated that "at one time" ([3], 0.6) the concept of accuracy, perceived as one-dimensional (related only to measurement error), was amended by the concept of precision as additional information to take account of the possible dispersion of values as a second dimension. Probably the concept of uncertainty was not yet fully established at that time. Today, however, according to the VIM, it is exactly measurement uncertainty that is "characterizing the dispersion of the quantity values being attributed to the measurand" ([1] 2.26), which is exactly what should be used if a "dispersion dimension" is to be considered in the concept of accuracy in addition to a "trueness dimension".
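Such a purpose-oriented quantification can be sketched in a few lines; the quadrature combination of measurement error and standard measurement uncertainty used here is one illustrative choice among the possible algorithms, not a normative rule, and the numeric inputs are invented:

```python
import math

def accuracy_figure(error, uncertainty):
    """One possible purpose-oriented algorithm: combine the measurement
    error and the standard measurement uncertainty in quadrature.
    The resulting number is only comparable with figures produced
    by the same algorithm (no claim of metrological traceability)."""
    return math.sqrt(error ** 2 + uncertainty ** 2)

# Two hypothetical measurement results for the same measurand:
a = accuracy_figure(error=0.10, uncertainty=0.05)   # ≈ 0.112
b = accuracy_figure(error=0.02, uncertainty=0.12)   # ≈ 0.122
# Within this algorithm, result "a" would be ranked "more accurate",
# even though result "b" has the smaller error.
```

The point of the sketch is exactly the one made above: the number is indicative and comparative within one chosen algorithm, not a metrologically comparable quantity value.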
In addition, it is historically obvious and intentional that there is not one metrologically traceable and metrologically comparable ([1] 2.46) way of quantifying accuracy. Yet it is necessary in practice that accuracy can be quantified in order to compare or rank. These quantifications may be done using an algorithm (e.g. a mathematical equation), which may follow a purpose given by the specific situation. Therefore, we propose the introduction of a new definition for "accuracy", which should take into account:
• the modern understanding of a measurement result, consisting of a "measured quantity value and a measurement uncertainty" (VIM 2.9, Note 2),
• backwards compatibility, still allowing accuracy to be combined from a combination of trueness and precision,
• the possibility of applying the concept of accuracy to a single value, to a set of values, to a method and to a procedure,
• clarification that "not a quantity" does not mean "being qualitative",
• the fact that there may be various possible ways of assigning values to accuracy for making comparative statements.

This could be realised with the following wording, which is submitted for further discussion and consideration:

accuracy (measurement accuracy, accuracy of measurement)
Term to describe the closeness of agreement between one or several measurement results and a true quantity value. Accuracy consists of a combination of a trueness property and a dispersion property. Any algorithm to combine these two to yield a quantification may follow its intended purpose. Quantifications of accuracy which originate from the same algorithm may offer comparability (statements like "more accurate" or "less accurate" are then possible).

NOTE 1: The trueness property is preferably the measurement error (VIM 2.16) and the dispersion property is preferably the measurement uncertainty (VIM 2.26).
NOTE 2: Accuracy according to this definition can be applied, if the necessary information is available, to a set of measurement results, to a single measurement result, to a measurement method and to a measurement procedure.

NOTE 3: Possible algorithms can be summations of, e.g., measurement error and measurement uncertainty in quadrature, or of the absolute value of the measurement error and the quadrature of the measurement uncertainty, etc.

A two-dimensional dartboard model can also be used to illustrate the concept; see Figure 1.

Figure 1: Accuracy dartboard model; x-axis: improving dispersion, y-axis: improving trueness.

The x-axis is labelled "improving dispersion", the y-axis "improving trueness". It is left to the user whether "dispersion" is substituted by "precision" or by "uncertainty". The definition encompasses both concepts, which is no problem, because according to the definition proposed above, accuracy delivers neither mathematically unambiguous nor metrologically traceable or comparable figures. It is admitted that the definition proposed above is rather lengthy, and it would be desirable for definitions to be brief and clear without "notes". However, given the history of this concept and the ambiguities explained in this paper, it appears necessary to use more words to connect the new wording to its history and to clarify misconceptions.

7. Summary

In this paper, we have analysed current use of the concept "accuracy". By means of example, we investigated three normative documents in which accuracy is defined. We have detected various incompatibilities, even within the same document, but also in the community using this concept. We have tried to understand the historical development that might have affected the understanding of this concept and conclude with a synthesis proposal for a new definition that encompasses historical as well as modern concepts and thus offers full backwards compatibility.

8.
References

[1] JCGM 200:2012, International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM), 3rd edition, 2008 version with minor corrections (including "VIM definitions with informative annotations", last updated 29 April 2017), accessed 06/2020.
[2] NPL, The difference between accuracy & precision. Online: https://www.npl.co.uk/skills-learning/outreach/school-posters/npl-schools-poster-_accuracy-precision-v7-hr-nc.pdf (accessed 06/2020)
[3] International Organization for Standardization, ISO 5725-1:1994, Accuracy (trueness and precision) of measurement methods and results, Geneva, Switzerland.
[4] International Organization for Standardization, ISO IWA 15:2015, Specification and method for the determination of performance of automated liquid handling systems, Geneva, Switzerland.
[5] JCGM 100:2008, Evaluation of measurement data – Guide to the expression of uncertainty in measurement (GUM 1995 with minor corrections).
[6] A. Possolo, S. Schlamminger et al., Evaluation of the accuracy, consistency, and stability of measurements of the Planck constant…, Metrologia 55, 29.
[7] Recommendation of the Consultative Committee for Mass and Related Quantities submitted to the International Committee for Weights and Measures, Recommendation G 1 (2013) on a new definition of the kilogram. Online: https://www.bipm.org/cc/ccm/allowed/14/31a_recommendation_ccm_g1(2013).pdf (accessed 06/2020)
[8] NIST Engineering Statistics Handbook. Online: https://www.itl.nist.gov/div898/handbook/mpc/section1/mpc113.htm (accessed 01/2020)
Development and Characterisation of a Self-Powered Measurement Buoy Prototype by Means of a Piezoelectric Energy Harvester for Monitoring Activities in a Marine Environment

ACTA IMEKO
ISSN: 2221-870X
December 2021, Volume 10, Number 4, pp. 201–208

Damiano Alizzio¹, Antonino Quattrocchi¹, Roberto Montanini¹

¹ Department of Engineering, University of Messina, C.da di Dio, Vill. S. Agata, 98166, Messina, Italy

Section: Research Paper

Keywords: piezoelectric patch; ripple waves; uncertainty estimation; motion frequency transformer

Citation: Damiano Alizzio, Antonino Quattrocchi, Roberto Montanini, Development and characterization of a self-powered measurement buoy prototype by means of piezoelectric energy harvester for monitoring activities in a marine environment, Acta IMEKO, vol. 10, no. 4, article 31, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-31

Section Editors: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy

Received September 10, 2021; in final form December 11, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Damiano Alizzio, e-mail: damiano.alizzio@unime.it
Abstract

In the interest of our society, for example in smart cities but also in other specific contexts, environmental monitoring is an essential activity for measuring the quality of different ecosystems. In fact, the need to obtain accurate measurements, extended in space and time, has become considerably more relevant. In very large environments, such as marine ones, technological solutions are required that use smart, automatic and self-powered devices in order to reduce human maintenance service. This work presents a simple and innovative layout for a small self-powered floating buoy, with the aim of measuring and transmitting the detected data for visualisation, storage and/or elaboration. The power supply was obtained using a cantilever harvester, based on piezoelectric patches, converting the motion of ripple waves. Such waves are characterised by frequencies between 1.50 Hz and 2.50 Hz with oscillations between 5.0° and 7.0°. Specifically, a dedicated experimental setup was created to simulate the motion of ripple waves and to evaluate the suitability of the proposed design and the performance of the used harvester. Furthermore, a dynamic analytical model of the harvester has been defined and the uncertainty correlated with the harvested power has been evaluated. Finally, the harvested voltage and power have shown that the presented buoy behaves like a frequency transformer. Hence, although the used cantilever harvester does not work at its resonant frequency, the harvested electricity undergoes a significant increase.

1. Introduction

The purposes of monitoring procedures for the marine environment are multiple and useful both for measuring the quality of the specific ecosystem and for estimating the global anthropogenic impact, also in the field of smart cities [1], [2]. The most widespread activities range from the detection of pollutants to meteorological or climatic measurements [3]. However, other aims concern the use of marine sensors for safety evaluation in coastal areas and protection from seismic events or biological hazards [4]. The scientific literature already presents technological solutions in the field of floating measurement devices with specialised sensors and data communication systems for different applications [5]. Measurement buoys provide a reading service for chemical and/or physical quantities through sensors and low-energy electronics and guarantee stable communication to the ground through suitable data transfer channels (Wi-Fi, Bluetooth, GSM) [6], [7]. Although only a modest power input is foreseen for supplying the on-board instrumentation, the need to make these devices energy self-sufficient remains open. For this purpose, technologies are employed to ensure a buoy's independence from human presence and a constant source of electricity [8], [9]. Traditionally, measurement buoys are powered by photovoltaic systems and by wind turbines. Alippi et al. [10], Albaladejo et al. [11] and Hormann et al. [12] employed different-size photovoltaic cells to feed sensor buoy systems for sea trials. In these studies, the main limitations were found in the dependence of electricity production on the sun, lacking in bad weather conditions and inactive during the night, and in the need to equip the measurement buoys with bulky and heavy systems and batteries. For wind turbines, instead, the meteorological conditions are not optimal at the sea surface. In fact, the wind speed at 1 m above sea level is a third of that at 80 m, which results in a 73 % reduction in wind power [13]. Recently, new energy sources, such as tidal currents and sea or ocean waves, have been investigated. Trevathan et al. [14] proposed tidal currents to supply sensors for marine measurements and compared the performances of different types of wind turbines.
However, they concluded that such devices may not be cost-effective and may be risk prone to apply, due to biofouling and entanglement by drifting algae and sea-grass wrack. Sea or ocean waves represent an attractive renewable source, indirectly related to both the sun and the wind, and with a high energy density [15]. Pelc et al. [16] stated that wave energy is vast and more reliable than most renewable sources. Furthermore, they highlighted that such energy at a given site is available up to 90 % of the time, while photovoltaic and wind energy tend to be available only 20-30 % of the time. The adoption of an energy harvester based on piezoelectric transducers (PEH) is an advantageous solution for converting wave motion into electric energy. PEHs are a well-known technique in the literature. For example, Bolzea et al. [17] used such devices in a cantilever configuration, demonstrating that the maximum power output is generated when mechanical resonance is reached. Toyabur et al. [18] designed a multimode PEH, consisting of four elements connected in parallel and in a cantilever configuration, to achieve different low-frequency resonance modes (10-20 Hz). The authors showed that their system generates about four times more power than a single PEH. Pradeesh et al. [19] analysed, both experimentally and numerically, the effect of a proof mass fixed to a PEH in a cantilever configuration. The researchers obtained the best results when the proof mass was glued close to the clamped end. These works highlight the importance of a correct design for a cantilever-type PEH, but they do not discuss the effect of the proof mass (i.e. the variation of the resonant frequency of the PEH) on the harvested power. Recently, Montanini et al. [20] developed a PEH using a glass-fibre-reinforced beam support.
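The proof-mass effect on the resonant frequency noted above can be sketched with a generic single-degree-of-freedom estimate (Rayleigh approximation for a uniform cantilever with a tip mass). This is not the analytical model developed by the authors; the elastic modulus, density and tip mass below are purely illustrative assumptions, only the beam dimensions echo the 105 mm × 35 mm × 1 mm support described later in the paper:

```python
import math

def cantilever_f1(E, I, L, m_beam, m_tip=0.0):
    """Approximate first natural frequency (Hz) of a uniform cantilever
    with an optional tip (proof) mass: tip stiffness k = 3EI/L^3 and an
    effective beam mass of 33/140 of the distributed mass (Rayleigh)."""
    k = 3.0 * E * I / L ** 3
    m_eff = m_tip + (33.0 / 140.0) * m_beam
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Assumed illustrative values (NOT measured prototype properties):
E = 20e9                       # Pa, assumed GFRP modulus
b, h, L = 0.035, 0.001, 0.105  # m, beam width, thickness, free length
I = b * h ** 3 / 12.0          # second moment of area
m_beam = 1900.0 * b * h * L    # kg, assumed density 1900 kg/m^3
f_bare = cantilever_f1(E, I, L, m_beam)
f_loaded = cantilever_f1(E, I, L, m_beam, m_tip=0.010)  # 10 g proof mass
# Adding the proof mass lowers the first resonance, which is why a
# suitable proof mass can pull a stiff cantilever towards the low
# frequencies available from wave-driven motion.
```

With these assumed numbers the loaded resonance drops into the low tens of hertz, the same order as the multimode designs cited above; the point is the trend, not the specific values.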
The authors studied the correlation between the mechanical working frequency and the harvested electrical power, analysing the deflection shape of the PEH under operating conditions by means of a scanning laser-Doppler vibrometer. After that, they investigated the conversion efficiency of this PEH [21] and the applicability of a low-power single-stage converter, able to automatically follow the changes in the resistive component of the output impedance, in order to maximise the energy yield [22], [23]. In the field of power supply for measurement buoys, PEHs have been applied only sporadically, due to the low frequency of sea and ocean waves. Wu et al. [24] developed a PEH fixed to a floating buoy that was anchored to the ocean floor. This device consisted of several cantilevers, on which many piezoelectric patches (PCs) were attached. The authors analysed the size effect of the float and derived a numerical model to calculate the harvested energy. The research findings show that up to 24 W of electric power can be generated with piezoelectric cantilevers 1 m in length and a buoy 20 m in length. Nabavi et al. [25] proposed the design of a beam-to-column piezoelectric system, able to power a large floating and instrumented ocean buoy. They derived and experimentally verified the equations of the electromechanical behaviour of the device, demonstrating that the height amplitude and the low frequency of the wave guarantee the best performance. Additionally, using a baffled water tank, they developed a self-tuning buoy, which works based on the frequency of ocean waves. Recently, Alizzio et al. [26], [27] proposed an instrumented spar and fixed-point buoy, equipped with a PEH, to convert the energy of the wave motion into electricity through PCs glued on deformable and floating bands.
the buoy was designed, numerically simulated and experimentally verified, obtaining a light structure able to self-power its on-board sensors and to carry out data transmission. in this paper, the performance of a simple and innovative layout for a small measurement buoy, supplied by a cantilever peh, is studied. the structure was designed to convert the motion of the ripple waves into a cyclic oscillation with high-frequency harmonics, to which the peh is subjected thanks to a suitable proof mass. in this context, a dedicated experimental setup was implemented to simulate the motion of the ripple waves and to evaluate its effect on the electric response of the peh. such motion was discretised into amplitude and frequency configurations, characterising the dynamics of the proposed buoy with an analytical model of the peh. finally, the harvested power was estimated and the related uncertainty evaluated.

2. materials and methods

2.1. prototype of the measurement floating buoy

the presented buoy prototype (figure 1) consists of a floating structure, manufactured with a 3d printer using a highly durable photo-polymeric resin. it is divided into two pieces: the bottom one was designed with a hemispherical shape for an appropriate matching with the sea waves, while the upper one has a cylindrical shape, where the peh was set by a fixed joint. the two parts are connected by means of a thread, so as to hermetically contain a dedicated measurement instrumentation for monitoring the marine environment, together with a data transmission system. this peh converts the alternating rotational (rolling) motion of the buoy, while the latter is subjected to ripple waves, into electrical energy, in order to provide adequate power to supply all the electronic devices within the buoy.

figure 1. concept design of the measurement floating buoy prototype. 1 = hemispherical part, 2 = cylindrical part, 3 = pc and 4 = proof mass.

acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 203

2.2. piezoelectric energy harvester (peh)

the used peh consists of a cantilever support, on which a pc and a proof mass were glued at the opposite ends. the pc (duraact p-876 a.12, physik instrumente gmbh) has a size of 61.0 mm × 35.0 mm × 0.5 mm and an electrical capacitance of 90 nf. the active element of the pc, a thin layer of piezoceramic pic255 powder, is encapsulated in a kapton case. the pc has a symmetrical structure; if it is deformed, the same amount of voltage, opposite in charge, is generated on the two electrode surfaces. these devices have the additional advantage that they can also be used as actuators: when driven by an alternating voltage, the pcs undergo a multiaxial deformation that depends on the amplitude and frequency of the power signal [28], [29]. the cantilever support, measuring 105 mm × 35 mm × 1 mm, was made by manual layering of three layers of 0°/90° oriented glass fibre and epoxy resin.

2.3. experimental setup and procedure

the experimental setup (figure 2a) included the buoy with its peh and an electrodynamic shaker (mod. s 513, tira), driven by a power amplifier (mod. baa 120, tira) and a function generator (mod. 33220 a, agilent). in order to simulate the motion of the ripple waves in amplitude and frequency, a conversion system (figure 2b) for the linear motion of the shaker was implemented. it consists of a frame with two flanges, able to rotate the buoy around a fixed horizontal axis by means of two bearings. the rolling motion was imposed by connecting the stinger of the shaker to the bottom of the buoy (i.e. the vertex of the hemisphere) using a ball joint. the geometry of the conversion system is reported in table 1. the imposed rolling motion of the buoy was monitored by a rotational transducer (mod.
0600-0000, trans-tek), set on the fixed horizontal axis of the two flanges. an oscilloscope (mod. tds 5054b, tektronix) was employed to measure this oscillation signal and the voltage response of the peh on a resistive load of 100 kω. specifically, this resistive load was chosen in accordance with the maximum power transfer theorem, knowing the internal impedance of the pc [20]. the behaviour of the buoy was studied by varying the working frequency, the amplitude of the imposed rolling motion and the proof mass glued at the end of the cantilever support. the frequencies and amplitudes of the ripple waves were chosen following the results of [27]. therefore, the sinusoidal functions of the rolling motion imposed on the buoy were chosen with the characteristics shown in table 2. the acquisition frequency of the signals was set at 2 khz and each test was repeated 5 times for every combination of working frequency fw and angular displacement θ.

table 1. geometric characteristics of the conversion system for the linear motion of the shaker into the imposed rolling motion of the buoy.

r1 in mm   r2 in mm   l in mm   d in mm
70         70         100       90

figure 2. a) image of the experimental setup and b) schema of the conversion system for the linear motion of the shaker into the imposed rolling motion of the buoy. 1 = stinger of the shaker, 2 = frame, 3 = body of the buoy, 4 = peh, 5 = proof mass, r1 = height of the hemispherical part of the buoy, r2 = height of the cylindrical part of the buoy, l = arm of the peh, d = diameter of the buoy and θ = angular displacement.

table 2. characteristics of the rolling motion imposed on the buoy.

                               case 1   case 2   case 3   case 4   case 5
working frequency fw in hz      1.50     1.75     2.00     2.25     2.50
angular displacement θ in °     5.0      5.3      5.9      6.6      7.0

2.4. simplified model of the mechanical behaviour of the peh

the mechanical behaviour of the analysed peh (i.e. a clamped-free beam with a proof mass) can be described by a single degree of freedom (sdof) model (figure 3), according to [30]-[32]. for a peh without a proof mass, the euler-bernoulli beam equation of motion for undamped free vibrations can be considered:

$EI \frac{\partial^4 w(x,t)}{\partial x^4} + m \frac{\partial^2 w(x,t)}{\partial t^2} = 0$, (1)

where m, e and i indicate the mass per unit length, the young's modulus and the moment of inertia of the peh, respectively. $w(x,t)$ is the absolute motion of the peh along its axis, expressed as:

$w(x,t) = w_{rel}(x,t) + w_b(x,t)$, (2)

where $w_{rel}(x,t)$ is the displacement relative to the clamped end and $w_b(x,t)$ is the absolute displacement of the buoy. by introducing a proof mass and according to equation (2), equation (1) takes the following form:

$EI \frac{\partial^4 w_{rel}(x,t)}{\partial x^4} + m \frac{\partial^2 w_{rel}(x,t)}{\partial t^2} = -\left[ m + \delta(x-l)\,M_t \right] \frac{\partial^2 w_b(x,t)}{\partial t^2}$, (3)

where $M_t$ denotes the proof mass, which produces the new contribution, and $\delta(x-l)$ localises it at the free end $x = l$. $w_b(x,t)$ consists of the composition of an orthogonal translation $g(t)$ and a rotation $h(t)$ of the peh clamped end:

$w_b(x,t) = \delta_1(x)\,g(t) + \delta_2(x)\,h(t)$. (4)

in our case (i.e. a clamped-free beam, figure 2b), $\delta_1(x) = 1$, $\delta_2(x) = r_2 + x$, $g(t) = r_2 \sin\theta(t)$ and $h(t) = \theta(t)$, so equation (4) becomes:

$w_b(x,t) = r_2 \sin\theta(t) + (r_2 + x)\,\theta(t)$. (5)

2.5. estimation of the harvested power by the peh

the power harvested by the peh was evaluated according to the model proposed by shu et al. [33], using equation (6):

$P = \frac{\pi}{\omega} \, \frac{V^2}{8R}$, (6)

where $V$ and $\omega$ are respectively the amplitude and the pulsation of the harvested voltage and $R$ is the resistive load wired to the harvester. considering the complex trend of the harvested voltage $V_{PEH}$ of the peh, $V$ was estimated as a specific average voltage, indicated with $\bar{V}$.
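as an illustration of the base-excitation expression in equation (5), the imposed motion of the peh clamped end can be evaluated numerically for the geometry of table 1 and one of the test cases of table 2. the following sketch is illustrative only (the function name and the choice of test case are not from the paper):

```python
import numpy as np

# geometry from table 1 (converted to metres) and test case 3 from table 2
R2 = 0.070                  # height of the cylindrical part of the buoy
L = 0.100                   # arm of the peh
F_W = 2.00                  # working frequency in hz
THETA0 = np.deg2rad(5.9)    # angular displacement amplitude in rad

def base_motion(x, t):
    """eq. (5): w_b(x, t) = r2*sin(theta(t)) + (r2 + x)*theta(t),
    for a sinusoidal imposed rolling motion theta(t) = theta0*sin(2*pi*f_w*t)."""
    theta = THETA0 * np.sin(2.0 * np.pi * F_W * t)
    return R2 * np.sin(theta) + (R2 + x) * theta

# peak base displacement at the free end (x = l), reached when theta = theta0
w_peak = base_motion(L, 1.0 / (4.0 * F_W))
```

for these assumed values the peak excursion at the free end is of the order of a couple of centimetres, consistent with the small size of the prototype.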
it is the voltage accumulated over the excitation period, given by the integral average of the rectified signal in equation (7):

$\bar{V} = \frac{1}{T} \int_0^T \left| V_{PEH}(t) \right| \mathrm{d}t$, (7)

where $T = 1/f_w$ is the excitation period and $f_w$ is the working frequency of the rolling motion imposed on the buoy. for discrete signals, equation (7) becomes:

$\bar{V} = \frac{1}{T} \sum_{t=0}^{T} \left| V_{PEH}(t) \right|$. (8)

hence, equation (6) takes the form of equation (9):

$\bar{P} = \frac{1}{16RT} \left( \sum_{t=0}^{T} \left| V_{PEH}(t) \right| \right)^2 = \frac{f_w}{16R} \left( \sum_{t=0}^{1/f_w} \left| V_{PEH}(t) \right| \right)^2$, (9)

where $\bar{P}$ is the specific power harvested by the peh. it is referred to here as specific power, since it is a complex function of the working frequency through its dependence on $\bar{V}$.

3. results

3.1. effects of the mechanical frequency

figure 4 shows the typical signals of the angular displacement θ of the buoy, measured on the fixed horizontal axis, and of the voltage vpeh harvested by the peh, following the imposed rolling motion. in this application, the mechanical operating conditions of the peh are quite different from those reported in the literature [24]-[27]. indeed, the piezoelectric component is mechanically stressed by a non-inertial force field in which the motion is alternating. the working frequencies fw of the presented peh are significantly lower than the typical ones of these devices, while the amplitude of the angular displacement θ does not allow the hypothesis of small oscillations. in figure 4, although a certain periodicity can be identified, the two acquired signals do not have the same dynamics. the angular displacement θ has an essentially sinusoidal behaviour, according to the imposed rolling motion, while the harvested voltage vpeh oscillates with a variable amplitude. figure 5 reports the magnitude of the discrete fourier transform (dft) of the signals of figure 4, computed using a resolution of 0.10 hz.
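the power estimation of equations (7)-(9) can be condensed into a short numerical routine. the sketch below is illustrative, not the authors' code: it uses the sample mean of the rectified signal as the discrete counterpart of the integral average in equation (7), and the equivalent form p̄ = t·v̄²/(16r) obtained from equation (6) with ω = 2π·fw:

```python
import numpy as np

def specific_quantities(v_peh, fs, f_w, r_load=100e3):
    """return (v_bar, p_bar) for one excitation period t = 1/f_w.

    v_bar: integral average of the rectified harvested voltage, eq. (7)/(8).
    p_bar: specific power, eq. (9), written as t * v_bar**2 / (16 * r_load).
    """
    t_period = 1.0 / f_w
    n = int(round(t_period * fs))          # samples in one period
    v_bar = np.mean(np.abs(v_peh[:n]))     # rectified average over the period
    p_bar = t_period * v_bar**2 / (16.0 * r_load)
    return v_bar, p_bar

# sanity check on a pure sine: the rectified average of v0*sin(...) is 2*v0/pi
fs, f_w, v0 = 2000.0, 2.0, 1.0
t = np.arange(0, 1.0, 1.0 / fs)
v = v0 * np.sin(2.0 * np.pi * f_w * t)
v_bar, p_bar = specific_quantities(v, fs, f_w)
```

the 2 khz sampling rate and the 100 kω load are taken from the experimental setup described above; the sinusoidal input is only a synthetic stand-in for the measured vpeh.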
the imposed rolling motion is characterised by a single frequency at 2.00 hz, whereas the harvested voltage vpeh has higher-order harmonic components, with maximum amplitude at 10.00 hz. this phenomenon occurs for all the conducted tests and denotes a multimodal response of the peh. it must be pointed out that, although the buoy was driven at the lower limit of the frequency range in which the employed shaker ensures a linear response (2 hz to 7000 hz), its operation remained optimal: no distortions are present, since the dft of the angular displacement θ, reported in figure 5, does not show significant higher-order harmonics. figure 6 illustrates the comparison between the magnitudes of the dfts of the harvested voltage vpeh obtained by varying the proof mass glued to the free end of the peh, for an angular displacement θ of the buoy of 6.6° amplitude at a working frequency of 2.00 hz. a different applied proof mass (i.e. a different resonant frequency of the peh [20], [21]) does not cause a change in the frequencies of the dft, but it acts on the amplitude of vpeh. in fact, a greater proof mass induces a consequent increase in the dft magnitude and a modification of the modal eigenvalues associated with the motion. moreover, in these cases, the accelerations due to the proof masses assumed values between 0.77 m/s² and 2.99 m/s², calculated considering the parameters from table 1 and table 2.

figure 3. scheme of the model for the peh.

figure 4. typical signal of the angular displacement θ of the buoy (top) and of the harvested voltage vpeh by the peh (bottom) at 2.00 hz.
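the dft analysis described above can be reproduced with a standard fft. a minimal sketch (not the authors' code) follows, using a 10 s record at the 2 khz acquisition rate, which gives the 0.10 hz resolution quoted in the text:

```python
import numpy as np

FS = 2000.0           # acquisition frequency in hz
DF = 0.10             # desired frequency resolution in hz
N = int(FS / DF)      # 20000 samples, i.e. a 10 s record

def dft_magnitude(signal, fs=FS):
    """single-sided dft magnitude, scaled so a sine of amplitude a
    appears as a peak of height a."""
    n = len(signal)
    mag = 2.0 * np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, mag

# synthetic stand-in for v_peh: a 2 hz component plus a dominant 10 hz harmonic
t = np.arange(N) / FS
v = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 1.2 * np.sin(2 * np.pi * 10.0 * t)
freqs, mag = dft_magnitude(v)
f_peak = freqs[np.argmax(mag)]
```

with a record length that is an integer multiple of both periods, the 10 hz component falls exactly on a frequency bin, so the peak height equals the component amplitude.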
figure 7 presents the effect of the frequency of the imposed rolling motion on the harvested voltage vpeh, with an angular displacement θ of 6.6° and a proof mass of 24.1 g. a variation in the frequency of the imposed rolling motion involves a shift of the frequency components, but not an alteration of the magnitude of the harvested voltage. specifically, the higher-order components are characterised by a greater frequency difference than the lower-order ones. figure 8 compares the frequency amplification factors of the buoy-peh system as the angular displacement θ and the working frequency fw of the imposed alternating rotational motion vary, using a proof mass of 24.1 g. the amplification factors were obtained as the ratio between the frequency of the principal component of the harvested voltage and the working frequency fw of the imposed rolling motion. it was found that these factors are not very sensitive to either the frequency or the amplitude of the imposed rolling motion.

3.2. power estimation of the peh

figure 9 shows the area that collects the values of the specific average voltages $\bar{V}$, calculated according to equation (8), as the working frequency fw varies. this area was obtained by changing the angular displacement θ of the buoy and the proof masses glued to the peh. the results are in good agreement with the literature [17]-[23]. in fact, it can be noted that, as the proof mass increases (i.e. as the resonant frequency of the peh decreases), a greater oscillation of the peh is obtained, with a consequent increment in the harvested voltage. a similar observation can be made by considering the increase in the working frequency fw of the imposed rolling motion with the same proof mass. figure 10 shows the trend of the specific power $\bar{P}$, harvested by the peh and calculated according to equation (9), as the working frequency fw and the amplitude of the angular displacement θ vary, using a fixed proof mass of 24.1 g.
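the amplification factor used in figure 8 can be computed directly from the acquired voltage signal. the sketch below picks the principal dft component of the harvested voltage and divides it by the working frequency; the function name and the synthetic input are illustrative, not from the paper:

```python
import numpy as np

def amplification_factor(v_peh, fs, f_w):
    """ratio between the frequency of the principal (largest-magnitude)
    dft component of the harvested voltage and the working frequency f_w
    of the imposed rolling motion."""
    n = len(v_peh)
    mag = np.abs(np.fft.rfft(v_peh))
    mag[0] = 0.0                          # ignore any dc offset
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(mag)] / f_w

# with a harvested voltage dominated by a 10 hz component and f_w = 2 hz,
# the factor is 5, in line with the frequency multiplication reported above
fs = 2000.0
t = np.arange(int(10 * fs)) / fs
v = 1.0 * np.sin(2 * np.pi * 10.0 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)
factor = amplification_factor(v, fs, 2.0)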
a relative maximum is obtained at the working frequency of 2.25 hz, and its amplitude rises as the angular displacement θ of the buoy increases. this result also conforms to the data already present in the literature [17]-[23]. in the optimal conditions of a working frequency of 2.25 hz, an angular displacement of 7.0° and a proof mass of 24.1 g, the energy harvested in 1 h for a specific power of 0.08 mw is 274.6 mj. figure 11 reports a comparison of the specific power $\bar{P}$ obtained with different proof masses at the same angular displacement θ.

figure 5. magnitude of the dft of the angular displacement θ (top) and of the harvested voltage vpeh (bottom) with a working frequency fw of 2.00 hz.

figure 6. comparison between the magnitudes of the dfts of the harvested voltage vpeh obtained by varying the proof mass, at an angular displacement θ of the buoy of 6.6° and a working frequency fw of 2.00 hz.

figure 7. comparison between the magnitudes of the dfts of the harvested voltage vpeh obtained by varying the imposed rolling motion, at an angular displacement θ of the buoy of 6.6° and using a proof mass of 24.1 g.

figure 8. comparison of the frequency magnification factors at different angular displacements θ of the imposed rolling motion, using a proof mass of 24.1 g.

figure 9. specific average voltages $\bar{V}$ of the peh obtained with respect to the working frequency fw.
consistent with what is found in figure 6, a different proof mass influences the harvested voltage $V_{PEH}$ and, consequently, the specific power $\bar{P}$, the latter being proportional to the square of the voltage, as visible in equation (9). as an example, figure 12 shows the specific power $\bar{P}$ evaluated for the acquisitions with a proof mass equal to 28.8 g. $\bar{P}$ assumes two relative maxima, corresponding to working frequencies fw equal to 2 hz and 2.5 hz. generally, harvesters using piezoelectric patches in a cantilever configuration are able to produce a power peak at their resonant frequencies. it has already been observed that the first mode of vibration, usually at low frequencies, is responsible for the greater energetic contribution, as it provides the components with the highest rate of deformation compared to the other modes of vibration [16]-[20].

3.3. uncertainty evaluation of the harvested power by the peh

the uncertainty of the specific power $\bar{P}$ of the peh was estimated by analysing the relative weight of each of the quantities in equation (9). the estimation of the combined uncertainty was based on the iso/iec guide 98-3:2008 [34], by applying the following propagation law:

$u(\bar{P}) = \sqrt{\sum_i u^2(x_i) \left( \frac{\partial \bar{P}}{\partial x_i} \right)^2}$. (10)

the previous expression is explicated as:

$u(\bar{P}) = \left[ u^2(f_w) \left( \frac{\partial \bar{P}}{\partial f_w} \right)^2 + u^2(R) \left( \frac{\partial \bar{P}}{\partial R} \right)^2 + u^2(\bar{V}) \left( \frac{\partial \bar{P}}{\partial \bar{V}} \right)^2 \right]^{0.5}$. (11)

the expanded uncertainty (uc) was then estimated by assuming a coverage factor (k) equal to 2.57, based on a t distribution with five degrees of freedom at a confidence level of 95 %. the detailed computation is shown in table 3. similar results were obtained for other acquisitions at different motion parameters. looking at the relative weight of the different quantities affecting the uncertainty, it can be highlighted that the main contribution derives from the specific average voltage $\bar{V}$, whereas the other quantities have a much lower influence. 4.
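the propagation law can be implemented with analytic sensitivity coefficients of the model function. the sketch below rewrites equation (9) as p̄ = v̄²/(16·r·fw), which follows from substituting the sum over one period with v̄/fw, and propagates the three standard uncertainties; the numeric inputs in the usage line are placeholders, not the table 3 entries:

```python
import math

def p_bar(v_bar, f_w, r_load):
    """specific power, eq. (9), rewritten as v_bar**2 / (16 * r_load * f_w)."""
    return v_bar**2 / (16.0 * r_load * f_w)

def u_p_bar(v_bar, f_w, r_load, u_v, u_f, u_r):
    """combined standard uncertainty of p_bar following the gum propagation
    law (iso/iec guide 98-3:2008), with analytic partial derivatives."""
    dp_dv = 2.0 * v_bar / (16.0 * r_load * f_w)
    dp_df = -v_bar**2 / (16.0 * r_load * f_w**2)
    dp_dr = -v_bar**2 / (16.0 * r_load**2 * f_w)
    return math.sqrt((u_v * dp_dv)**2 + (u_f * dp_df)**2 + (u_r * dp_dr)**2)

# illustrative numbers (assumed, not from table 3): the voltage term dominates
u_c = u_p_bar(0.32, 2.0, 100e3, u_v=0.03, u_f=1e-4, u_r=1.5e3)
u_exp = 2.57 * u_c      # expanded uncertainty, coverage factor k = 2.57
```

because the voltage enters squared while frequency and load enter linearly, the relative contribution of u(v̄) is roughly twice its relative size, which is why it dominates the budget.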
conclusions

for a measurement buoy, the power consumption mainly depends on the components that deal with data communication. in fact, data collection does not take place continuously over time. therefore, these devices do not need to be constantly active and can often be put into sleep mode and woken up at fixed time intervals or in response to some external event. typically, the power consumption for such routines is of the order of a few mw and can be ensured by rechargeable batteries and appropriate power electronics downstream of the harvesters [35].

figure 10. specific power $\bar{P}$, harvested by the peh, as a function of the working frequency fw and the angular displacement θ, with a proof mass of 24.1 g.

figure 11. specific power $\bar{P}$, harvested by the peh, as a function of the working frequency fw and the proof mass, at different angular displacements θ.

figure 12. specific power $\bar{P}$, harvested by the peh, for the tested angular displacements θ and working frequencies fw, with a proof mass of 28.8 g.

table 3. uncertainty evaluation at a working frequency fw of 2 hz, with an angular displacement θ of 6.6° and a proof mass of 24.1 g.

parameter   value    uncertainty type   u(xi)      u²·(∂p/∂xi)²   u(p)      uc(p) (k = 2.57)
v̄ / v      0.3193   a                  2.85e-2    4.80e-5        9.49e-5   2.44e-4
fw / hz     2.00     a                  5.57e-14   4.34e-9
r / ω       1e+6     b                  1.50e+3    8.88e-21
in this work, a simple and innovative layout for a small self-powered floating buoy, employed for environmental monitoring activities, has been presented. this device makes use, as its supply source, of a peh consisting of a pc set in a cantilever configuration and excited by the rolling motion induced by the ripple waves. the voltage and power response of the peh has shown a particularly advantageous behaviour, because the imposed motion excites the pc with a multi-frequency combination of vibration modes. a multiplication of the oscillation frequencies of the peh has been found as the working frequency varies. in fact, although the frequencies of the harvested voltage and power do not match the resonant frequency of the used peh, and therefore the best deformation of the pc is not obtained [16]-[21], the calculated specific power has relative maxima with respect to the input parameters (i.e. the frequency and amplitude of the ripple waves). in this setup configuration, in the optimal conditions of a working frequency of 2.25 hz, an angular displacement of 7.0° and a proof mass of 24.1 g, the peh reached a harvested energy in 1 h of 274.6 mj, for a specific harvested power equal to 0.08 mw. in this way, the described floating buoy acts as a frequency transformer. for these reasons, the analysed layout makes it possible to considerably increase the power generated by a single peh, compared to that obtainable in typical conditions [26], [27], especially if such harvesters are coupled to a suitable impedance-matching circuit [22], [23]. future work will aim to identify the optimal position of the pc on the beam for a more efficient conversion of the mechanical energy provided by the ripple waves, to evaluate the effect of the noise superimposed on the main motion of the buoy, to define the scalability of the buoy after an appropriate fluid-dynamic sizing, and finally to estimate the buoy performance in real conditions.

references

[1] t. p.
bean, n. greenwood, r. beckett, l. biermann, j. p. bignell, j. l. brant (+28 authors), a review of the tools used for marine monitoring in the uk: combining historic and contemporary methods with modeling and socioeconomics to fulfill legislative needs and scientific ambitions, frontiers in marine science 4 (2017) n. 263, pp. 1-29. doi: 10.3389/fmars.2017.00263 [2] h. kim, l. mokdad, j. ben-othman, designing uav surveillance frameworks for smart city and extensive ocean with differential perspectives, ieee communications magazine 56 (2018) pp. 98-104. doi: 10.1109/mcom.2018.1700444 [3] l. g. a. barboza, a. cózar, b. c. gimenez, t. l. barros, p. j. kershaw, l. guilhermino, macroplastics pollution in the marine environment, world seas: an environmental evaluation, academic press (2019) pp. 305-328. doi: 10.1016/b978-0-12-805052-1.00019-x [4] a. l. sobisevich, d. a. presnov, m. v. agafonov, l. e. sobisevich, new-generation autonomous geohydroacoustic ice buoy, seismic instruments 54 (2018) pp. 677-681. doi: 10.3103/s0747923918060117 [5] s. savoca, g. capillo, m. mancuso, c. faggio, g. panarello, r. crupi, m. bonsignore, l. d'urso, g. compagnini, f. neri, e. fazio, t. romeo, t. bottari, n. spanò, detection of artificial cellulose microfibers in boops boops from the northern coasts of sicily (central mediterranean), science of the total environment 691 (2019) pp. 455-465. doi: 10.1016/j.scitotenv.2019.07.148 [6] z. chenbing, w. xinpeng, l. xiyao, z. suoping, w. haitao, a small buoy for flux measurement in air-sea boundary layer, proc. of the 13th ieee international conference on electronic measurement & instruments, icemi 2017, 20-22 october 2017, yangzhou, china. doi: 10.1109/icemi.2017.8265999 [7] x. roset, e. trullos, c. artero-delgado, j. prat, j. del rio, i. massana, m. carbonell, g. barco de la torre, d. mihai toma, real-time seismic data from the bottom sea, sensors 18 (2018) n. 1132. doi: 10.3390/s18041132 [8] l. m. tender, s. a. gray, e. groveman, d. a. lowy, p.
kauffman, j. melhado, j. dobarro, the first demonstration of a microbial fuel cell as a viable power supply: powering a meteorological buoy, journal of power sources 179 (2008) pp. 571-575. doi: 10.1016/j.jpowsour.2007.12.123 [9] j. chen, y. li, x. zhang, y. ma, simulation and design of solar power system for ocean buoy, journal of physics: conference series, iop publishing, 1061 (2018) pp. 012018. doi: 10.1088/1742-6596/1061/1/012018 [10] c. alippi, r. camplani, c. galperti, m. roveri. a robust, adaptive, solar-powered wsn framework for aquatic environmental monitoring, sensors journal, ieee, 11 (2011) pp. 45-55. doi: 10.1109/jsen.2010.2051539 [11] c. albaladejo, f. soto, r. torres, p. sanchez, juan a. lopez, a low-cost sensor buoy system for monitoring shallow marine environments, sensors 12 (2012) pp. 9613-9634. doi: 10.3390/s120709613 [12] l. b. hormann, p. m. glatz, c. steger, r. weiss, a wireless sensor node for river monitoring using msp430® and energy harvesting, proc. of the 4th education and research conference, ederc 2010, 1-2 december 2010, nice, france, pp. 140-144. [13] j. f. manwell, j. g. mcgowan, a. l. rogers, wind energy explained: theory, design and application, john wiley & sons, 2010. [14] j. trevathan, r. johnstone, t. chiffings, i. atkinson, n. bergmann, w. read, s. theiss, t. myers, t. stevens, semat the next generation of inexpensive marine environmental monitoring and measurement systems, sensors 12 (2012) pp. 9711-9748. doi: 10.3390/s120709711 [15] j. falnes, a review of wave-energy extraction, marine structures 20 (2007) pp. 185–201. doi: 10.1016/j.marstruc.2007.09.001 [16] r. pelc, r. m. fujita, renewable energy from the ocean. marine policy 26 (2002) pp. 471-479. doi: 10.1016/s0308-597x(02)00045-3 [17] c. borzea, d. comeagă, a. stoicescu, c. nechifor, piezoelectric harvester performance analysis for vibrations harnessing. upb scientific bulletin, series c electrical engineering and computer science 81 (2019) pp. 237-248. [18] r. m. 
toyabur, m. salauddin, j. y. park, design and experiment of piezoelectric multimodal energy harvester for low frequency vibration, ceram. int. 43 (2017) pp. 675-681. doi: 10.1016/j.ceramint.2017.05.257 [19] e. l. pradeesh, s. udhayakumar, effect of placement of piezoelectric material and proof mass on the performance of piezoelectric energy harvester, mech. syst. signal process. 130 (2019) pp. 664-676. doi: 10.1016/j.ymssp.2019.05.044 [20] r. montanini, a. quattrocchi, experimental characterization of cantilever-type piezoelectric generator operating at resonance for vibration energy harvesting, aip conf. proc. 1740 (2016) n. 60003. doi: 10.1063/1.4952675 [21] a. quattrocchi, f. freni, r. montanini, power conversion efficiency of cantilever-type vibration energy harvesters based on piezoceramic films, ieee transactions on instrumentation and measurement 70 (2021) n. 1500109, pp. 1-9. doi: 10.1109/tim.2020.3026462 [22] s. de caro, r. montanini, s. panarello, a. quattrocchi, t. scimone, a. testa, a pzt-based energy harvester with working point optimization, proc.
of the 6th international conference on clean electrical power, iccep 2017, 27-29 june 2017, santa margherita ligure, italy, pp. 699-704. doi: 10.1109/iccep.2017.8004767 [23] a. quattrocchi, r. montanini, s. de caro, s. panarello, s. scimone, s. foti, a. testa, a new approach for impedance tracking of piezoelectric vibration energy harvesters based on a zeta converter, sensors 20 (2020) n. 5862. doi: 10.3390/s20205862 [24] n. wu, q. wang, x. xie, ocean wave energy harvesting with a piezoelectric coupled buoy structure, applied ocean research 50 (2015) pp. 110-118. doi: 10.1016/j.apor.2015.01.004 [25] s. f. nabavi, a. farshidianfar, a. afsharfard, novel piezoelectric-based ocean wave energy harvesting from offshore buoys, applied ocean research 76 (2018) pp. 174-18. doi: 10.1016/j.apor.2018.05.005 [26] d. alizzio, m. bonfanti, n. donato, c. faraci, g. m. grasso, f. lo savio, r. montanini, a. quattrocchi, design and performance evaluation of a "fixed-point" spar buoy equipped with a piezoelectric energy harvesting unit for floating near-shore applications, sensors 21 (2021) n. 1912. doi: 10.3390/s21051912 [27] d. alizzio, m. bonfanti, n. donato, c. faraci, g. m. grasso, f. lo savio, r. montanini, a. quattrocchi, design and verification of a "fixed-point" spar buoy scale model for a "lab on sea" unit, proc. of the 2020 imeko tc19 international workshop on metrology for the sea, imeko tc19, october 5-7, 2020, naples, italy, pp. 27-32. online [accessed 15 december 2021] https://www.imeko.org/publications/tc19-metrosea2020/imeko-tc19-metrosea-2020-11.pdf [28] a. quattrocchi, f. freni, r. montanini, self-heat generation of embedded piezoceramic patches used for fabrication of smart materials, sens. actuators a phys. 280 (2018) pp. 513-520. doi: 10.1016/j.sna.2018.08.022 [29] s. sternini, a. quattrocchi, r. montanini, a. pau, f. l. di scalea, a match coefficient approach for damage imaging in structural components by ultrasonic synthetic aperture focus, procedia eng.
199 (2017) pp. 1544-1549. doi: 10.1016/j.proeng.2017.09.503 [30] a. erturk, d. j. inman, piezoelectric energy harvesting, wiley, united states, 2011, isbn: 978-0-470-68254-8 [31] a. erturk, d. j. inman, on mechanical modeling of cantilevered piezoelectric vibration energy harvesters, j. intell. mater. syst. struct. 19 (2008). [32] a. amanci, f. giraud, c. giraud-audine, m. amberg, f. dawson, b. lemaire-semail, analysis of the energy harvesting performance of a piezoelectric bender outside its resonance, sens. actuators a: phys. 17 (2014) pp. 129-138. [33] y. c. shu, i. c. lien, efficiency of energy conversion for a piezoelectric power harvesting system, j. micromech. microeng. 16 (2006) n. 11, pp. 2429-2438. doi: 10.1088/0960-1317/16/11/026 [34] uncertainty of measurement - part 3: guide to the expression of uncertainty in measurement, document iso/iec guide 98-3:2008, 2008. [35] j. m. gilbert, f. balouchi, comparison of energy harvesting systems for wireless sensor networks, int. j. autom. comput. 5 (2008) pp. 334-347. doi: 10.1007/s11633-008-0334-2

acta imeko, september 2014, volume 3, number 3, 38-42, www.imeko.org

electrolytic conductivity as a quality indicator for bioethanol

steffen seitz 1, petra spitzer 1, hans d.
jensen 2 , elena orrù 3 , francesca durbiano 3   1  physikalisch‐technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany   2  danish fundamental metrology ltd., matematiktorvet 307 dk‐2800 kgs. lyngby, denmark  3  istituto nazionale di ricerca metrologica, strada delle cacce 91, 10135 turin, italy      section: research paper   keywords: bioethanol; electrolytic conductivity; measurement uncertainty  citation: steffen seitz, p. spitzer, f. durbiano, h. jensen , electrolytic conductivity as a quality indicator for bioethanol, acta imeko, vol. 3, no. 3, article 9,  september 2014, identifier: imeko‐acta‐03 (2014)‐03‐09  editor: paolo carbone, university of perugia   received june 12 th , 2013; in final form may 27 th , 2014; published september 2014  copyright: © 2014 imeko. this is an open‐access article distributed under the terms of the creative commons attribution 3.0 license, which permits  unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  funding: this work was supported by measurement science consultancy, the netherlands  corresponding author: steffen seitz, e‐mail: steffen.seitz@ptb.de    1. introduction  electrochemical characterisation of bioethanol is of interest in terms of the identification of impurities at trace levels to assess risk of corrosion and potential damage to engines. high measurement accuracy and a strict application of metrological principles in establishing traceability for these measurements is mandatory to achieve meaningful measurements. in particular, electrolytic conductivity is a quality indicator for bioethanol that is needed as an easy-to-use tool to assess the amount of impurities. substantial work is still required to underpin the traceability of this parameter in order to guarantee metrological comparability [1] of the results. 
comparability is a prerequisite for standardization of measurement procedures and essential for the reliability of measured material properties for engineering. moreover, an assessment of sensitivity, significance and uncertainty of these measurements is required. to establish comparability of measurement results, they must be traceable to an agreed common metrological reference, which, whenever possible, should be the international system of units (si). nowadays, the result of an electrolytic conductivity measurement at the application level is linked to the conductivity value of a reference solution. typically, the conductivity value of the reference solution is measured traceable to the si by national metrology institutes by means of a primary reference measurement procedure [2]. the value indicated by a conductivity measuring system is usually adjusted by a calibration measurement, such that the actually measured resistance rref is scaled by the so-called cell constant kcell to match the conductivity value κref of the reference solution:

κref = kcell / rref . (1)

cells whose cell constants are adjusted in this way are referred to as secondary cells, in contrast to primary cells, where the cell constant is determined by dimensional measurements [2]. the measured resistance is affected by the electric field distribution and the correlated spatial distribution of the current density within the measuring cell [3]. additionally, it is affected by electrode polarisation. both effects depend on the design of the cell, the kind of solution and its ion concentration.

abstract: we present results of the european metrology research project on the si traceability of electrolytic conductivity measurements in bioethanol. as a first step to this aim, secondary conductivity measurements have been performed to characterize reproducibility, stability, measurement uncertainty and the significance of the measurement results. the relative standard measurement uncertainty is of the order of 0.3 %, while inter-laboratory reproducibility is around 6.9 %. the measured conductivities of two samples from different sources show a relative difference of around 30 %. these results show that conductivity is an appropriate quality indicator for bioethanol. however, they also demonstrate that inter-laboratory reproducibility has to be improved, in particular with respect to si traceability.

consequently, conductivity cells of different design can provide different conductivity results for an equivalent sample, even if their cell constants are adjusted with the same reference solution. therefore, the comparability of conductivity measurement results becomes more questionable the further the properties of the solution under investigation deviate from those of the reference solution. it is practically not possible to provide a matrix-matched primary reference solution for every kind of solution. however, in any case the measurement uncertainty must consider the effect of matrix-mismatch. concerning bioethanol, reference solutions based on ethanol are inappropriate, mainly due to stability issues. aqueous kcl solutions are typically used for cell calibration [4]. it must be emphasised that the nominal conductivity value of the lowest stable aqueous kcl reference solution recommended by oiml is 140.83 ms m-1 [5], that of iupac is 140.82 ms m-1 [6] at 25 °c and that of astm solution d is 14.693 ms m-1 [7], while the conductivity of bioethanol is in the order of 0.1 to 0.2 ms m-1. hence, the common calibration procedure makes use of a reference solution that differs significantly from bioethanol both in the matrix and in the conductivity value. as a consequence, it must be investigated whether such solutions can nevertheless be used as reference solutions and to what extent the measurement uncertainty must be increased due to the matrix-mismatch.
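the calibration relation in equation (1) can be sketched numerically. the resistance value below is illustrative, back-computed from the cell constant and reference conductivity quoted later in the text; it is not a measured value from the paper:

```python
def cell_constant(kappa_ref, r_ref):
    # calibration: equation (1) rearranged to K_cell = kappa_ref * R_ref
    return kappa_ref * r_ref

def conductivity(k_cell, r_sol):
    # kappa = K_cell / R_sol for any later measurement with the same cell
    return k_cell / r_sol

# illustrative numbers: reference conductivity 133.0 uS/m, assumed resistance in ohm
k_cell = cell_constant(133.0e-6, 139_925.0)
print(round(k_cell, 2))   # prints 18.61 (1/m)
```

with the cell constant fixed by the calibration, `conductivity(k_cell, r_sol)` then converts every subsequently measured solution resistance into a conductivity value.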
in particular, comparison measurements in which cells of different design are used could give more insight into the effect of the matrix-mismatch. currently, there exist no conductivity measurements of bioethanol based on primary reference procedures that could be used as a basis for providing traceability of measurement results at the application level. therefore, a work package has been established within the european metrology research project eng09 [8, 9] that covers, among others, two main objectives related to the use of electrolytic conductivity as an important 'quality indicator' for bioethanol: (i) research into the measurement of electrolytic conductivity from the primary level to the application level in order to establish si traceability and (ii) provision of exemplary reference data for bioethanol. as a first step to establish traceability, the conductivities of two bioethanol samples from different origins, one from brazil and one from a german producer, were measured with a secondary conductivity measurement cell. the cell constant was determined after calibration with a glycerol-based kcl solution, whose conductivity was in the conductivity range of bioethanol. a method which has recently been investigated by the authors [10] has been used to determine the solution resistance from impedance spectroscopy measurements of the cell/solution system. this method has particularly been developed to minimize the effect of electrode polarisation on the derived solution resistance in the low-conductivity range. additionally, the measurement uncertainties, which particularly include contributions from stability and reproducibility, have been determined from the derived solution resistances. significant differences in the conductivity values of the two different bioethanol samples have been observed.

2. measurement procedure
conductivity measurements were performed with a two-electrode jones-type cell. a sketch of the setup is shown in figure 1.
the general design of the cell is similar to that described in [6], but it does not have a removable centre section. two round, flat electrodes (diameter 2 cm), made of blank platinum, are arranged opposite each other in a cylindrical body (inner diameter 2.2 cm) made of borosilicate glass. the distance between the electrodes is around 1 cm. two glass pipes are connected to the main cylinder to fill and empty the cell. the cell constant was determined with a glycerol-based kcl solution at 25 °c. the conductivity value κref of the reference solution has been determined with the primary conductivity measurement setup of ptb [2] to be (133.0 ± 0.17) µs m-1. the resulting cell constant is 18.61 m-1. if not mentioned otherwise, all stated uncertainties are standard uncertainties according to the "guide to the expression of uncertainty in measurement" (gum) [11]. the cell was placed in an air thermostat. the temperature of the solution was measured with a calibrated pt-100 temperature sensor connected to an mkt50 measurement bridge from anton paar. the sensor is coated with ptfe and was placed in one of the filling tubes to measure the temperature directly in the solution. after the cell was filled, it took about 60 to 90 minutes until stable temperature conditions were achieved. then the temperature variation was less than 2 mk around the mean temperature. two different kinds of bioethanol samples, one from brazil (produced from sugar cane) and one from germany (produced from sugar beet), were measured. 2 l of each sample have been homogenised and finally bottled into 250 ml borosilicate bottles under an argon atmosphere that had previously been bubbled through ethanol in a gas washing bottle.

figure 1. sketch of the measurement setup, using a two-electrode cell that is placed in a temperature-controlled air bath. the contact of the sample with ambient air is minimised by pumping it into the cell. argon is used to dry the cell after cleaning it with ultra pure water.

the measurements were performed according to the following steps:
1) the conductivity measurement cell was cleaned several times with ultra pure water and finally filled with ultra pure water. then, a bottle with the sample and the cell were put into the air bath at 25 °c for at least 12 hours before the measurement.
2) the cell was emptied and flooded with argon for about half an hour until it was dry. using a peristaltic pump and chemically inert norprene tubes, the sample was pumped into the cell until it was filled almost up to the rim of the filling tubes. finally, the inlets were sealed with tape. evaporation of ethanol within the cell was not completely prevented during this filling step. however, the surface of the solution that was exposed to air or argon was small and the filling time was less than 30 s. the corresponding measurement uncertainty has been considered in terms of measurement reproducibility.
3) an impedance spectrum between 20 hz and 500 khz, 5 steps per decade, was measured and the best frequency range (see below) was chosen for the measurement.
4) afterwards, impedance spectra were recorded together with the temperature for more than 2 h of measuring time. at the end the cell was emptied and cleaned several times with ultra pure water.

3. conductivity calculation
the determination of the resistance rsol of the solution between the electrodes is based on an analysis of impedance spectra of various low-conductivity solutions measured with different cell types. the basic concept has been developed within the imera-plus european metrology research program tp2jrp10 [10]. in brief, the determination of the solution resistance is based on the equivalent circuit shown in figure 2. the corresponding impedance spectrum can be separated into two regions.
the low-frequency part of the spectrum is dominated by electrode polarisation, which is represented by the cpe element and the polarisation resistance rp. the latter accounts for a residual charge transfer across the electrodes. in a complex plane plot this part of the spectrum is nearly a straight line, slightly curved due to the influence of rp. in the high-frequency part of the spectrum polarisation effects can be neglected and the complex plane plot in this part of the spectrum is a semicircle. for the cell used in this investigation, the effect of electrode polarisation on the spectrum can be neglected above 10 khz for high-resistive solutions like ethanol. in this region the equivalent circuit simplifies to the parallel combination of cg and rsol. we have chosen measurement frequencies between around 10 and 400 khz that result in fairly equidistant impedance values across the semicircle. at each given frequency the mean impedance was calculated from at least 15 measurements. these mean values were used for the semicircle fit. the solution resistance was derived from the corresponding radius r: rsol = 2r. this procedure has turned out to be more robust than calculating the solution resistance analytically from the impedances by assuming the parallel combination of rsol and cg; the latter approach usually shows a significant dependence of the resistance on frequency, resulting from small impedance measurement errors. figure 3 shows the impedances of a typical measurement of bioethanol and the corresponding semicircle fit in a complex plane plot. the average relative deviation of the measured data points from the fit is less than 0.1 %. the impedance measurements were performed with a high-precision commercial lcr meter (agilent 4284a). the conductivity value κsol(t) at the mean measurement temperature t, given in the unit °c, is calculated from rsol and the calibrated cell constant kcell in analogy to equation (1).
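the semicircle-fit idea can be sketched as follows. the component values are hypothetical, and the algebraic (kasa) circle fit below merely stands in for whatever fitting routine the authors actually used:

```python
import numpy as np

def parallel_rc_impedance(f, r_sol, c_g):
    # impedance of R_sol in parallel with C_g (high-frequency part of the circuit)
    w = 2 * np.pi * f
    return r_sol / (1 + 1j * w * r_sol * c_g)

def fit_semicircle(z):
    # algebraic (Kasa) circle fit: x^2 + y^2 + D x + E y + F = 0, solved by least squares
    x, y = z.real, z.imag
    a = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (d, e, f0), *_ = np.linalg.lstsq(a, b, rcond=None)
    return np.sqrt(d**2 / 4 + e**2 / 4 - f0)   # circle radius

freqs = np.geomspace(10e3, 400e3, 15)          # 10 to 400 kHz, 15 points
z = parallel_rc_impedance(freqs, r_sol=150e3, c_g=20e-12)
r = fit_semicircle(z)
print(round(2 * r))   # R_sol = 2r -> 150000
```

for a parallel rc circuit the impedance locus is exactly a circle of radius r_sol/2 centred on the real axis, which is why the fitted radius recovers the solution resistance as rsol = 2r.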
the impedances are typically not measured at the exact set temperature of 25 °c; the measurement temperature deviates by a few tens of mk. the conductivity value at the measurement temperature t is therefore linearly corrected to the value κsol(25 °c) at 25 °c using

κsol(25 °c) = κsol(t) / (1 + α (t − 25 °c)) . (2)

for bioethanol a linear relative temperature coefficient αbe = (2.0 ± 0.15) % °c-1 at 25 °c has been determined from conductivity measurements between 20 °c and 27 °c. the linear temperature coefficient αref of the reference solution is 5.09 % °c-1 at 25 °c. using equations (1) and (2), the final conductivity value κbe(25 °c) of a bioethanol sample has been calculated from the input variables:

κbe(25 °c) = κref(25 °c) (1 + αref (tref − 25 °c)) rref / ((1 + αbe (tbe − 25 °c)) rbe) . (3)

in equation (3) the index "ref" refers to the reference solution and the index "be" to bioethanol.

figure 3. impedances of a bioethanol sample in a complex plane plot (real(z) and imag(z) in kΩ). the dots are the measured impedances z, the solid line is a semicircle fit. the frequency range is from 10 to 400 khz.

figure 2. equivalent circuit used to model the cell/solution system to derive the solution resistance rsol. electrode polarisation is represented by the cpe element and the polarisation resistance rp. cg is the geometric capacitance of the electrodes.

4. results
the measured conductivity values of the two bioethanol samples are: brazil sample (108.37 ± 0.33) µs m-1, german sample (142.67 ± 0.43) µs m-1. the values are significantly larger than that of pure synthetic ethanol, which has a conductivity of a few µs m-1. additionally, the difference of the results is much larger than their uncertainties. consequently, conductivity measurements can well serve to characterise bioethanol samples.
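equations (2) and (3) can be collected into one small function. the resistances and temperatures below are illustrative values chosen to be consistent with the cell constant and conductivities quoted in the text, not measured data:

```python
def kappa_be_25(kappa_ref_25, alpha_ref, t_ref, r_ref, alpha_be, t_be, r_be):
    # equation (3): temperature-corrected calibration (index ref) applied to the
    # temperature-corrected bioethanol measurement (index be); alphas per degC
    num = kappa_ref_25 * (1.0 + alpha_ref * (t_ref - 25.0)) * r_ref
    den = (1.0 + alpha_be * (t_be - 25.0)) * r_be
    return num / den

# illustrative: both measurements a few tens of mK away from 25 degC
kappa = kappa_be_25(133.0e-6, 0.0509, 25.03, 139_925.0,
                    0.020, 24.98, 130_440.0)
print(round(kappa * 1e6, 1))   # conductivity in uS/m
```

with equal temperatures and equal resistances the function reduces to the reference conductivity, which is a quick sanity check on the sign conventions in the correction terms.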
there are no details available about residual ion concentrations or the production conditions of the samples, so it is difficult to explain the difference. using ion chromatography, we have performed a first analysis of one of the samples. the anionic ion chromatogram (not calibrated) showed a significant and predominant amount of chloride compared to a measurement of pure synthetic ethanol. therefore, the measured difference of the conductivity values could be the result of residual dissolved chloride salts. however, this assumption still needs to be verified quantitatively. figure 4 demonstrates the stability of the measurement results after temperature equilibrium has been reached. the error bars indicate the expanded measurement uncertainty (coverage factor 2). an unspecific, small drift can be seen; the reason for it is not clear. however, the drift within the measurement period is considered in the stated measurement uncertainty. table 1 identifies the main sources of uncertainty (first column) and their estimated standard uncertainties (second column), exemplarily for the bioethanol sample from germany. note that the resistance and temperature uncertainties comprise systematic and statistical contributions. inaccuracies of the measuring devices and, in the case of the resistances, of the method used to derive them entered into the systematic contributions. it should also be noted that the systematic uncertainties of the solution resistances have been calculated with a monte carlo method [12], since it is practically impossible to use the analytical gum framework to handle the complex-valued impedances and the fitting procedures involved in the resistance calculation. the statistical contributions reflect the measurement stability and were calculated from the standard deviation of the mean of the measured values. uncertainty propagation has then been calculated straightforwardly from equation (3) according to the general gum uncertainty framework [11].
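the statistical contributions just mentioned follow the usual standard-deviation-of-the-mean recipe; the sample values below are made up for illustration:

```python
import numpy as np

def std_of_mean(samples):
    # standard deviation of the mean: s / sqrt(n), with s the sample std (ddof=1)
    a = np.asarray(samples, dtype=float)
    return a.std(ddof=1) / np.sqrt(a.size)

# hypothetical repeated conductivity readings in uS/m
print(round(std_of_mean([142.5, 142.7, 142.6, 142.8, 142.65]), 3))
```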
the last column shows the relative uncertainty contributions of the input variables to the uncertainty of the conductivity value. the main contributions to the measurement uncertainty result from the conductivity of the reference solution and the repeatability of the measurement results. the latter has been determined from independent measurements of four samples that have been homogenised and afterwards bottled as described above. the observed variation of the values, with a relative standard deviation of 0.27 %, is probably due to the instability of the measurement shown in figure 4.

figure 4. stability of the conductivity measurement results of bioethanol from brazil (above) and germany (below). the error bars indicate the expanded (k = 2) uncertainty.

table 1. contributions to the combined measurement uncertainty of the conductivity value κbe, exemplarily for the bioethanol sample from germany. uxi(κbe) is the propagated uncertainty contribution of xi to the uncertainty of κbe.

the uncertainty calculation also accounted for correlations between the input quantities:
source of uncertainty of input quantity xi | u(xi) | uxi(κbe)/κbe (%)
conductivity of reference solution | 0.17 µs m-1 |
temperature of reference solution (systematic) | 10 mk | 0.051
temperature stability of reference solution | 0.4 mk | 0.002
temperature of bioethanol (systematic) | 10 mk | 0.020
temperature stability of bioethanol | 1.6 mk | 0.002
resistance of reference solution (systematic) | 124 Ω | 0.099
resistance stability of reference solution | 2.3 Ω | 0.002
resistance of bioethanol (systematic) | 149 Ω | 0.114
resistance stability of bioethanol | 2.9 Ω | 0.022
repeatability | 0.27 % | 0.27

(i) rref and rbe values with respect to the systematic uncertainty contributions, (ii) temperature measurement results of the calibration measurement and the bioethanol measurement with respect to the systematic uncertainty contributions, (iii) temperature values and temperature-corrected conductivity values (corresponding resistance values, respectively), which are measured at the same time. for (i) and (ii) a correlation coefficient of one has been assumed, since all the measurements have been performed with the same system, using the same evaluation method. any systematic measurement error in (i) or (ii) due to an offset is therefore nearly equal in the measurement of the reference solution and the solution under investigation. scaling effects have been neglected, since the measurement results are of similar magnitude. as a consequence, although the relative uncertainties of the measured resistances are comparable to that of the conductivity reference value and to that attributed to repeatability, they barely contribute to the combined uncertainty of the conductivity value.
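the effect of assuming a correlation coefficient of one between the two systematic resistance errors can be illustrated with a small monte carlo sketch (all numbers hypothetical): a common offset nearly cancels in the resistance ratio that enters equation (3), whereas uncorrelated errors do not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
r_ref, r_be = 139_925.0, 130_440.0   # illustrative resistances (ohm)
u_r = 149.0                          # systematic resistance uncertainty (ohm)

# fully correlated systematic error: the same offset in both measurements (rho = 1)
offset = rng.normal(0.0, u_r, n)
ratio_corr = (r_ref + offset) / (r_be + offset)

# uncorrelated errors for comparison
ratio_unc = (r_ref + rng.normal(0, u_r, n)) / (r_be + rng.normal(0, u_r, n))

rel = lambda x: x.std() / x.mean()
print(rel(ratio_corr) < rel(ratio_unc))   # correlated errors largely cancel
```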
for (iii) the correlation coefficient has been calculated statistically from temperature-corrected resistance values and the corresponding temperatures, in order to account for correlations that are not covered by the linear temperature correction. here the correlation coefficient is typically around -0.5 to -0.7. the described measurements have been performed at the physikalisch-technische bundesanstalt (ptb) and reflect the characteristics of the setup used there. in order to estimate inter-laboratory reproducibility, a conductivity comparison measurement of bioethanol was performed, including the german (ptb), the danish (danish fundamental metrology) and the italian (istituto nazionale di ricerca metrologica) metrology institutes. all participants used secondary cells for the measurements. two institutes calibrated the cells with glycerol-based kcl solutions, and one institute used a water-based kcl solution. the relative standard deviation of the results was 6.9 %, which is significantly larger than the individually reported standard measurement uncertainties (0.3 % to 1 %). this cannot be explained by inhomogeneity of the samples. it is more likely due to differences in sample handling, cell design, and measurement and data evaluation procedures. the comparison will be repeated with more institutes and a more detailed sample handling and measurement instruction. nevertheless, the result of the comparison gives an upper limit for the inter-laboratory reproducibility of conductivity measurements of bioethanol, even though it is, for the time being, rather large compared to typical conductivity measurements.

5. conclusion
the results indicate that conductivity measurements can well serve to measure differences in the composition of bioethanol samples. under laboratory conditions the combined relative standard uncertainty of such measurements is around 0.3 %.
this particularly includes contributions from the stability of the solution during the measurement and the repeatability of the measurement results (at a single institute). however, the measured conductivities have been related to the conductivity value of the glycerol-based kcl solution that was used to adjust the cell constant. the measured cell constant of a secondary cell depends on the matrix of the reference solution. therefore, the matrix-mismatch between bioethanol and the reference solution casts doubt on the comparability of the measured values if these are measured using a different cell type. this assessment is also supported by the relatively poor inter-laboratory reproducibility of 6.9 %. in other words, measurements of the same solutions using another cell type could provide different conductivity values, even if the cell constant is determined with the same reference solution. however, obtaining consistent, i.e. comparable, measurement results is a prerequisite for any standardisation work and for a reliable database for engineering. therefore, further work is needed to achieve this aim. the next steps will be to investigate conductivity measurements of bioethanol on the primary level and to perform further comparison measurements to investigate the effect of different designs of secondary cells on the measured values.

acknowledgement
the research leading to these results has received funding from the european union on the basis of decision no 912/2009/ec.

references
[1] international vocabulary of metrology, iso/iec, 2012.
[2] f. brinkmann, n. e. dam, e. deák, f. durbiano, e. ferrara, j. fükö, h. d. jensen, m. máriássy, r. h. shreiner, p. spitzer, u. sudmeier, m. surdu, l. vyskocil, primary methods for the measurement of electrolytic conductivity, accred. qual. assur. 8 (2003), pp. 346-353.
[3] s. l. schiefelbein, n. a. fried, k. g. rhoads, d. r. sadoway, a high-accuracy, calibration-free technique for measuring the electrical conductivity of liquids, rev. sci.
instrum. 69 (1998), pp. 3308-3313.
[4] j. barthel, f. feuerlein, r. neueder, r. wachter, calibration of conductance cells at various temperatures, journal of solution chemistry 9 (1980), pp. 209-219.
[5] standard solutions reproducing the conductivity of electrolytes, oiml, 1981.
[6] k. w. pratt, w. f. koch, y. c. wu, p. a. berezansky, molality-based primary standards of electrolytic conductivity, pure appl. chem. 73 (2001), pp. 1783-1793.
[7] standard test methods for the electrical conductivity and resistivity of water, astm international, 1995.
[8] european metrology research project. available: http://www.emrponline.eu/
[9] publishable jrp summary: emrp jrp eng09 (2009). available: http://www.euramet.org/fileadmin/docs/emrp/jrp/jrp_summaries_2009/eng09_publishable_jrp_summary.pdf
[10] publishable jrp summary: emrp t2 jrp10 tracebioactivity (2007). available: http://www.euramet.org/fileadmin/docs/emrp/jrp/imera-plus_jrps_2010-06-22/t2.j10_tracebioavtivity_publishable_summary_april10_v1.pdf
[11] guide to the expression of uncertainty in measurement, jcgm, 2008.
[12] guide to the expression of uncertainty in measurement, supplement 1: propagation of distributions using a monte carlo method, jcgm, 2008.
overview of the modified magnetoelastic method applicability
acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 167 – 176
acta imeko | www.imeko.org | september 2021 | volume 10 | number 3 | 167

overview of the modified magnetoelastic method applicability
tomáš klier1, tomáš míčka1, michal polák2, milan hedbávný3
1 pontex, bezová 1658, 147 14 prague 4, czech republic
2 faculty of civil engineering, czech technical university in prague, thákurova 7, 166 29 prague 6, czech republic
3 freyssinet cz, zápy 267, 250 01 brandýs nad labem, czech republic

section: research paper
keywords: tensile force; magnetoelastic method; prestressed strand; prestressed cable; sensor
citation: tomáš klier, tomáš míčka, michal polák, milan hedbávný, overview of the modified magnetoelastic method applicability, acta imeko, vol. 10, no. 3, article 23, september 2021, identifier: imeko-acta-10 (2021)-03-23
section editor: lorenzo ciani, university of florence, italy
received february 9, 2021; in final form july 30, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the ministry of industry and trade of the czech republic.
corresponding author: tomáš klier, e-mail: tkl@pontex.cz

1. introduction
five experimental techniques are usually applied in civil engineering practice for evaluation and verification of actual values of axial tensile forces in important structural elements on building constructions.
abstract: a requirement of axial force determination in important structural elements of a building or engineering structure during its construction or operational state is very frequent in technical practice. in civil engineering practice, five experimental techniques are usually used for evaluation of axial tensile forces in these elements. each of them has its advantages and disadvantages. one of these methods is the magnetoelastic method, which can be used, for example, on engineering structures for experimental determination of the axial forces in prestressed structural elements made of ferromagnetic materials, e.g., prestressed bars, wires and strands. the article presents general principles of the magnetoelastic method, the magnetoelastic sensor layout and current information and knowledge about the practical application of the new approach based on the magnetoelastic principle on prestressed concrete structures. subsequently, recent results of the experimental verification and the in-situ application of the method are described in the text. the described experimental approach is usable not only for newly built structures but in particular for existing ones. furthermore, in many cases in technical practice this approach is the only experimental method effectively usable for determination of the prestressing force on existing prestressed concrete structures.

if the total value of the tensile force in the investigated structural elements has to be determined, two of these methods (namely, the direct measurement of the force by a preinstalled load cell and the approach based on a strain measurement with strain gauges) can be applied only in an experiment in which the applied sensors were installed before the investigated structural elements were activated. compared to that, the other three methods (namely, the vibration frequency method [1], [2], the force determination in a flexible structural element based on the relation between the transverse force and the caused transverse displacement [3]-[5], and the magnetoelastic method [6]-[24]) can be used in newly started experiments on existing structures that have already been in service for some time. the basic advantage of these three methods is that the investigated structural elements remain activated all the time (namely, during the structure's service, the experiment preparation, the experiment realization and also after the experiment completion). however, the applicability of the method using the relation between the transverse force and the element transverse displacement is significantly limited in practice. it can be effectively applied only for flexible elements with a relatively small cross section, such as strands with a diameter smaller than approximately 25 mm, and with a relatively long free length, because the measurable transverse displacement considerably increases the observed tensile force in substantially short investigated elements, even by tens of percent in some special cases. similarly, the vibration frequency method is suitable for application on structural elements with a relatively long free vibrating length and a relatively low bending stiffness. if this method is used on a relatively short element, the uncertainty of the determined tensile force is significantly influenced by the element's bending stiffness and boundary conditions, especially if these are complicated and vague. in this case, the results obtained by the vibration frequency method can be improved not only by using the measured natural frequencies of the element but also by using measured mode shapes in the process of results evaluation [1], [2].
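as a rough illustration of the vibration frequency method, the taut-string approximation, which neglects exactly the bending stiffness and boundary-condition effects discussed above, relates the tension to the first natural frequency. the strand parameters below are illustrative, not taken from the paper:

```python
def tension_from_frequency(f1_hz, free_length_m, mass_per_metre):
    # taut string: f1 = sqrt(T / m) / (2 L)  =>  T = 4 m L^2 f1^2
    return 4.0 * mass_per_metre * free_length_m**2 * f1_hz**2

# illustrative: 10 m free length, 1.17 kg/m strand, 20 Hz fundamental frequency
print(round(tension_from_frequency(20.0, 10.0, 1.17)))   # tension in newtons -> 187200
```

for short elements with appreciable bending stiffness this simple formula is exactly what becomes unreliable, which is the limitation the text describes.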
however, the time necessary for the realization of such an experiment is significantly longer compared to a similar one in which only the natural frequencies of the elements are evaluated, especially if the tested elements are difficult to access for the installation of vibration sensors, as are, for example, cable stays on cable-stayed bridges. then the time required for the experiment can be more than ten times longer. further advantages and disadvantages of all five above-mentioned experimental techniques are discussed in more detail in reference [11]. the selection of the most suitable experimental method for a particular practical application depends on the specific element parameters and the specific conditions in which the experiment is going to be performed. in the authors' opinion, the modified magnetoelastic method is the only one that can be applied effectively and expediently for evaluation of the prestressing force on existing structures made from prestressed concrete, where the prestressed reinforcement is embedded inside the concrete.

2. related results in the literature
in civil engineering practice, the utilization of the magnetoelastic method for experimental evaluation of axial tensile forces in structural elements started about thirty-five years ago. the original inventors gradually developed the method theory, the magnetoelastic sensors (hereinafter the me sensors) and their practical utilization. they published the obtained knowledge regularly, see [12]-[21] for example. the me sensors and their appropriate equipment that have been standardly used in civil engineering practice in the recent past and at present [12]-[24] evaluate measured prestressing forces in a relatively simple way. these standard me sensors are composed of only two basic parts, the primary and the secondary coil. this is the minimal possible configuration of the me sensor, as described below.
the basic advantages of the application of the me sensor in its standard configuration are that the tensile force in the structural element is evaluated contactlessly, the observed element is not locally deformed and its anti-corrosive layer is not abraded, and the sensor body is robust, long-lasting and resistant to accidental mechanical damage. it is possible to evaluate the instantaneous magnitude of the tensile force with high accuracy. however, an important requirement for high accuracy of the obtained results is the sensitivity assessment of each particular standard me sensor, in the concrete conditions of its practical application, using an independent force sensor. in the case of prestressed reinforcement (namely a strand or a cable), the force sensor is used as a part of a hydraulic jack, and it is therefore necessary to install the me sensor before the activation of the observed prestressed reinforcement. an additional installation of the standard me sensor on already activated prestressed reinforcement is, of course, technologically possible. however, it is time-consuming, and the sensor sensitivity assessment cannot be realized in the concrete conditions of the magnetic surroundings of the location where the me sensor is installed. the modified magnetoelastic method, its physical principle, the fully equipped me sensor, an experiment on a real structure and its result evaluation are described in more detail in reference [9]. other supplementary information about the method, its practical applications and a removable me sensor can be found in references [6]-[8] and [10], [11].

3. description of the method
the method is based on an experimental estimation of the magnetic response of the tensile-stressed structural element to an external magnetic field. the magnetic field intensity h and the magnetic flux density b are among the basic physical quantities describing the magnetic field arrangement.
The relation between B and H, in the form of the so-called hysteresis loop, is determined by the kind of material exposed to the magnetic field, its properties and its current conditions (e.g., its tensile stress and temperature). For the purposes of the modified magnetoelastic method, differently arranged ME sensors are used depending on the specific experiment and its concrete conditions. A diagram of a fully equipped cylindrical ME sensor, adopted from reference [9], is shown in Figure 1. The fundamental components of this ME sensor variant are a controlled magnetic field source (for example, the primary coil drawn in Figure 1), a sensor of the magnetic field intensity H in the measured cross-section (the system of Hall sensors and/or the secondary coil 2), a sensor of the magnetic flux that is closely related to the magnetic induction B in the measured section of the strand (the secondary coil 1) and the ME sensor protection against magnetic influences from its surroundings (the steel shield). The function and principle of Hall sensors were explained in more detail in reference [10]. The fully featured ME sensor offers the greatest possibilities to increase accuracy and reduce uncertainties in evaluating the tensile force in the observed prestressed element. On the other hand, the fully equipped ME sensor is spatially larger, which may restrict its applicability in some practical cases, and it is also more complicated. This complexity results in a longer production time, higher requirements for its application in civil engineering practice, as well as higher requirements on the measuring system used and its equipment.

Figure 1. Diagram of a fully equipped ME sensor, published also in [9].

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 169
The standard ME sensors described above (see Chapter 2) represent the minimalist variant of the ME sensor, which consists of a primary coil and a secondary coil 1 only. The intensity of the magnetic field H is determined, in this case, indirectly from a completely different physical quantity, namely the electric current flowing through the primary coil. However, there is a risk that results obtained with this minimalist variant of the ME sensor can be affected: any change in the magnetic surroundings in the vicinity of the sensor (e.g., the removal of a massive steel falsework after concrete hardening) causes completely "silent" but substantial or even severe changes in the sensor parameters. The steel shielding of the sensor reduces the impact of this effect on the obtained results. The quality of the ME sensor shielding determines the level of this reduction; however, it is never one hundred percent. For example, the application of the minimum ME sensor configuration on a prestressed prefabricated concrete structure reinforced by steel fibres is completely unusable.

4. Experimental analysis of evaluating curves

For the purposes of the modified magnetoelastic method, it is appropriate and necessary to know the magnetic behaviour of the standard prestressed elements used both in the past and today. In November 2019, a laboratory experiment was carried out, concentrated on the systematic study of variations in the magnetic behaviour of two selected standard prestressed elements (namely, the patented wire P7, which was applied in the past, and the prestressed strand LP 15.7 with right-handed threading currently used by the Freyssinet company). The magnetic behaviour of both elements was investigated, especially in dependence on their immediate temperature and their level of mechanical stress. This experiment was realized in the experimental centre of the Klokner Institute (the research institute of the Czech Technical University in Prague).
Results of similar experiments realized for different standard prestressed elements are described in [10] (namely, for the patented wires P4.5, an unknown prestressed strand LP 15.7 with left-handed threading, and the prestressed bars 15/17 made by the Dywidag company) and in [8] (namely, for the full locked cable PV 150 and the Mukusol threadbar 15FS 0000). The ideal condition for the precise evaluation of a particular experiment realized on an existing prestressed concrete structure is when a specific evaluating curve is available. This specific curve can be obtained only by the magnetoelastic analysis of the specific prestressed element removed directly from the investigated structure. The specimen should be at least 1.2 m long, and its extraction is generally very difficult and laborious. Moreover, the load-bearing system of the observed structure is partially weakened. In addition, the realization and evaluation of the experimental analysis necessary for the determination of the evaluating curve is significantly time-consuming. Alternatively, it is possible to use the general evaluating curves available in the material library gradually compiled by the authors; some examples are shown in Figure 7. The results of the chemical composition and microscopic structure analyses of the material of the investigated prestressed reinforcement can be used for the selection of the appropriate general evaluating curve from the material library. A substantially shorter test specimen is needed for this purpose; usually, a single wire 10 cm long is sufficient, even if it is removed from a strand. The weakening of the structure is generally acceptable on this scale, and the material sample can be removed from the opening used for the installation of the ME sensor without additional partial damage to the structure.

4.1. The experiment realized on the previously used patent wire P7

The patent wire P7 used in the experiment described in this chapter was removed from the chamber of an existing prestressed concrete bridge in Prague, which was put into operation in 1974. In the course of the bridge inspection in 2019, fully equipped ME sensors were installed on two selected prestressed cables assembled from twelve patent wires P7 (see Figure 2). The opening created for the purpose of the ME sensor installation was subsequently filled with a special grout (see Figure 3). The installed ME sensors are intended for the long-term monitoring of the prestressing forces in the selected cables. The basic reason for the realization of the experiment described in this chapter was to determine the accurate parameters of the magnetic behaviour of the prestressed reinforcement used in the inspected bridge for a more precise evaluation of the results. The fully equipped laboratory ME sensor used in the course of the experiment is shown in Figure 4. During the experiment, the investigated patent wire P7 was placed in a climatic chamber (see Figure 5) and loaded in a steel tensile testing machine. The magnetic properties of the studied wire were investigated for two temperature levels of the wire surface, namely around 0 °C and around +25 °C. The studied patent wire P7 was loaded in five force steps for each temperature level according to its design resistance, namely 10 kN, 20 kN, 30 kN, 40 kN and 47 kN, which is roughly 20 %, 40 %, 60 %, 80 % and 100 % of the design strength.

Figure 2. The fully equipped ME sensor installed on the prestressed cable in the inspected bridge.

Figure 3. The fully equipped ME sensor installed on the prestressed cable; subsequent filling of the created opening with the special grout.
The temperature of the observed wire cross-section was evaluated as a linear interpolation between two measured temperature values: the first was observed on the element surface in the close vicinity of the ME sensor, and the second was the temperature of the air measured inside the ME sensor. The specific hysteresis loop was measured and evaluated ten times for each particular temperature level and force step. The hysteresis loop, in general, characterizes the relation between the magnetic flux density B and the magnetic field intensity H, and it changes its shape depending on the actual force magnitude in the investigated prestressed element and also on its temperature. However, it is neither effective nor necessary to evaluate the measured hysteresis loops over their whole range. The dimensionless parameter P is used to convert the complex measured shape of the hysteresis loop, which depends on the actual force magnitude, into one simple numeric value, and this parameter is evaluated as standard for the purposes of practical application of the modified magnetoelastic method. Some examples are published in [6]-[11]. A particular parameter P is described as a fraction. The numerator is the most important value for the evaluation of the parameter P, and it describes the level of the magnetic field intensity H at the main node point. The numerator value of the parameter P indicates the preference for the portion of the hysteresis loop close to the remanence (the intersection with the vertical axis of the B-H curve); on the contrary, its denominator value prefers the loop portion near the saturation. The more exact definition of the parameter P is an industrial secret. In the course of the analysis of the experiment results, several parameters P with different definitions were evaluated, for example, the parameters P 10/45, P 15/45 or P 20/45.
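The temperature interpolation described above can be sketched as follows. Note that the weighting between the two readings is a hypothetical parameter: the text only states that a linear interpolation between the surface and air temperatures is used, not how the two values are weighted.

```python
def cross_section_temperature(t_surface, t_air, weight=0.5):
    """Estimate the temperature of the observed wire cross-section as a
    linear interpolation between the temperature measured on the element
    surface near the ME sensor and the air temperature measured inside
    the sensor.  The 50/50 default weighting is an illustrative
    assumption, not a value given in the paper."""
    return (1.0 - weight) * t_surface + weight * t_air
```

Any weight in [0, 1] yields a value between the two readings, so the estimate degrades gracefully even if the true weighting differs.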
The dimensionless parameter P 15/45 was finally chosen as the most suitable one for the further analysis of the particular examined patent wire P7, and eventually for other similar ones, because of its maximal sensitivity to the prestressing force and its minimal disturbance by negative influences. The resultant regression fitting curve was calculated for the investigated patent wire P7 and the chosen resultant parameter P 15/45 using polynomial regression. The temperature effect on the parameter P was considered linear, and the force effect was considered a 3rd-degree polynomial. The calculated curve is one of several drawn in Figure 7, where it is labelled "D7". The differences between the theoretical fitting curve and the input experimental results are small: the maximal deviation between them is 1.6 % of the design strength of the investigated patent wire, and the standard deviation of all particular results is 0.6 % of the design strength.

4.2. The experiment realized on the currently used prestressed strand LP 15.7

The prestressed strand LP 15.7, which was the subject of the experiment described in this chapter, is used as standard by the Freyssinet company at the present time. The arrangement of the experiment, its procedure and the evaluation of its results were similar to those described in the previous chapter. In the course of the experiment, the investigated prestressed strand was again placed in the climatic chamber (see Figure 5) and loaded in the steel tensile testing machine.

Figure 4. The fully equipped laboratory ME sensor intended for the experiment on the patent wire P7: the assembled sensor inside the steel shield (above) and a view of its disassembled basic parts (below).

Figure 5. The exterior view of the climatic chamber and the steel tensile testing machine (on the left) and the laboratory ME sensor installed on the prestressed strand LP 15.7 inside the chamber (on the right).
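The regression model described for the wire P7 (linear in temperature, cubic in force) can be sketched as an ordinary least-squares fit. The basis below is an assumption consistent with that description, not the authors' exact formulation, and the definition of the parameter P itself remains proprietary.

```python
import numpy as np

def fit_evaluating_curve(forces, temps, p_values):
    """Least-squares fit of P(F, T) = a0 + a1*F + a2*F**2 + a3*F**3 + b*T:
    a 3rd-degree polynomial in the force F and a linear term in the
    temperature T, as stated in the text for the patent wire P7."""
    F = np.asarray(forces, dtype=float)
    T = np.asarray(temps, dtype=float)
    # Design matrix: constant, F, F^2, F^3 and the linear temperature term.
    A = np.column_stack([np.ones_like(F), F, F**2, F**3, T])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(p_values, dtype=float), rcond=None)
    return coeffs  # [a0, a1, a2, a3, b]

def eval_curve(coeffs, force, temp):
    """Evaluate the fitted surface at a given force and temperature."""
    a0, a1, a2, a3, b = coeffs
    return a0 + a1 * force + a2 * force**2 + a3 * force**3 + b * temp
```

With the five force steps and two temperature levels reported for the wire P7, the ten measurement points comfortably determine the five coefficients.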
The magnetic properties of the studied strand were investigated for three temperature levels of the strand surface, namely around 0 °C, +20 °C and +35 °C. The strand was loaded in five force steps for each temperature level, namely 40 kN, 80 kN, 120 kN, 160 kN and 200 kN, which is roughly 20 %, 40 %, 60 %, 80 % and 100 % of the strand design resistance. The temperature of the observed strand was evaluated in the same way as for the patent wire P7 in Chapter 4.1. The specific hysteresis loop was again measured and determined multiple times for each particular temperature level and force step, as can be seen, for example, from Figure 6. For the observed strand LP 15.7, an example of the evaluated dependence of the chosen resultant parameter P 15/60 on the strand temperature (at a constant prestressing force of 120 kN) is shown in Figure 6. The resultant regression fitting curves for the several chosen parameters P were also calculated using the same methods of mathematical analysis and statistics as in Chapter 4.1. The differences between the theoretical fitting curves and the input experimental results are even smaller than for the wire P7: the maximal deviation between them is 1.4 % of the design resistance of the observed prestressed strand, and the standard deviation of all particular results is 0.2 % of the design strength. An example of the calculated regression fitting curve expressing the dependence of the particular chosen parameter P 15/45 on the stress in the observed strand at a strand temperature of 20 °C is shown in Figure 7, where the relevant curve is labelled "L 2019".

5. Practical application of the method on different prestressed elements

In this chapter, a brief summary of typical utilizations of the modified magnetoelastic method is given, based on some practical applications.

5.1. The experiment realized on the prestressed cables composed of twelve strands on an existing bridge

The actual tensile forces in the prestressed cables of an existing prestressed concrete bridge were investigated during this experiment. In general, the specific position for the installation of the ME sensor on the load-bearing structure of an existing bridge made from prestressed concrete is usually chosen based on three basic considerations. Firstly, the created opening in the concrete (see Figure 2, Figure 3 or Figure 8, for example) must not substantially weaken the load-bearing structure of the bridge. Secondly, substantial degradation of the concrete or of some prestressed reinforcement is suspected in the selected location, for example where water with chlorides penetrates into the structure. Thirdly, the position is chosen based on the interest of the structural designer. The investigated bridge is about thirty years old, and its prestressing system is composed of longitudinal prestressing cables. The typical prestressed cable on this bridge is a post-tensioned one composed of twelve strands LP 15.7. It is led through the web of the box girder made from monolithic concrete inside a thin-walled steel duct, and it was injected with cement mortar after its tensioning.

Figure 6. The prestressed strand LP 15.7: the relation between the temperature of the observed strand and the chosen resultant dimensionless parameter P 15/60 at the specific force step (120 kN).

Figure 7. The comparison of the relations between the stress in the six observed prestressed elements (three patent wires P4.5, one patent wire P7 and two strands LP 15.7 / 1860 MPa) and the chosen resultant dimensionless parameter P 15/45 at a prestressed element temperature of 20 °C.

Figure 8. The post-installed ME sensor wound on the prestressed cable composed of twelve strands LP 15.7.
The separate parts of one prestressed cable were interconnected by couplers in the working joints. In the course of the experiment, six fully equipped ME sensors were additionally installed on selected existing prestressed cables; that is, all the sensor constructions included, among others, two Hall sensors and both secondary coils no. 1 and no. 2. In general, the individual observed cross-sections of the investigated prestressed cables are unique, at least due to the variable arrangement of the strands in the prestressed cable (see Figure 9, for example). Therefore, the methodology developed for the application of the modified magnetoelastic method to existing prestressed concrete structures, described in more detail in reference [9], had to be used during the evaluation of the results. In brief, the evaluation of the prestressing force in the cable by this methodology consists of four main stages. The first is the measurement, as accurately as possible, of the real geometric arrangement of the strands in the observed cross-section of the prestressed cable. The second stage is the theoretical modelling of the measured geometric shape of the cable in 3D software for electromagnetic field analysis (see Figure 9). The third is the in situ observation of the electromagnetic behaviour of the ME sensor installed on the cable (see Figure 8). The fourth stage is the application of the available calibration curve obtained in a laboratory test realized for the strand type from which the analysed cable is assembled. A short study of the result uncertainties caused by the partially variable mutual position of the sensor body and the cable was carried out during the described experiment. The prestressing forces in the observed cables were determined repeatedly three times by the modified magnetoelastic method, and they were subsequently compared with each other. For example, the mutual geometric arrangements of the ME sensors and the investigated prestressed cables were changed repeatedly.
Each individual ME sensor was partially displaced several times in the plane of the observed cable cross-section, in different directions perpendicular to the longitudinal axis of the cable, within the range of the clearance gap between the sensor body and the cable surface. Subsequently, the measurement of the prestressing force was realized for each adjusted mutual position of the sensor body and the cable. The above-mentioned analysis of the obtained results provided information about the partial uncertainties of the evaluated tensile forces that are related to the geometric arrangement between the sensor body and the cable and to the numerical modelling of the problem (see Figure 9). The variance of the resulting values for each observed cable did not exceed 1 % of the prestressing force magnitudes. The tensile forces evaluated by the modified magnetoelastic method in the individual observed cable locations were also compared with each other. The force magnitude in some investigated cables was only about 50 % of the prestressing forces evaluated in the others. These cables were subsequently examined in detail: their significant weakening by corrosion was detected near their measured cross-section, and the corrosion process was distinctly uneven in these cases. The authors tried to reduce the uncertainties of the experiments as much as possible. The main sources of the uncertainties are as follows: the uncertainty related to the material of the investigated prestressed reinforcement, the uncertainty connected with the FEM numerical modelling, the uncertainty caused by the actual temperature of the reinforcement at the time of the experiment and the uncertainties of the Hall sensors. The chemical compositions and the microscopic structures of the materials of the prestressed reinforcements are more or less different.
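The repeatability check described above (three repeated force determinations per cable, with a spread not exceeding 1 % of the force magnitude) can be expressed as a simple relative-range calculation. The function names and the use of the range rather than a standard deviation are illustrative assumptions, since the paper does not state how the "variance of the resulting values" was computed.

```python
def relative_spread_percent(forces_kn):
    """Spread of repeated prestressing-force readings on one cable,
    expressed as a percentage of their mean value."""
    mean = sum(forces_kn) / len(forces_kn)
    return 100.0 * (max(forces_kn) - min(forces_kn)) / mean

def passes_repeatability(forces_kn, limit_percent=1.0):
    """True if the spread of the repeated readings stays below the
    stated limit (1 % of the force magnitude in the experiment)."""
    return relative_spread_percent(forces_kn) < limit_percent
```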
If possible, a reference sample of the investigated prestressed reinforcement should be taken from the observed existing structure and then analysed; this is the ideal variant of the experiment. The application of more Hall sensors at different positions in the investigated cross-section of the prestressed cable should be used for a more accurate verification of the numerical model, based on a multiple comparison of the experimental and theoretical results. The evaluating curves (see Figure 7, for example) for the particular material of a prestressed reinforcement, which characterize the temperature effect on the analysed results, should be determined using calibrated force and temperature sensors and devices. The sensitivity of the Hall sensors applied in the experiments should be verified using a calibrated source of a reference magnetic field. As can be seen from Figure 8, the installation of the ME sensors was associated with the laborious and precise local demolition of the concrete layer in the locations with the cables selected for the experiment. An additional benefit of this experiment is the fact that the openings created for the ME sensor installation can be used for a proper inspection of the prestressed reinforcements and their potential corrosion. Because the original post-injection grouting of the steel ducts with the cables was not performed well, there is no redistribution of the prestress losses of the cable tensile stress to other strands or cables, and the ME sensor can therefore also identify a weakening of the cable cross-section by corrosion located outside the investigated place and the created opening.

5.2. The experiments realized on the prestressed bars

The experiments described in this chapter were carried out on samples of the currently used prestressed threadbar of type Mukusol 15FS made by the Dywidag company. The bar diameter is 15 mm and the thread diameter is 17 mm; its characteristic strength is 960 MPa and the appropriate characteristic breaking load is 170 kN.

Figure 9. The important output of the numerical model necessary for the correct interpretation of the experimental data (the numerical model of the observed cable cross-section with the precisely located positions of the twelve strands LP 15.7 and the two used Hall sensors).

In past years, the authors realized similar experiments focused on the threadbars [11]. However, simpler constructions of the ME sensors with only one secondary coil were used during those experiments, and it was stated in the conclusion of the paper [11] that the supervision of the prestressed bars using the modified magnetoelastic method did not seem appropriate. The supposed advantages of the newly used ME sensor arrangement with both secondary coils were verified on the threadbars in the course of the recent experiments [8]. The secondary coils were applied as the detector of the magnetic field intensity H instead of the originally used Hall sensors, which failed in the conditions of the swirling magnetic field caused by the threads of the prestressed bar [11]. The differential signal evaluated from the data observed by the pair of secondary coils is used for the evaluation of the same physical quantity that is also measured by the Hall sensors. However, the magnetic field intensity H is then not observed only locally, as in the case of the Hall sensors; instead, the differential signal corresponds to the value of H averaged over the volume of the annulus between the two secondary coils. Both secondary coils used in the applied ME sensor were made with the same length as the screw thread pitch of the investigated threadbar, in order to reduce the negative influence of the threads and to eliminate all abnormalities of the magnetic field in their neighbourhood. See reference [8] for more details.
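A minimal sketch of the differential-coil principle just described, under two simplifying assumptions that are not stated in the paper: that the gap between the two concentric secondary coils is air (where B = μ0·H), and that the annulus can be treated via its cross-sectional area rather than its full volume. The authors' actual evaluation procedure is not published in this detail.

```python
import math

MU_0 = 4.0e-7 * math.pi  # permeability of vacuum [H/m]

def averaged_field_intensity(flux_outer, flux_inner, r_outer, r_inner):
    """Estimate the magnetic field intensity H (A/m) averaged over the
    air annulus between two concentric secondary coils from the
    difference of the magnetic fluxes they link (in webers): the
    differential flux divided by mu0 and the annulus area."""
    annulus_area = math.pi * (r_outer**2 - r_inner**2)
    return (flux_outer - flux_inner) / (MU_0 * annulus_area)
```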
The phenomenon of an inverse sensitivity of the threadbar material to the applied axial load was discovered during the analysis of the experimental data. This phenomenon is probably caused by the disturbing effects of eddy currents, whose influence is promoted by the compact massive cross-section of the bar in comparison with prestressed wires or strands; in terms of its magnetic behaviour, the strand is close to a system of prestressed wires, not to the compact threadbar. More details of the phenomenon are described in reference [8]. The maximal difference between the actual force values, measured by the standard force sensor used during the experiment, and the forces determined using the evaluation curve applied to the experimental data measured by the ME sensor was about 2.0 % of the characteristic strength of the investigated threadbar. Based on the authors' new experience with the application of the ME sensor containing both secondary coils, it can be stated that the modified ME method enables the determination of the actual force value in standard prestressed threadbars. However, during the design of an experiment, it is necessary to take into account the lower sensitivity of the modified magnetoelastic method for prestressed threadbars compared, for example, to strands. This lower sensitivity is probably related to the lower strength of the material used for the production of the threadbars.

5.3. The experiment realized on the standard full locked cable PV 150

The experiment described in this part was realized on the currently used standard full locked cable PV 150 made by the firm Pfeifer Seil- und Hebetechnik GmbH. The arrangement of the experiment and its results are described in more detail in reference [8]. The basic objective of the experiment was to verify whether the modified ME method can be used for the determination of the axial tensile forces in the standardly produced full locked cables of type PV.
The core of this cable type is composed of several layers of circular cross-section wires, and the locked outer layers are formed from Z-shaped wires. The outer diameter of the investigated cable PV 150 is 40 mm, its characteristic strength is 1435 MPa and the appropriate characteristic breaking load is 1520 kN. The magnetic behaviour of the investigated cable PV 150 was observed by the fully equipped ME sensor (see Figure 1). Its construction involved the following parts: the sensor body made by a 3D printer (see Figure 10); the primary coil used as the controlled magnetic field source; the secondary coil 1 (see Figure 10) applied as a sensor of the magnetic flux that is closely related to the magnetic induction B in the measured cable cross-section; the system of two Hall sensors (see Figure 10) and the secondary coil 2 used as two independent sensors of the magnetic field intensity H in the measured cable cross-section; and the steel shield of the ME sensor applied as a protection against magnetic influences from the surroundings. In the course of the experiment, the cable was anchored in a stand and loaded twice in nine force steps by the tensile force produced by a calibrated tensioning hydraulic jack. The experiment was primarily focused on the verification of the usability of the modified magnetoelastic method for this type of structural element, and it was not intended as a full experimental analysis of the evaluating curves because it was realized for only one temperature of the cable. However, several various evaluation procedures were examined based on the measured data. The obtained experimental results show that the modified magnetoelastic method can be applied meaningfully for the experimental determination of the tensile force in the standard full locked cables of type PV. The magnetic behaviour of these cables is analogous to that of the standard prestressed strands of type LP.

6. Alternative arrangement of the ME sensor

In the course of the research project, the design of a prototype of a removable ME sensor was also developed. Each of its parts can be simply mounted on the observed structural element and then easily removed, and no coil has to be produced in situ, so the time-consuming winding of the coils is not required at the experiment location. The basic part of the removable ME sensor is the primary coil wound on a steel horseshoe-shaped core (see Figure 12), which is used to develop a sufficiently intense variable electromagnetic field. The parameters of the magnetic field are measured using two secondary coils and two Hall sensors. The secondary coils are wound around the orange frames made by a 3D printer and placed on both ends of the steel core (see Figure 12). They measure the magnetic field properties in the direction perpendicular to the element axis; more specifically, they observe the changes of the magnetic flux in the magnetic circuit, which correlate with the magnetic induction B. The two Hall sensors are located inside the special black holder (see Figure 12), also made by the 3D printer, which is mounted on the studied structural element in the plane of symmetry of the removable ME sensor. This means that both Hall sensors are situated in the central plane of the sensor perpendicular to the investigated element, and they measure the properties of the magnetic field in the direction parallel to the element axis; more specifically, they observe the magnetic field intensity H in the investigated cross-section of the element located in the plane of sensor symmetry.

Figure 10. The production of the ME sensor on the cable PV 150: the process of winding the secondary coil 1 and the placement of the Hall sensor.
As part of the development of the sensor prototype, the possibility of observing the magnetic flux was evaluated. The flux observations were made using two measuring coils positioned at both ends of the steel horseshoe-shaped core of the electromagnet and connected in series in the measuring circuit. The mutual relation between the time courses of the measured signals from the secondary coils and the Hall sensors was investigated and assessed. The subject of interest was the question of whether the magnetization of the compact massive core of the primary coil, on which both secondary coils are threaded, influences the measurement of the hysteresis behaviour of the investigated prestressed element. Based on the obtained experimental data, it can be stated that the hysteresis is affected by the compact core of the primary coil. However, this phenomenon occurs only during the rapid increase of the magnetic flux caused by the rapid increase of the excitation electric pulse. It always disappears completely during the descending branch of the excitation pulse, because the process is then almost quasi-static from the viewpoint of the dynamics of the magnetic field development. The magnetic behaviour of the assessed arrangement of the experiment, i.e. the system consisting of the removable ME sensor and the investigated structural element, is purely linear in the zone of the descending branch of the excitation pulse, which is the part actually used for the analysis of the experimental data measured by the ME sensor. This behaviour was also verified with the removable ME sensor mounted on a non-ferromagnetic dummy of the investigated element. The obtained results of the experiment, described in more detail in reference [7], confirmed the functionality of the designed arrangement of the removable ME sensor.
however, the precision of the gained results is lower than for the cylindrical me sensor, that is described in the chapter 3, due to the less accurate definition of the magnetic field in the observed cross section compared to the arrangement of the cylindrical me sensor. the removable me sensor can be used for a relatively quick preparative experimental analysis intended for the evaluation of the total value of the prestressed force in the structural elements activated before the start of the experiment. the intended purpose of the practical application of the removable me sensor is a quick selection of prestressed elements that are the most important for further detailed analysis by using the cylindrical fully equipped me sensors. the comparison of the basic advantages and disadvantages between the standard cylindrical fully equipped me sensor whose general scheme is shown in figure 1 and the removable me sensor whose general diagram is drawn in figure 11 could be summarized in following notes. the advantages of the standard cylindrical me sensor are that a relatively high measurement accuracy could be achieved and that the dimension of the cross-section of the observed prestressed reinforcement is not restricted practically. its basic disadvantage is its relatively time-consuming production. the fundamental advantage of the removable me sensor is that the time necessary for preparation and realization of the experiment is substantially shorter. its disadvantages are higher uncertainties of the evaluated prestressed forces and a considerable limitation of the dimension of the observed reinforcement cross-section. figure 11. the arrangement of the removable me sensor figure 12. the application of the removable me sensor on the strand lp 15.7 in the laboratory conditions acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 175 7. 
Results and discussion

So far, as part of the development of the modified magnetoelastic method, experiments focused on the evaluating curves have been realised for four specimens of patent wires, three types of prestressed bars and two prestressed strands Lp 15.7 / 1860 MPa from different producers. The results obtained for six selected prestressed elements (namely three patent wires P4.5, one patent wire P7 and two prestressed strands Lp 15.7) were compared with each other in detail. The resultant regression fitting curves for the particular elements at an element temperature of 20 °C are drawn in Figure 7. The analysis of the results shows that the standard deviations of all measured data from the specific calculated evaluating curves are usually significantly lower than 1.0 % of the design strength. The comparison of the corresponding evaluating curves for two different samples of the prestressed strand Lp 15.7 / 1860 MPa made by different producers is shown in Figure 7 (specimens "L 2019" and "L 2016"). The producer of the strand "L 2016" is not known; the strand "L 2019" was supplied by Freyssinet CZ. The difference between the curves is roughly 5 % of the design strength of the strands, which means that the evaluating curves of the particular samples of the strand Lp 15.7 deviate by roughly 2.5 % from the average evaluating curve of this type of prestressed strand. Very similar results were obtained from the comparison of the corresponding evaluating curves for three different test samples of the patent wire P4.5 that were taken during the demolition of three different existing bridges built in the 1970s and 1980s in Czechoslovakia. For example, the three curves "D4 spec. 1", "D4 spec. 2" and "D4 spec. 3" for a wire temperature of 20 °C are drawn in Figure 7.
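The regression fitting just described can be sketched as follows. This is a minimal illustration with entirely synthetic numbers (the design strength, sensor readings and forces below are invented, not measured data from the study): a polynomial evaluating curve is fitted by least squares and the residual scatter is expressed as a percentage of the design strength, mirroring the "< 1.0 % of the design strength" criterion quoted above.

```python
import numpy as np

# Synthetic calibration points: ME sensor reading vs. applied force (kN).
# All values are made up for illustration; a small noise term stands in for
# measurement scatter.
design_strength = 265.0                                  # assumed, kN

reading = np.array([1.00, 1.07, 1.14, 1.21, 1.28, 1.35])
force = (800.0 * (reading - 1.0) + 200.0 * (reading - 1.0) ** 2
         + np.array([0.5, -0.8, 0.3, -0.4, 0.6, -0.2]))  # kN

coeffs = np.polyfit(reading, force, deg=2)               # quadratic evaluating curve
residuals = force - np.polyval(coeffs, reading)
scatter_pct = 100.0 * residuals.std() / design_strength  # scatter as % of design strength
```

With scatter of this magnitude the fitted curve would satisfy the 1.0 % criterion; for real data the polynomial degree would be chosen to suit the measured calibration points.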
The evaluating curves of the particular test samples of the patent wire P4.5 deviate by roughly 2.5 % from the average evaluating curve of this type of patent wire. In contrast to the previously mentioned results, a significant difference of roughly more than 20 % of the design strength was found between the evaluating curves for the patent wires P7 and P4.5, as can be seen in Figure 7. Additional analyses indicated that the chemical compositions of the materials from which the studied wires P4.5 and P7 were made are almost identical. Nevertheless, the microscopic steel structure of the wire P7 was found to be fundamentally different from that of the three wires P4.5, and this fact seems to be the reason for the difference between the evaluating curves for the patent wire P7 and the wires P4.5. As expected, it was confirmed experimentally that the evaluation relationships are not universally applicable to the different materials used for various prestressed elements. It was further found that the evaluating curves are not mutually applicable even for patent wires with different diameters.

8. Conclusions and outlook

The results stated above demonstrate that the modified magnetoelastic method can be used in experiments realised on existing structures to determine the actual value of the tension force in steel prestressed structural elements using the available general evaluating curves. The uncertainties of these experiments are then similar to those of alternative experimental methods, e.g. the vibration frequency method [1], [2]. It should be noted that the frequency method cannot be used for prestressed elements embedded in concrete. In cases where it is possible to remove a test sample of the specific prestressed element from the particular existing bridge, the evaluating curves for this element can be determined according to the above-described procedure.
The uncertainties of the evaluated prestressing forces are then relatively small and comparable with the precision of the method based on strain measurement with strain gauges. The possibility of using the modified magnetoelastic method for prestressed bars was also verified during previous experiments [8] and [11]. Based on the authors' new experience with the application of the ME sensor containing both secondary coils, it can be stated that the modified magnetoelastic method makes it possible to determine the actual force value in standard prestressed threadbars. However, the sensitivity of the ME sensor applied to threadbars is lower than for prestressed wires and strands, mainly because of the significantly lower design strength of the threadbar materials. In the authors' opinion, the modified magnetoelastic method is the only one that can be purposefully and effectively used for the evaluation of prestressing forces in reinforcements embedded inside the concrete of existing prestressed concrete bridges or similar engineering structures.

Acknowledgement

The results presented in this article are outputs of the research project FV 30457 "Utilization of a magnetoelastic method for increasing the reliability and durability of existing and newly built prestressed concrete structures" supported by the Ministry of Industry and Trade of the Czech Republic.

References

[1] M. Polák, T. Plachý, Determination of forces in roof cables at administrative center Amazon Court, Procedia Engineering, vol. 48, 2012, pp. 578-582. doi: 10.1016/j.proeng.2012.09.556
[2] M. Polák, T. Plachý, Experimental evaluation of tensile forces in short steel rods, Applied Mechanics and Materials, vol. 732, 2015, pp. 333-336. doi: 10.4028/www.scientific.net/AMM.732.333
[3] P. Fajman, M.
Polák, Measurement of structural cable of membranes, Proceedings of the 50th Annual Conference on Experimental Stress Analysis EAN 2012, Tábor, Czech Republic, 4-7 June 2012, pp. 61-64.
[4] J. Máca, P. Fajman, M. Polák, Measurements of forces in cable membrane structures, Proceedings of the 13th International Conference on Civil, Structural and Environmental Engineering Computing CC 2011, Chania, Crete, Greece, 6-9 September 2011, paper 190. doi: 10.4203/ccp.96.190
[5] P. Fajman, M. Polák, J. Máca, T. Plachý, The experimental observation of the prestress forces in the structural elements of a tension fabric structure, Applied Mechanics and Materials, vol. 486, 2014, pp. 189-194. doi: 10.4028/www.scientific.net/AMM.486.189
[6] T. Klier, T. Míčka, M. Polák, M. Hedbávný, Application of the modified magnetoelastic method, Proceedings of the 17th IMEKO TC10 and EUROLAB Virtual Conference "Global Trends in Testing, Diagnostics & Inspection for 2030", online event, 20-22 October 2020, pp. 344-349. Online [accessed 8 September 2021]: https://www.imeko.org/publications/tc10-2020/imeko-tc10-2020-051.pdf
[7] T. Klier, T. Míčka, M. Polák, T. Plachý, M. Hedbávný, L. Krejčíková, The modified elastomagnetic sensor intended for a quick application on existing prestressed concrete structures, Proceedings of the 58th Conference on Experimental Stress Analysis EAN, Sobotín, 19-22 October 2020, p. 9.
[8] T. Klier, T. Míčka, M. Polák, T. Plachý, M. Hedbávný, L. Krejčíková, New information about practical application of the modified magnetoelastic method, MATEC Web of Conferences, vol.
310, no. 00026, 2020, p. 10. doi: 10.1051/matecconf/202031000026
[9] T. Klier, T. Míčka, M. Polák, T. Plachý, M. Hedbávný, R. Jelínek, F. Bláha, Application of the modified magnetoelastic method and an analysis of the magnetic field, Acta Polytechnica CTU Proceedings, vol. 15, 2018, pp. 46-50. doi: 10.14311/APP.2018.15.0046
[10] T. Klier, T. Míčka, T. Plachý, M. Polák, T. Smeták, M. Šimler, The verification of a new approach to the experimental estimation of tensile forces in prestressed structural elements by a method based on the magnetoelastic principle, MATEC Web of Conferences, vol. 107, no. 00015, 2017, p. 8. doi: 10.1051/matecconf/201710700015
[11] T. Klier, T. Míčka, M. Polák, T. Plachý, M. Šimler, T. Smeták, The in situ application of a new approach to the experimental estimation of tensile forces in prestressed structural elements by a method based on the magnetoelastic principle, Proceedings of the 55th Conference on Experimental Stress Analysis 2017, Košice, Czechia, 30 May - 1 June 2017, pp. 122-132.
[12] P. Sumitro, N. Miyamoto, A. Jaroševič, Corrosion monitoring by utilizing EM technology, Proceedings of the 5th International Conference on Structural Health Monitoring of Intelligent Infrastructure, SHMII-5 2011, Cancún, Mexico, 11-15 December 2011, p. 11.
[13] M. Chandoga, A. Jaroševič, J. Sedlák, E. Sedlák, Experimental and in situ study of bridge beams supported by bottom external tendons, Proceedings of the 3rd International fib Congress and Exhibition, incorporating the PCI Annual Convention and Bridge Conference 2010, Washington, USA, 29 May - 2 June 2010, 9 pp.
[14] M. Chandoga, A. Jaroševič, Measurement of force distribution along the external tendons, Proceedings of the International Conference Analytical Models and New Concepts in Concrete and Masonry Structures, Łódź, Poland, 9-11 June 2008, 6 pp.
[15] M. Chandoga, P. Fabo, A.
Jaroševič, Measurement of forces in the cable stays of the Apollo Bridge, Proceedings of the 2nd fib Congress, Naples, Italy, 2006, pp. 674-675.
[16] P. Fabo, M. Chandoga, A. Jaroševič, The smart tendons - a new approach to prestressing, Proceedings of the fib Symposium 2004 Concrete Structures: The Challenge of Creativity, Avignon, France, 26-28 April 2004, pp. 286-287.
[17] P. Fabo, A. Jaroševič, M. Chandoga, Health monitoring of the steel cables using the elasto-magnetic method, Proceedings of the ASME International Mechanical Engineering Congress and Exposition, 2002, pp. 295-299. doi: 10.1115/IMECE2002-33943
[18] S. Sumitro, A. Jaroševič, M. L. Wang, Elasto-magnetic sensor utilization on steel cable stress measurement, Proceedings of the 1st fib Congress, Concrete Structures in the 21st Century, 2002, pp. 79-86.
[19] M. Chandoga, J. Halvonik, A. Jaroševič, D. W. Begg, Relationship between design, modelling and in-situ measurement of pretensioned bridge beams, Proceedings of the 8th International Conference on Computational Methods and Experimental Measurements, CMEM 1997, Rhodes, Greece, 1 May 1997, pp. 623-632.
[20] A. Jaroševič, Magnetoelastic method of stress measurement in steel, NATO Science Series, 3 (65), 1998, pp. 107-114.
[21] A. Jaroševič, M. Chandoga, Force measuring of prestressing steel, Inžinierske stavby, 42, 1994, pp. 56-62.
[22] A. M. Sarmento, A. Lage, E. Caetano, J. Figueiras, Stress measurement and material defect detection in steel strands by magneto elastic effect. Comparison with other non-destructive measurement techniques, Proceedings of the 6th International Conference on Bridge Maintenance, Safety and Management IABMAS 2012, Stresa-Lake Maggiore, Italy, 8-12 July 2012, pp. 914-921. doi: 10.1201/b12352-126
[23] C. Chen, W. Wu, D. Wei, Stress measurement of pre-stressed members using elasto-magnetic sensors, Journal of the Chinese Institute of Civil and Hydraulic Engineering, vol. 24, iss. 2, 2012, pp. 157-167.
[24] H. J. Wichmann, A.
Holst, H. Budelmann, Magnetoelastic stress measurement and material defect detection in prestressed tendons using coil sensors, Proceedings of the 7th International Symposium on Non-Destructive Testing in Civil Engineering NDTCE'09, Nantes, France, 30 June - 3 July 2009, 6 pp. Online [accessed 8 September 2021]: https://www.ndt.net/article/ndtce2009/papers/60.pdf
[25] H. Feng, X. Liu, B. Wu, D. Wu, X. Zhang, C. He, Temperature-insensitive cable tension monitoring during the construction of a cable-stayed bridge with a custom-developed pulse elasto-magnetic instrument, Structural Health Monitoring, vol. 18, iss. 5-6, 2019, pp. 1982-1994. doi: 10.1177/1475921718814733

Fire SM: new dataset for anomaly detection of fire in video surveillance

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-6

Shital Mali(1), Uday Khot(1)
(1) Department of Electronics and Telecommunication, St. Francis Institute of Technology, Mumbai University, Mumbai, India

Section: research paper
Keywords: anomalous; convolutional neural network; dataset; fire; smoke
Citation: Shital Mali, Uday Khot, Fire SM: new dataset for anomaly detection of fire in video surveillance, Acta IMEKO, vol. 11, no.
1, article 25, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-25
Section editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received November 29, 2021; in final form March 6, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Shital Mali, e-mail: shital.mali@rait.ac.in

1. Introduction

Surveillance cameras are widespread, and it is not feasible to have people actively monitoring them. In most instances, nearly all surveillance camera footage is unimportant; only rare pieces of video are of real concern. The key motivation for video and image anomaly detection is therefore to automatically locate the areas of a video or image that are irregular and mark them for human inspection. Recent research on the identification of video/image anomalies is characterised by two assumptions: the training videos contain only normal events, and anomalous events are identified afterwards by examining the test video. In order to define what is usual for a specific scene, it is important to have training footage of normal behaviour; an anomalous case is then a localised video section that is substantially dissimilar from what happens in the training video. It is more difficult to choose very different attributes, as handled in point-of-interest applications; such disparity can be due to many causes, most usually the remarkable presence of objects in the video. It is interesting to note that most researchers have addressed anomaly detection experimentally [1], [2], [3], [4], and some have published findings with different techniques [5], [6], [7].
Few studies have discussed the usual anomaly videos, which come from one or two of the same scenes. One attribute that may help in identification is the unique index of geographical space in a video instance: a scene that the detection algorithm identifies as anomalous in one instance may not be anomalous in another. Such short-moment cases need care in this research area, and a quality-wise distinct issue in one scene may create difficulty in superimposed multiple scenes. This study therefore focuses on the identification and analysis of a single instance, i.e. on very specifically handled features in a surveillance system; measurement technology thus plays a vital role in anomaly detection and surveillance applications. The formulation explains the differences, which ultimately act spatially. The detection of anomalous activity in video or images is directly related to the performance and accuracy of the detection algorithm, and there is always scope for improving anomaly detection algorithms.

Abstract: Tiny datasets of restricted-range operations, as well as flawed assessment criteria, are currently stifling progress in video anomaly detection science. This paper aims at assisting the progress of this research topic by introducing a wide and diverse new dataset known as Fire SM. Further, additional information can be derived by a precise estimation in early fire detection using the average precision indicator. In addition to the proposed dataset, the investigations under anomaly situations have been supported by results. In this paper, different anomaly detection methods that offer an efficient way to detect fire incidences have been compared with two existing popular techniques. The findings were analysed using average precision (AP) as a performance measure.
It indicates about 78 % accuracy on the proposed dataset, compared to 71 % and 61 % on the Foggia dataset for the InceptionNet and FireNet algorithms, respectively. The proposed dataset can be useful in a variety of cases; the findings show that its crucial advantage is its diversity.

A number of challenges arise when dealing with anomaly detection in fire-related datasets. The shortcomings involve the lack of a dataset based solely on fire anomaly instances, the low resolution of existing datasets, and variations in the anomalies; in a few more cases, uncertainty, inconsistencies and loss of quality have been identified. The main focus of the paper is the detection of anomalies and the analysis of the results obtained after application to the experimental dataset with the help of a few assessment indices. The introduction of a new dataset of early fire and smoke (refer to Table 1) should be helpful in many applications. Maintaining diversity in the dataset is a key point, as it exercises anomaly detection in different directions and in more complex ways.

2. Existing datasets

Fires are man-made hazards that inflict human, social and economic damage. Early fire alarms and an automatic approach are important and useful for emergency recovery services to minimise these losses. Existing fire alarm devices have been shown to be unreliable in numerous real-world situations. The vital disadvantage of a sensor-based framework is that it must be situated close to a fire or heat source, which makes it impractical in a variety of frequently occurring scenarios, such as the long-distance fire occurrences seen in Figure 1. Due to this, the traditional approach has failed to avoid a number of fire deaths. Such solutions usually require a considerable amount of fire or heat to trigger the alarm.
In addition, the fire or smoke regions are not precisely located. Because of these shortcomings of sensor-based fire detection, researchers have been investigating computer-vision approaches as alternatives for improving fire and smoke detection systems. Existing vision-based approaches focus solely on the transformation of colour space for fire area detection [1], [2]. A rule-based methodology, along with colour space, has a promising future in delivering improved results; however, such systems are also vulnerable to other lit items such as streetlights. Other methods add features to the colour-based decision-making algorithm, such as location, boundary and motion cues [3], [4]. Classifiers such as the Bayes classifier, dual optical flow and multi-expert schemes have been used to minimise false detection or misclassification. However, these strategies are vulnerable to error and fail in many complex real-world scenarios, as seen in Figure 2. Fire detection is a challenging task due to the complexity of the conditions: fire has no definite form, area of incidence or simple temporal behaviour from which to extract features, and a hand-crafted collection of features requires a considerable amount of domain knowledge. Table 2 lists the details of existing datasets. The Foggia video dataset [8] and the Chino dataset [9] are the two basic datasets. The first dataset includes 31 indoor and outdoor videos; seventeen are not fire-related, while fourteen are categorised as fire. As a result, colour-based methods are incapable of recognising genuine fire and

Table 1. Fire instances in the Fire SM dataset.

  Anomaly class             Instances
  1. Outside offices           78
  2. Outside apartment         88
  3. In bushes                 26
  4. Outside light             15
  5. Street light              13
  6. Decorative lighting       11
  7. Bonfire                    9
  8. Cooking gas               25

Table 2. Existing datasets (columns: type; size per image; frame rate; no.
of frames; fire-related; remarks or observations):

  fire1    320x240   15   705     yes   see [10]
  fire2    320x240   29   116     yes   refer [10] and [11]
  fire3    400x256   15   255     yes
  fire4    400x256   15   240     yes
  fire5    400x256   15   195     yes
  fire6    320x240   10   1200    yes
  fire7    400x256   15   195     yes
  fire8    400x256   15   240     yes
  fire9    400x256   15   240     yes
  fire10   400x256   15   210     yes
  fire11   400x256   15   210     yes
  fire12   400x256   15   210     yes
  fire13   320x240   25   1650    yes
  fire14   320x240   15   5535    yes   Foggia et al. [8]
  fire15   320x240   15   240     no    refer [11] and [10]
  fire16   320x240   10   900     no
  fire17   320x240   25   1725    no
  fire18   352x288   10   600     no
  fire19   320x240   10   630     no
  fire20   320x240   9    5958    no
  fire21   720x480   10   80      no
  fire22   480x272   25   22500   no    Foggia et al. [8]
  fire23   720x576   7    6097    no    refer [11] and [10]
  fire24   320x240   10   342     no
  fire25   352x288   10   140     no
  fire26   720x576   7    847     no
  fire27   320x240   10   1400    no
  fire28   352x288   25   6025    no
  fire29   720x576   10   600     no
  fire30   800x600   15   1920    no    Foggia et al. [8]
  fire31   800x600   15   1485    no

Figure 1. Test images of training data.
Figure 2. Sample confusing images which look like fire or smoke.

scenes with red-shaded parts. Additionally, movement-based strategies may mistakenly classify a mountain scene of smoke, fog or haze. These elements make the data collection more difficult, enabling us to push our architecture and assess its performance in various real settings. Another issue that arises during data processing is the difference between fire and non-fire. At a greater distance, for example, the fire2 [10] video contains very little fire; on the other hand, the fire13 [10] video shows fire only within a very small range. Moreover, red objects and backgrounds such as a billboard (fire14) and radish grass (fire6) are present in many images, making the dataset hard to interpret. The second dataset is relatively limited but very difficult.
This dataset contains a total of 226 images, 119 of which contain fire while the other 107 are fire-like pictures, including nightfall, fire-like stars, daylight coming through windows and so on. An enormous amount of data is required for training convolutional neural networks (CNNs); however, the current image/video fire collections are insufficient to meet this demand. Table 3 lists some small-scale fire image/video data repositories. The data collection includes 13,400 fire images in all, taken both outside and inside. There are 9,695 "fire" and 7,442 "smoke" facets in the data collection. In addition, the dataset includes 15,780 images that do not contain flames. These data were acquired from 16 separate user environments and involve 49,614 distorted images; each picture usually involves distortion due, for example, to surrounding noise or climatic conditions. For this investigation, half of the pictures in the collection are used as the training/validation set, and the remaining half as the test set.

3. Experimental details

Experiments were carried out by applying a deep neural network technique to the proposed dataset. The system used an NVIDIA RTX 2080 with 10 GB of on-board RAM and Ubuntu 16.04; the CPU was an Intel Core i5, and the system had 64 GB of RAM. The analyses utilised 68,457 pictures acquired from well-known fire datasets, including 62,690 pictures from Foggia et al. [8]. The training and testing phases followed the trial system, where 20 % and 80 % of the data were used for training and testing, respectively. The technique was applied with the proposed updated EfficientDet [13]; the modified EfficientDet algorithm uses Leaky ReLU as the activation function in place of Hard-Swish.
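The 20 % / 80 % partition described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the file names and random seed are placeholders, and note that the study trains on the smaller (20 %) share.

```python
import random

# Sketch of the 20 % train / 80 % test partition over the 68,457 images
# mentioned above. File names are hypothetical placeholders.
paths = [f"img_{i:05d}.jpg" for i in range(68457)]

rng = random.Random(0)        # fixed seed so the split is reproducible
rng.shuffle(paths)

cut = int(0.2 * len(paths))   # 20 % for training, per the described protocol
train, test = paths[:cut], paths[cut:]
```

A fixed seed keeps the partition reproducible across runs, which matters when comparing detectors trained on the same split.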
A training set of 2,717 pictures was generated from 2,529 fire pictures and 190 non-fire pictures. The planned network, however, has only two classes, i.e. fire and not-fire. Datasets are one of the essential components for evaluating the output of any given system, and evaluating an algorithm against a standard dataset is one of the most difficult activities. In the suggested dataset, all photographs are original and taken by real people; this dataset is therefore among the most demanding and diversified datasets produced. This hand-crafted research dataset was designed to test the generalisation of a trained model. It involves an average of 2 bounding boxes per picture of varying size and aspect ratio. Activation mapping was exploited; this was required to obtain the approximate bounding boxes. The loss function used during the study of this dataset was binary cross-entropy. In addition, the optimiser was RMSprop with an initial learning rate of 0.001, and 300 epochs were taken into account. The following sections present details of the results obtained using different fire datasets and their comparison with state-of-the-art fire-database approaches.

4. Fire SM dataset description and results

The proposed Fire SM dataset was verified for the density of occurrences of actual fire locations in an image. The dataset contains images in which the fire is located not only at the centre of the image but also at the corners, top and bottom. The density of fire locations in an image is shown in Figure 3, which shows the fire location in a relative coordinate plane. In this figure, red colour signifies fire at the middle, while orange and yellow colour

Table 3. Number of fire image/video datasets present [12].
Institution; format; object; website:

  Bilkent University - video - fire, smoke, disturbance - http://signal.ee.bilkent.edu.tr/visitfire/index.html
  CVPR Lab, Keimyung University - video - fire, smoke, disturbance - https://cvpr.kmu.ac.kr
  UMR CNRS 6134 SPE, Corsica University - dataset - fire - http://cfdb.univ-crse.fr/index.php?menu=1
  Faculty of Electrical Engineering, Split University - image, video - smoke - http://wildfire.fesb.hr/
  Institute of Microelectronics, Seville, Spain - image, video - smoke - https://www2.imse-cnm.csic.es/vmote/english_version/
  National Fire Research Laboratory, NIST - video - fire - https://www.nist.gov/topics/fire
  State Key Laboratory of Fire Science, University of Science and Technology of China - image, video - smoke - http://smoke.ustc.edu.en/datasets.htm

Figure 3. Fire location in images with distribution in relative coordinates in the Fire SM dataset (red: at middle; orange, yellow: at corner; sky blue, dark blue: not at middle or corner).

signifies fire at the corner, while sky blue and dark blue colours signify fire neither at the middle nor at the corner position. This figure supports the claim that a dataset with a diversified distribution of fire locations is proposed for the anomaly fire detection technique. In reality, with a phenomenon that appears over several frames, it is necessary to discover an irregularity in at least a portion of the images. However, confirming the area in every frame of the track is typically unnecessary. This is especially true where there is uncertainty regarding when the phenomenon starts and finishes, as well as when anomalous activity is heavily occluded for a few frames. The features mentioned below are measures of classification quality.
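The first of these measures, the localized detection rate defined in Section 4.1.1 below, can be sketched in a few lines. This is a minimal sketch under the assumption that regions are axis-aligned boxes `(x1, y1, x2, y2)`; a true region counts as detected when its intersection with some detected region covers at least a fraction beta of its area. The boxes below are invented for illustration.

```python
# Minimal sketch of the localized detection rate (LDR), assuming regions are
# axis-aligned boxes (x1, y1, x2, y2). Example boxes are hypothetical.
def area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def intersection(a, b):
    # area of the overlap rectangle of boxes a and b (0 if disjoint)
    return area((max(a[0], b[0]), max(a[1], b[1]),
                 min(a[2], b[2]), min(a[3], b[3])))

def localized_detection_rate(true_regions, detected_regions, beta=0.1):
    # a true region is "detected" if some detection overlaps >= beta of it
    hit = sum(1 for t in true_regions
              if any(intersection(t, d) >= beta * area(t)
                     for d in detected_regions))
    return hit / len(true_regions)

true_rs = [(0, 0, 10, 10), (20, 20, 30, 30)]   # two ground-truth regions
dets = [(1, 1, 9, 9)]                          # overlaps only the first
ldr = localized_detection_rate(true_rs, dets)  # 1 of 2 true regions found
```

The same overlap test, applied per frame rather than per region, underlies the region-based rates discussed next.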
4.1. Feature indices

4.1.1. Localized detection index/rate
The localized detection rate (LDR) is defined as LDR = (number of true regions detected) / (total number of true regions). A true region in an image is detected if the intersection of the true area and the recognised local portion is greater than or equal to beta, as shown in Figure 4.

4.1.2. Region-based detection rate
The region-based detection rate (RBDR) is defined as RBDR = (number of positive images detected) / (total number of regions). Beta ranges between 0 and 1; by default, beta = 0.1. The negative region rate (NRR) is defined as NRR = (total non-positive regions) / (total frames or images). The average detection rate for the negative region rate, NRR, ranges from 0 to 1. There is a trade-off between the detection rate (true positive rate) and the false positive rate, as with any detection rule. This can be captured in the ROC curve obtained by varying the anomaly-score threshold that defines which regions are regarded as anomalous. Figure 5 and Figure 6 show characteristic curves for the InceptionNet method on the Foggia and Fire SM datasets. The nature of the curves shows values favourable to the proposed Fire SM dataset compared to Foggia. Khan et al. [14] described the InceptionNet method on a fire-instance dataset; the dataset mentioned was less diversified than the proposed Fire SM dataset. The approaches of Khan et al. [14] and FireNet [15] focused on classification with a leave-one-out strategy at each level. In comparison to these algorithms, the updated EfficientDet [16]-[19] is based more on the degree of detection. This paper uses the average precision (AP) indicator for quantitative analysis. Results were collected and

Figure 4. Representation of the partitioning of a frame into regions; '1' marks a true region. The figure gives an idea of the detection method.
Figure 5. a, b: NRR per-frame characteristic curves for different datasets.
Figure 6.
a, b: NRR frame-level characteristic curves for different datasets.

seen in Table 4 for the proposed dataset relative to the Foggia dataset. Activation mapping was used to obtain an estimated bounding box. Table 4 shows that the updated EfficientDet performs better than the other algorithms. Both average precision at 50 % overlap (AP@50) and at 75 % overlap (AP@75) were compared. On the proposed early-fire dataset, EfficientDet obtained approximately 73 % and 71 %, compared to approximately 53 % and 51 % for InceptionNet and approximately 68 % and 58 % for FireNet. On the Foggia dataset, the findings obtained were approximately 82 % and 78 %, compared to about 65 % and 61 % for InceptionNet and around 73 % and 71 % for FireNet, for AP@50 and AP@75 respectively. Figure 7 shows the detection of fire and smoke.

5. Conclusions

This paper introduces a new database, Fire SM, of fire anomaly scenarios. The database is among the most demanding and diversified datasets produced; this hand-crafted research dataset was designed to test the generalisation of a trained model. This research study proposes a novel lightweight, real-time approach for detecting smoke and fire in videos or photographs. Existing datasets are either restricted or produced synthetically for testing purposes. In this study, validation was carried out on a challenging real-world proposed dataset that includes the majority of fire and smoke event scenarios. Further, the weighted bi-directional feature pyramid network (BiFPN), as well as compound scaling, consistently achieves better efficiency in EfficientDet. Experimental findings show that Google's newest model, EfficientDet, outperforms the compared methods on the proposed dataset as well as on Foggia.
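The AP@50 / AP@75 distinction used above can be illustrated with a single box pair: a detection counts as a true positive only when its intersection-over-union (IoU) with a ground-truth box reaches the threshold (0.50 or 0.75), so the same detection can count toward AP@50 yet miss AP@75. The boxes below are made-up examples, not data from the study.

```python
# Sketch of the IoU test behind AP@50 / AP@75; boxes are (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 10, 10)          # hypothetical ground-truth fire box
det = (2, 0, 12, 10)         # hypothetical detection, shifted right

overlap = iou(gt, det)       # 80 / 120, i.e. about 0.67
tp_at_50 = overlap >= 0.50   # counts as a true positive for AP@50
tp_at_75 = overlap >= 0.75   # but not for the stricter AP@75
```

Averaging precision over recall after this matching step, per threshold, yields the AP@50 and AP@75 columns of Table 4.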
These results were obtained using average precision (AP) as the indicator: the updated EfficientDet shows around 78 % (AP@75) on the Foggia dataset, compared to 71 % and 61 % for FireNet and InceptionNet, respectively. The new assessment criteria address the shortcomings of the traditional criteria in this field and provide a more accurate picture of how well an algorithm performs in a real environment. Furthermore, in this study two variants of a recent fire anomaly detection algorithm were used as a benchmark against which future work can be measured. The new database should encourage novel techniques in this research field.

References
[1] T. Celik, H. Demirel, H. Ozkaramanli, M. Uyguroglu, Fire detection using statistical color model in video sequences, Journal of Visual Communication and Image Representation, vol. 18, no. 2, 2007, pp. 176-185. DOI: 10.1016/j.jvcir.2006.12.003
[2] B. C. Ko, S. J. Ham, J. Y. Nam, Modeling and formalization of fuzzy finite automata for detection of irregular fire flames, IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 12, 2011, pp. 1903-1912. DOI: 10.1109/TCSVT.2011.2157190
[3] J. Choi, J. Y. Choi, Patch-based fire detection with online outlier learning, 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2015, pp. 1-6. DOI: 10.1109/AVSS.2015.7301763
[4] T. Wang, L. Shi, P. Yuan, L. Bu, X. Hou, A new fire detection method based on flame color dispersion and similarity in consecutive frames, 2017 Chinese Automation Congress (CAC), 2017, pp. 151-156. DOI: 10.1109/CAC.2017.8242754
[5] K. Muhammad, J. Ahmad, Z. Lv, P. Bellavista, P. Yang, S. W. Baik, Efficient deep CNN-based fire detection and localization in video surveillance applications, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, July 2019, pp. 1419-1434. DOI: 10.1109/TSMC.2018.2830099
[6] A. Jadon, M. Omama, A. Varshney, M. S. Ansari, R.
Sharma, FireNet: a specialized lightweight fire & smoke detection model for real-time IoT applications, arXiv preprint arXiv:1905.11922, 2019. DOI: 10.48550/arXiv.1905.11922
[7] K. Muhammad, J. Ahmad, I. Mehmood, S. Rho, S. W. Baik, Convolutional neural networks based fire detection in surveillance videos, IEEE Access, vol. 6, 2018, pp. 18174-18183. DOI: 10.1109/ACCESS.2018.2812835
[8] P. Foggia, A. Saggese, M. Vento, Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion, IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 9, 2015, pp. 1545-1556. DOI: 10.1109/TCSVT.2015.2392531

Figure 7. Detection of fire and smoke in the proposed dataset.

Table 4. Comparison of the updated EfficientDet to InceptionNet and FireNet (AP@50: 50 % and above overlap; AP@75: 75 % and above overlap).

Method                             | Early Fire and Smoke (proposed) | Foggia dataset
                                   | AP@50   AP@75                   | AP@50   AP@75
Khan et al. [14] (InceptionNet)    | 53.41   50.63                   | 65.23   61.28
FireNet [15]                       | 68.46   57.94                   | 73.23   70.65
Modified EfficientDet D0           | 73.35   70.78                   | 81.92   78.23

[9] D. Y. Chino, L. P. Avalhais, J. F. Rodrigues, A. J. Traina, BoWFire: detection of fire in still images by integrating pixel color and texture analysis, 28th SIBGRAPI Conference on Graphics, Patterns and Images, 2015, pp. 95-102. DOI: 10.1109/SIBGRAPI.2015.19
[10] E. Cetin, Computer vision-based fire detection dataset. Online [Accessed 17 March 2022]: http://signal.ee.bilkent.edu.tr/visifire/ ; Ultimate Chase. Online [Accessed 17 March 2022]: http://ultimatechase.com/
[11] P. Li, W.
Zhao, Image fire detection algorithms based on convolutional neural networks, Case Studies in Thermal Engineering, vol. 19, June 2020, 100625. DOI: 10.1016/j.csite.2020.100625
[12] M. Tan, R. Pang, Q. V. Le, EfficientDet: scalable and efficient object detection, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10781-10790.
[13] K. Muhammad, J. Ahmad, Z. Lv, P. Bellavista, P. Yang, S. W. Baik, Efficient deep CNN-based fire detection and localization in video surveillance applications, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, July 2019, pp. 1419-1434. DOI: 10.1109/TSMC.2018.2830099
[14] A. Jadon, M. Omama, A. Varshney, M. S. Ansari, R. Sharma, FireNet: a specialized lightweight fire & smoke detection model for real-time IoT applications, arXiv preprint arXiv:1905.11922, 2019. DOI: 10.48550/arXiv.1905.11922
[15] Fire SM dataset. Online [Accessed 17 March 2022]: https://tinyurl.com/83exdz6d
[16] F. Vurchio, G. Fiori, A. Scorza, S. A. Sciuto, Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study, Acta IMEKO, vol. 10, no. 2, 2021, pp. 119-125. DOI: 10.21014/acta_imeko.v10i2.1047
[17] H. Ingerslev, S. Andresen, J. Holm Winther, Digital signal processing functions for ultra-low frequency calibrations, Acta IMEKO, vol. 9, no. 5, 2020, pp. 374-378. DOI: 10.21014/acta_imeko.v9i5.1004
[18] L. Ciani, A. Bartolini, G. Guidi, G. Patrizi, A hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, Acta IMEKO, vol. 9, no. 1, 2020, pp. 3-9. DOI: 10.21014/acta_imeko.v9i1.732
[19] A. Kalapos, C. Gór, R. Moni, I. Harmati, Vision-based reinforcement learning for lane-tracking control, Acta IMEKO, vol. 10, no. 3, 2021, pp. 7-14.
DOI: 10.21014/acta_imeko.v10i3.1020

Design optimisation of a wireless sensor node using a temperature-based test plan

ACTA IMEKO
ISSN: 2221-870X
June 2021, Volume 10, Number 2, pp. 37-45

acta imeko | www.imeko.org | June 2021 | Volume 10 | Number 2 | 37

Lorenzo Ciani(1), Marcantonio Catelani(1), Alessandro Bartolini(1), Giulia Guidi(1), Gabriele Patrizi(1)
(1) Department of Information Engineering, University of Florence, Via di Santa Marta 3, 50139, Florence, Italy

Section: Research Paper
Keywords: fault diagnosis; precision farming; temperature; testing; wireless sensor network
Citation: Lorenzo Ciani, Marcantonio Catelani, Alessandro Bartolini, Giulia Guidi, Gabriele Patrizi, Design optimisation of a wireless sensor node using a temperature-based test plan, Acta IMEKO, vol. 10, no. 2, article 7, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-07
Section editor: Giuseppe Caravello, Università degli Studi di Palermo, Italy
Received January 14, 2021; in final form June 7, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Gabriele Patrizi, e-mail: gabriele.patrizi@unifi.it

1.
Introduction

Nowadays, automatic measurement systems and condition monitoring (CM) tools have become valid and reliable means, extensively used in several Internet of Things (IoT) applications across the Industry 4.0 scenario [1]-[10]. The continuous monitoring of both environmental conditions and soil parameters has become extremely important in agricultural applications [11]. According to [12], environmental factors such as temperature and humidity have a deep influence on plant pathogens such as bacteria, fungi and viruses. Moreover, continuous monitoring of soil parameters makes it possible to automate irrigation and consequently minimise water waste [13]-[15]. Lezoche et al. [16] explained in detail the advantages achieved by integrating IoT technologies into the agricultural industry, such as productivity improvements, soil conservation, water saving and the minimisation of plant diseases.

Usually, a wireless sensor network (WSN) is designed and implemented to monitor the crop. The network has to endure harsh outdoor conditions, facing both hot summers and cold winters. At the same time, the network has to guarantee service continuity, ensuring accurate and reliable data. According to [17], an optimal sensor node for agricultural applications should be composed of the following units: a power unit, a processing unit, a memory unit, a sensing unit and a communication unit. In particular, the thorough analysis presented in [17] highlights the importance of using soil moisture, relative humidity, temperature and gas sensors.

The recent literature offers plenty of papers focusing on the design of innovative wireless networks for agricultural purposes, each dealing with the optimisation of one particular aspect of the network. Node deployment is one of the most critical design aspects, since it severely affects connectivity, coverage area and reliability of the entire network [18]-[20].
Some papers, such as [21], [22], introduce new routing strategies to solve classical drawbacks of WSNs and optimise transmission based on the actual node deployment. Another fully discussed problem is the optimisation of power consumption, addressed with many different solutions [23]-[25]. For instance, in [26] the authors propose thermal modelling and characterisation for designing reliable power converters, while [27] focuses on risk analysis of photovoltaic panels.

Abstract
The introduction of big data and the Internet of Things has allowed the rapid development of smart farming technologies. Usually, systems implemented in smart farms monitor environmental conditions and soil parameters to improve productivity, optimise soil conservation, save water and limit plant diseases. Wireless sensor networks are a widespread solution because they enable effective and efficient crop monitoring; at the same time, they can cover large areas, ensure fault tolerance and acquire large amounts of data. The recent literature fails to consider the testing of the hardware performance of such systems under their actual operating conditions, and the effects of a harsh environment on the dynamic metrological performance of sensor nodes are not sufficiently investigated. Consequently, this work deals with the electrical design optimisation of a sensor node by means of thermal tests used to reproduce the actual operating conditions of the nodes. The results of the node characterisation through thermal tests are used to improve the node's design and consequently achieve higher performance in harsh operative conditions.

acta imeko | www.imeko.org | June 2021 | Volume 10 | Number 2 | 38

In [28], a wireless charging method for battery management in agriculture applications is presented. Other papers choose to optimise the design based on the type of plantation.
For instance, the design of a low-power sensor node for rice fields based on a hybrid antenna is presented in [29]. Jiaxing et al. [30] improve the design of sensor nodes in a litchi plantation, addressing the coverage area of each node and the efficiency of micro-irrigation management. In [31], a low-cost weather station for an edamame farm is presented and compared with commercial systems.

As presented above, many recent works deal with the design and development of WSNs for precision farming. Quite the opposite, the concept of testing hardware performance under the actual operating conditions is not adequately addressed, and the effects of the operating environment on the dynamic metrological performance of WSNs are not sufficiently investigated. As observed in previous works on similar systems [32]-[36], environmental stresses such as temperature, humidity, vibration and mechanical shocks deeply influence both the reliability and the metrological performance of low-cost electronic components, leading to loss of calibration, measurement variability and a significant growth in component failure rate.

Trying to fill these needs, this paper deals with the electrical design optimisation of a sensor node using thermal tests. The agricultural field of application is taken into account in order to customise the test plan and characterise the node under its actual operating conditions. The results of the node characterisation through thermal tests are used to improve the node's design and consequently achieve higher performance in harsh operative conditions. Unfortunately, there are no international standards regarding environmental testing of WSNs, and customised standards concerning electronic component testing for agricultural applications are likewise unavailable. For these reasons, this paper proposes a customised test plan and test-bed for the performance characterisation of a sensor node under temperature stress.
The paper is organised as follows: Section 2 illustrates the initial design of the sensor node developed in this work. Section 3 explains the proposed temperature-based test plan, composed of three different test procedures (namely T.1, T.2 and T.3). Section 4 summarises the main results of the tests and proposes some design improvements to optimise the performance of the node. Finally, in Section 5 conclusions are drawn.

2. Sensor node developed in this work

Typically, classical WSNs are implemented using a single central node (access point, AP) directly connected to all the other nodes in the network (called peripheral nodes). The peripheral nodes use a set of several sensors to acquire a large amount of data, then they send the data to the AP, which must collect and store them [25], [37]. The main drawbacks are the limited coverage area and the restricted number of nodes. An advanced WSN architecture is the one based on a mesh topology, generally called a wireless mesh network (WMN). A WMN is an optimal solution when large geographical areas must be monitored. More in detail, a WMN is a self-organised and self-configured system made up of many peripheral nodes and a single central node (called the root node in the following) that manages the whole network. Every node is able to interact with the nearby nodes, using them to reach the root node through indirect paths, allowing large-area coverage [38]-[40]. Furthermore, WMNs use several nearby nodes and dynamic routing tables to achieve high-frequency transmissions, high bitrate, full scalability and low management cost [41]-[43].

Figure 1 shows the block diagram of the developed sensor node. It is composed of the following units:
• A power supply unit composed of a photovoltaic panel, two lithium batteries, a battery management system (BMS) and a maximum power point tracking (MPPT) circuit.
• A set of sensors, including an air temperature and humidity sensor, a soil temperature transducer, a soil moisture sensor and a solar radiation sensor.
• An external antenna.
• A radio and processing unit, the real core of the sensor node, based on the ESP32 system-on-a-chip microcontroller by Espressif. The microcontroller is mounted on an evaluation board used for software programming by means of a USB-to-UART bridge controller. The evaluation board also includes a pin interface and power supply by means of an AMS1117 LDO. Two 8-channel 12-bit SAR ADCs and two 8-bit DACs are embedded in the ESP32. A customised interface board is used to connect the power unit and the sensor unit to the ESP32 microcontroller.

The network operates in two alternating phases: a 10-minute "sleep phase", in which almost all node functionalities are disabled to save energy, and an "active phase", in which the sensors acquire data and the microcontroller elaborates and transmits them to the root node. This type of functioning minimises the duty cycle of the network, limiting the overheating of the hardware and saving battery power. Figure 2 shows two images of the developed sensor node: a detail of the system is illustrated on the left side, while the right image shows the installation in the field.

Figure 1. Block diagram of the designed sensor node, including power management systems, radio and processing unit, sensor unit and an external antenna.

3. Temperature-based test plan

A temperature-based test plan was developed in this work to optimise the design of a sensor node for smart-farm applications. Temperature is the optimal stress condition to characterise the hardware of the sensor node and to investigate the weaknesses of the system.
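The sleep/active duty cycling described in Section 2 directly sets the average supply current of the node. A minimal sketch of that estimate follows; only the 10-minute sleep period comes from the paper, while the current values and the active-phase duration are illustrative assumptions, not measurements.

```python
def average_current_ma(i_active_ma, t_active_min, i_sleep_ma, t_sleep_min):
    """Time-weighted average current over one sleep/active cycle."""
    total_min = t_active_min + t_sleep_min
    return (i_active_ma * t_active_min + i_sleep_ma * t_sleep_min) / total_min

# Assumed figures: 80 mA during a 2-minute active phase and 12 mA during
# the 10-minute sleep phase (sleep duration from the text, currents assumed).
print(average_current_ma(80, 2, 12, 10))
```

Under these assumptions the long sleep phase keeps the average draw far below the active-phase current, which is the point of minimising the duty cycle.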
In fact, in compliance with the physics of failure of electronic devices, the main failure mechanisms of this kind of system are intrinsically related to temperature [44], [45]. Temperature is the key acceleration factor for many failure mechanisms, such as open/short circuits, silicon fracture, electrostatic discharge (ESD), dielectric charging and many others; all of these failures can easily be triggered when the temperature reaches high values. Consequently, temperature is the optimal stress, since it allows both operational performance and reliability to be characterised at the same time. The information acquired during the electrical characterisation under temperature stress is extremely useful for improving the design of the system.

A customised automatic measurement set-up was developed for the acquisition of the key parameters during the temperature tests (see Figure 3). A climatic chamber was used to generate the thermal conditions described above. A datalogger equipped with ten K-type thermocouples was employed to monitor the temperature at critical points of the developed system. The other equipment illustrated in Figure 3 and used during the tests comprises a power supply generator, an oscilloscope, a set of multimeters, a current generator and a waveform generator. Furthermore, a root node and a laptop were used to manage the network functionalities and acquire the data. International standards expressly related to smart agriculture systems are not available.
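The role of temperature as the key acceleration factor is commonly quantified with an Arrhenius model. The sketch below is a generic illustration of that calculation, not part of the paper's method; the 0.7 eV activation energy is an assumed, typical value.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between a use temperature and a
    stress temperature (both in degrees Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Stressing at 80 degC instead of 25 degC accelerates temperature-driven
# mechanisms by several tens of times under this assumed activation energy.
print(arrhenius_af(25, 80))
```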
Consequently, several standards covering similar areas were used as guidelines during the design of the test plan, as follows:
• MIL-STD-810G (2008) [46], a guideline for any kind of environmental stress test;
• IEC 60068-2-14 (2011) [47], which provides general procedures for temperature testing;
• IEC 60068-5-2 (1990) [48], a guide to the drafting of test methods;
• IEC 60068-2-2 (2007) [49], regarding dry-heat test conditions;
• IEC 60068-2-38 (2009) [50], which provides detailed procedures for temperature cyclic tests;
• JEDEC JESD22-A104E (2014) [51], which covers the temperature testing of semiconductor devices;
• IEST-RP-PR-003.1 (2012) [52], which defines a temperature step-stress profile for accelerated life testing.

The developed test plan is based on the aforementioned standards and is tailored to the practical application scenario. In particular, the nodes will be deployed in open fields and exposed to harsh environmental conditions. Consequently, the test plan was developed considering the real operating conditions of the node, which must endure extremely high temperatures during summer days and extremely low temperatures during winter days. Moreover, the test plan must also take into account the range of guaranteed operability of the components that make up the system, as follows:
• microcontroller and electronic boards: up to 85 °C;
• batteries: from -10 °C to 50 °C.

Three temperature-based test procedures were developed, namely a positive temperature step-test from 20 °C to 80 °C (test T.1), a back-and-forth temperature step-test from -10 °C to 80 °C and back to -10 °C (test T.2) and a temperature cyclic test with a restricted temperature range for battery testing only (test T.3).

3.1. Test T.1: positive temperature step-test

In this test procedure the devices under test are two identical sensor nodes. The only difference between the two boards is the LDO that manages the power supply of the electronic board.
The aim of this test was to characterise the main electrical parameters of the nodes. Only the electronic hardware was tested; the batteries and the solar panel were not placed inside the climatic chamber. The first node was supplied by the LDO provided by the manufacturer of the evaluation board (the AMS1117, "former LDO" in the following), while the other one was equipped with an AP2114H LDO ("new LDO" in the following). The complete temperature profile T.1 is illustrated in Figure 4, where the blue arrows highlight the temperature steps and the exposure times. Test procedure T.1 starts at 20 °C, which is generally the room temperature. The first step consists of a 5 °C temperature rise lasting 10 minutes.

Figure 2. Pictures of the sensor node designed in this work. The left figure shows a detail of the boards enclosed inside a waterproof case, while on the right side the complete system installed in the field is illustrated.

Figure 3. Measurement setup proposed in this work to characterise the designed sensor node under temperature stress.

Figure 4. Test procedure T.1: positive temperature step-test from 20 °C up to 80 °C (20 minutes at constant temperature; 5 °C steps in 10 minutes).

acta imeko | www.imeko.org | June 2021 | Volume 10 | Number 2 | 40

The rise speed is intentionally kept low to allow the component temperatures to increase together with the chamber temperature. Then, 20 minutes of exposure at the reached temperature are required to ensure at least two active phases after temperature stabilisation. The two previous steps are repeated up to 80 °C, alternating a 5 °C step (10 minutes) and a 20-minute exposure time.

3.2. Test T.2: back-and-forth temperature step-test

The test procedure T.2 illustrated in this section is an extension of test procedure T.1. The differences between the back-and-forth temperature step-test T.2 and the previously described procedure are as follows:
• The temperature range of test procedure T.2.
is from -10 °C to 80 °C.
• Test procedure T.2 is repeated back and forth: it starts at -10 °C, reaches 80 °C following a stepped increase, and finally decreases again to -10 °C following the same steps.
• Test procedure T.2 is characterised by a rise rate and a lowering rate between consecutive steps equal to 2 °C/min.
• The exposure time at constant temperature in test procedure T.2 is reduced to only 10 minutes.

The complete test profile of procedure T.2 is illustrated in Figure 5, highlighting the back-and-forth trend used to investigate hysteresis behaviour. During this test procedure, only the processing unit of the node was placed inside the climatic chamber. In fact, the main purpose of this procedure was to test the performance of the analog-to-digital converter (ADC) and digital-to-analog converter (DAC) embedded in the ESP32 microcontroller.

3.3. Test T.3: temperature cyclic test

Many papers in the recent literature agree that temperature is the key factor in battery degradation [53]-[56]. For this reason, test procedure T.3 was developed for battery characterisation only. Test procedure T.3 is based on two consecutive cycles. The minimum temperature is -10 °C, while the maximum temperature is 50 °C. This limited range was designed to satisfy battery safety requirements; nonetheless, it is well representative of the actual operating temperatures in the agricultural field. The rise rate and the lowering rate are 2 °C/min, and the exposure time at the minimum and maximum temperatures is 30 minutes. Test procedure T.3 is the only procedure in the proposed test plan in which the temperature changes linearly between the minimum and maximum values of the range (without steps). Figure 6 shows the temperature profile of test T.3, highlighting two cyclic repetitions in the range [-10 °C; 50 °C].

4. Results and discussion

In this section the main results achieved during the tests are illustrated.
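The stepped profile of procedure T.1 (Section 3.1) can be expressed as an alternating sequence of ramp and dwell segments. A minimal sketch, with function name and segment encoding chosen here for illustration:

```python
def t1_profile(start_c=20, stop_c=80, step_c=5, ramp_min=10, dwell_min=20):
    """Build the T.1 step-test as (target_temperature_C, phase, minutes)
    tuples: a 5 degC ramp lasting 10 min alternates with a 20 min dwell,
    from 20 degC up to 80 degC, as described in Section 3.1."""
    segments = []
    temp = start_c
    while temp < stop_c:
        temp += step_c
        segments.append((temp, "ramp", ramp_min))
        segments.append((temp, "dwell", dwell_min))
    return segments

profile = t1_profile()
print(profile[-1])                    # the final dwell segment at 80 degC
print(sum(m for _, _, m in profile))  # total test duration in minutes
```

The same helper with a -10 °C start, a reversed copy appended, and a 10-minute dwell would describe one leg of procedure T.2.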
Moreover, some design improvements are proposed in order to optimise the performance of the node under its actual operating conditions.

4.1. Test T.1: main results and proposed improvements

The node was tested with two different LDOs: the one provided by the manufacturer and a new one. The aim of this procedure was to investigate whether the new LDO provides significant upgrades with respect to the former LDO and to evaluate the effect of temperature during the step-stress test. Preliminary results regarding this test were illustrated in [57]. Figure 7 shows the comparison of the current consumption of the two boards (blue and green lines) during six cycles of active and sleep phases, with a corresponding chamber temperature from 55 °C to 65 °C (red line). Moreover, Figure 7 highlights the benefits introduced by the new LDO in both the active and sleep phases: the new LDO allows an average decrease of the absorbed current of 2 mA. Figure 7 also shows the most striking result discovered during the test phase, namely the presence of a current step-up (observed in both sensor nodes) at a certain temperature. In the case of the former LDO, the step-up occurs at approximately 63 °C, while the new LDO is subject to this phenomenon at a lower temperature (approximately 58 °C). Indeed, focusing only on the sleep phase, as shown in Figure 8, it is possible to identify an unexpected increase of about 4.5 mA in the current above a specific temperature in both nodes. During the cooling phase of the chamber, the current consumption suddenly decreases back to its previous value. After a deep analysis, widely explained in [57], this anomaly was attributed to an unexpected activation of the USB-to-UART bridge controller (CP2102N) integrated in the evaluation board.

Figure 5. Test procedure T.2: back-and-forth temperature step-test from -10 °C up to 80 °C and then back again to -10 °C.

Figure 6. Test procedure T.3:
temperature cyclic test characterised by two test repetitions.

Figure 7. Current consumption of the two boards (blue and green lines) on the left y-axis; the right y-axis shows the temperature variation of the chamber (red line).

The controller should only be activated (enabled) when a device is connected to the USB port. The 3.3 V output of the LDO supplies both the USB-to-UART controller and the microcontroller. The CP2102N controller is enabled only when a USB device is connected to the board (VUSB). Under typical operating conditions, the USB provides a 5 V voltage; a divider then generates a voltage drop (Vbus) as the input of the 8th pin of the USB-to-UART controller. Vbus is:

Vbus = 3.41 V .   (1)

The CP2102N datasheet highlights that the VBUS pin is considered in a high logical state (controller on) when the following relationships are satisfied:

Vbus > (VDD - 0.6 V)   (2)
Vbus < (VDD + 2.5 V)   (3)

With VDD = 3.3 V, the high-state threshold can be calculated as:

Vth = VDD - 0.6 V = 3.3 V - 0.6 V = 2.7 V .   (4)

The board is also powered by an external voltage of 5 V. The controller is disabled by a Schottky diode (BAT760-7) located between the USB connector and the external 5 V pin. This diode avoids turning on the USB-to-UART bridge when an external 5 V supply is used; furthermore, it protects computers or other devices connected via USB from unexpected reverse currents. Analysing the Schottky diode datasheet, it is evident that an increase in temperature produces an increase in the reverse current of the diode. For example, at 75 °C with a reverse voltage of 5 V, the diode exhibits a reverse current of about 100 µA.
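The enable condition of equations (2)-(4) can be checked numerically; a short sketch (variable names chosen here for illustration):

```python
def cp2102n_vbus_high(v_bus, v_dd=3.3):
    """The VBUS pin reads as logic high (controller on) when
    (v_dd - 0.6) < v_bus < (v_dd + 2.5), i.e. equations (2) and (3)."""
    return (v_dd - 0.6) < v_bus < (v_dd + 2.5)

v_th = 3.3 - 0.6  # high-state threshold of equation (4): 2.7 V
print(v_th)
print(cp2102n_vbus_high(3.41))  # divider output of equation (1) -> enabled
```

Since the nominal divider output of 3.41 V already sits above the 2.7 V threshold, any leakage path that pulls the pin that high will switch the controller on even with no USB device attached.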
Since the USB connector is an open circuit, the reverse current of the Schottky diode generates a voltage drop given by:

Vbus = 4.75 V >> Vth .   (5)

Therefore, this reverse current, evaluated at 75 °C, is enough to enable the USB-to-UART controller. Furthermore, the reverse current could be dangerous at higher temperatures, because it could generate activation voltages higher than the maximum limit, leading to possible damage to the converter. Consequently, the higher the temperature, the higher the diode reverse current and the higher the activation voltage of the CP2102N controller. If the temperature is high enough to produce a reverse current that generates Vbus > Vth, the USB-to-UART controller turns on, absorbing 4.5 mA and generating the current step shown in the previous figures. There are three possible corrective actions to remove this problem while guaranteeing proper functionality when a USB device is connected:
• replace the Schottky diode with another model able to guarantee a lower reverse current;
• modify the divider, for example by maintaining the ratio between the resistances while decreasing the resistance values;
• design a new interface board removing the USB interface and introducing an external serial interface used only during programming.

The previous considerations also explain why the current step-ups occurred at different temperatures in the two boards. Indeed, by measuring the outputs of the two LDOs, it is possible to verify a slight difference in the output voltage, which leads to a different voltage threshold (4) and, consequently, a different reverse current needed to activate the USB-to-UART controller.

4.2. Test T.2: main results and proposed improvements

This test procedure was used to characterise the performance of the internal ADC embedded within the ESP32 microcontroller (referred to as the internal ADC in the following).
To this purpose, a reference signal Vin = 1.5 V was provided, using a signal generator, as input to the internal ADC. The same signal was also used as input to an additional analog-to-digital converter (an ADS1115 by Texas Instruments), called the external ADC in the following. The external ADC was located on the interface board. It converts the same signal acquired by the internal ADC and then transfers the digital data to the microcontroller, which stores them in an E2PROM memory. The ADCs acquire 512 samples every two minutes on command of the ESP32 microcontroller. In this way, a large amount of data can be acquired at every considered temperature step. The mean value and standard deviation of these samples are then calculated to compare the performance of the two ADCs.

Figure 9 shows a comparison between the mean values of the 512 acquisitions during each active phase for the two ADCs. The blue circle markers stand for the mean value of the internal ADC acquisitions, while the blue star markers represent the mean value of the external ADC acquisitions. The right y-axis (red axis) depicts the temperature of the climatic chamber, acquired using a K-type thermocouple and a datalogger during the test.

Figure 8. Detail of the current consumption during the sleep phases of the two boards (blue and green lines) on the left y-axis; the right y-axis shows the temperature variation of the chamber during the test (red line).

Figure 9. Comparison of the performance of the internal (blue circle markers) and external (blue star markers) ADCs at each temperature step (red line). Each marker represents the mean value of the 512 acquisitions at the considered active phase.

In the initial phase of the test (room temperature), both ADCs show an offset with respect to the true input value provided by the signal generator. More in detail, the internal ADC is characterised by a positive offset of +40 mV, which suddenly
ime mm de e a n a lu e o f d a u is i o n e m p e ra tu re nternal d ternal d emperature acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 42 increase when temperature is lowered. quite the opposite, the external adc has a negative offset of -2 mv. temperature has a strong influence on the mean value acquired by the internal adc (proven by a non-constant trend of the circle markers illustrated in figure 9). on the other hand, the external adc highlights a remarkable temperature stability with a semiconstant trend throughout the temperature range. table 1 compares the standard deviations of the adcs under test at some significant temperatures. the standard deviation of the internal adc is quite high and considerably influenced by the temperature of the chamber, while the external adc highlight better performances in terms of data dispersion. the introduction of the ads1115 analog-to-digital converter provides better performances in terms of offset at room temperature, thermal stability and data dispersion. for these reasons, it is recommended to integrate this chip instead of the internal adc embedded in the esp32 system-on-a-chip microcontroller. 4.3. test t.3. main results and proposed improvements this test procedure was used to characterise the performances of two different types of lithium batteries under the actual operative temperature in case of agriculture applications. the battery a is a linicomno2 type (inr18650-35e), while the battery b is a lifepo4 (htcfr26650). the battery a is characterised by high specific energy and an operating voltage range between 3 v and 4 v. instead, the battery b guarantees a constant voltage output to supply the microcontroller, but it is characterised by a low-density charge. moreover, according to the datasheet, the lifepo4 batteries guarantee good performance in a larger temperature range. during the thermal test an active load was used to discharge the batteries ensuring a constant discharge current of 300 ma. 
This value was chosen because it is the average current consumption of the whole system during the data transmission phase. Figure 10 shows the experimental results achieved during the thermal cyclic test of battery A (LiNiCoMnO2 type), using a blue line to represent the battery voltage during the discharge. The data were compared with a reference discharge voltage (red trend) measured at a constant temperature of 20 °C. As expected, the reference discharge at room temperature exhibits a linear trend characterised by a negative slope (called ΔV in the following) strictly related to the constant discharge current forced by the active load. Instead, the blue trend in Figure 10, achieved during the thermal cyclic test, shows some deviations from the nominal trend during the cold phase of the cyclic test. Specifically, the "V-shape" trend highlighted by the blue line in Figure 10 corresponds to temperatures lower than 0 °C. More in detail, when the temperature drops below 0 °C, the discharge rate of the battery suddenly increases, producing a remarkable decrease of the negative slope ΔV. Then, when the temperature starts to increase, the figure shows a counterintuitive behaviour: the battery voltage increases even though the battery is still in the discharge phase. This increase of the discharge rate at very low temperature could become significant in case of long exposure to very cold temperatures. For this reason, the fixture of the solar panel that charges the batteries must be oriented with a proper slope in order to optimise the charge during winter. Figure 11 shows the experimental results achieved during the thermal cyclic test of battery B (LiFePO4 type), using a blue line to represent the battery voltage during the discharge. Also in this case, the data were compared with a reference discharge voltage (red trend) measured at a constant temperature of 20 °C.
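The slope ΔV of such a linear reference discharge can be extracted from the logged voltage samples with an ordinary least-squares fit. The sketch below uses synthetic, purely illustrative values (4.0 V falling by 1 mV per minute), not measured data:

```python
def fit_slope(t, v):
    """Ordinary least-squares slope of v against t (here: volts per minute)."""
    n = len(t)
    mt, mv = sum(t) / n, sum(v) / n
    num = sum((ti - mt) * (vi - mv) for ti, vi in zip(t, v))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

# Synthetic reference discharge at constant current: 4.0 V at t = 0,
# falling linearly by 1 mV per minute (illustrative values only).
t_min = list(range(0, 600, 10))
v_batt = [4.0 - 0.001 * t for t in t_min]
delta_v = fit_slope(t_min, v_batt)  # the negative slope the text calls ΔV
```

Comparing the ΔV fitted on the thermal-cycle data against the room-temperature reference would quantify the increased discharge rate below 0 °C.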
As expected, the LiFePO4 battery shows a constant voltage during the discharge phase. Despite this, during the thermal cyclic test the LiFePO4 battery also shows a "V-shape" trend. For these reasons, considering the higher specific energy of the LiNiCoMnO2 battery, the latter should be used in the sensor node.

Figure 10. Battery discharge test for the LiNiCoMnO2 battery: trend achieved at a constant temperature of 20 °C (red line) and trend achieved during thermal cyclic test T.3 (blue line).

Figure 11. Battery discharge test for the LiFePO4 battery: trend achieved at a constant temperature of 20 °C (red line) and trend achieved during thermal cyclic test T.3 (blue line).

Table 1. Comparison of the standard deviations for the internal and external ADCs at each temperature.
Temperature   Internal ADC   External ADC
−10 °C        4 mV           0.5 mV
10 °C         4 mV           0.6 mV
30 °C         8 mV           0.8 mV
50 °C         11 mV          1 mV

5. Conclusions
The paper deals with the design optimisation of a sensor node, used in a wireless mesh network, under temperature stress. Since there is no specific standard for this kind of system, a customised test plan based on three temperature-based stress tests was developed in this work. Moreover, an automatic measurement setup was designed and implemented to monitor the performance of the system during the tests. The aim of the first temperature test (test T.1) was to observe the effects of high temperature on the hardware and firmware, looking for any anomalies with respect to correct functioning. One of the main unexpected findings is an increase of the current consumption during the sleep phase when the temperature exceeds a certain threshold. In particular, a 4.5 mA step was verified above a specific temperature.
This step is not due to a permanent failure, because during the cooling phase the current returns to its normal value at approximately the same temperature. This unexpected behaviour can nevertheless increase the power consumption of the sensor node, and a solution must be considered. The second temperature test (test T.2) aimed at verifying the performance of the analog-to-digital converter (ADC) and the digital-to-analog converter (DAC) embedded in the ESP32 microcontroller. The DACs did not show any particular problems, while the ADC embedded in the ESP32 shows three main drawbacks: a significant offset at room temperature, thermal instability and a remarkable data dispersion. For these reasons, it is recommended to use an external ADS1115 analog-to-digital converter, which provided better performance during the test. Finally, test T.3 characterised the behaviour of two different types of batteries under thermal stress, focusing on the discharge rate at cold temperatures. The test highlights the importance of a proper solar panel orientation to optimise the battery charge during winter.
Omnidirectional camera pose estimation and projective texture mapping for photorealistic 3D virtual reality experiences

Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-8

Alessandro Luchetti1, Matteo Zanetti1, Denis Kalkofen2, Mariolino De Cecco1
1 Department of Industrial Engineering, University of Trento, Sommarive 9, 38123 Trento, Italy
2 Institute of Computer Graphics and Vision, Graz University of Technology, Rechbauerstraße 12, 8010 Graz, Austria

Section: Research paper
Keywords: omnidirectional cameras; mesh reconstruction; camera pose estimation; optimization; enhanced comprehension
Citation: Alessandro Luchetti, Matteo Zanetti, Denis Kalkofen, Mariolino De Cecco, Omnidirectional camera pose estimation and projective texture mapping for photorealistic 3D virtual reality experiences, Acta IMEKO, vol. 11, no.
2, article 24, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-24
Section Editor: Alfredo Cigada, Politecnico di Milano, Italy
Received May 26, 2021; in final form March 21, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was developed inside the European project MiReBooks: Mixed Reality Handbooks for Mining Education, a project funded by EIT Raw Materials.
Corresponding author: Alessandro Luchetti, e-mail: alessandro.luchetti@unitn.it

1. Introduction
Media acquired by 360° cameras (also known as omnidirectional, spherical, or panoramic cameras) is becoming increasingly important to many applications. Compared to conventional cameras, images taken by 360° cameras offer a larger field of view, which is why they are traditionally useful in applications that derive their state from information about the environment. Examples include robot localization, navigation, and visual servoing [1]. However, omnidirectional cameras have recently also become an essential tool for content creation in virtual reality (VR) applications, because spherical photographs and videos can provide a high level of realism. For example, applications for real estate agents already make use of omnidirectional image and video data within a VR head-mounted display to improve the realism of virtual customer inspections, and research domains span widely from 360° tourism [2] to education in 360° classrooms [3]. VR applications using omnidirectional media allow their users to change the view within the boundaries of a 360° image that has been captured at a specific point of interest (POI). Thus, VR users are commonly restricted to head rotations only, while translations require transitioning to a 360° image that has been captured at a different POI [4].
Thus, motion parallax is missing in VR applications that use omnidirectional data. Furthermore, view transitions are limited to where omnidirectional images or videos exist. These shortcomings limit the benefit of omnidirectional media in VR. For example, the missing 3D information restricts the usage of advanced exploration techniques [5], [6], and the missing motion parallax can cause visual discomfort [7].

Abstract: Modern applications in virtual reality require a high level of fruition of the environment as if it was real. In applications that have to deal with real scenarios, it is important to acquire both the three-dimensional (3D) structure of the environment and its details to enable the users to achieve good immersive experiences. The purpose of this paper is to illustrate a method to obtain a mesh with a high-quality texture by combining a raw 3D mesh model of the environment and 360° images. The main outcome is a mesh with a high level of photorealistic detail. This enables both good depth perception, thanks to the mesh model, and high visualization quality, thanks to the 2D resolution of modern omnidirectional cameras. The fundamental step to reach this goal is the correct alignment between the 360° camera and the 3D mesh model. For this reason, we propose a method that embodies two steps: 1) find the 360° camera pose within the current 3D environment; 2) project the high-quality 360° image on top of the mesh. After the method description, we outline its validation in two virtual reality scenarios, a mine and a city environment, respectively, which allows us to compare the achieved results with the ground truth.

To overcome these limitations, we propose combining omnidirectional photorealistic image data with its corresponding 3D representation. Since 3D reconstructions commonly suffer from poor color representations, we apply projective texture mapping of omnidirectional images.
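At the heart of such a projective texture mapping is the spherical projection that assigns each 3D point a pixel of the equirectangular image. The sketch below is a minimal illustration, assuming a z-up world frame and a camera with zero roll and pitch (the full method also estimates all three Euler angles); the function name and conventions are ours, not the paper's:

```python
import math

def equirect_uv(point, cam_pos, yaw=0.0):
    """Map a 3D world point to normalized (u, v) coordinates of an
    equirectangular image taken at cam_pos.

    u spans longitude in [0, 1); v spans latitude in [0, 1], with v = 0 at
    the +z pole.  yaw rotates the camera about the vertical axis.
    Sketch only: camera roll and pitch are assumed to be zero, and the
    point must not coincide with the camera position.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    lon = math.atan2(dy, dx) - yaw                 # longitude
    r = math.sqrt(dx * dx + dy * dy + dz * dz)     # distance to the point
    lat = math.asin(dz / r)                        # latitude
    u = (lon / (2 * math.pi)) % 1.0
    v = 0.5 - lat / math.pi
    return u, v
```

Looking up the image at (u, v) for every mesh vertex (or fragment) transfers the high-resolution 360° colors onto the 3D model.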
Our approach supports photorealistic image fidelity at the POIs and motion parallax at viewpoints nearby. To enable projective texture mapping of 360° image data, we present an approach for omnidirectional camera pose estimation that automatically finds the position and orientation of the 360° camera relative to the 3D representation of the environment. To put our work in context, we first outline related work in Section 2, before describing our approaches to omnidirectional camera pose estimation and projective texture mapping in Section 3. We evaluate our system in Section 4 and discuss possible directions for future work in Section 5.

2. Related work
Camera pose detection has always been a key problem in computer vision. For example, Makadia et al. [8] proposed a method, useful for the alignment of large rotations with potential impact on 3D shape alignment, that estimates the rotation directly from images defined on the sphere and without correspondences. Unfortunately, this approach is robust only to small translations of the camera [9]. Another work [10] addresses the problem of camera pose recovery from spherical panoramas using pairwise essential matrices. In this case, the exact position of each panorama was an important step to ensure the consistency of the visual information in a database of georeferenced images. Here, pose recovery works with a two-stage algorithm that solves first for rotations and then for translations, with poor results if the starting camera pose is very far from the correct one. The above-mentioned problems are overcome by our method, because it works also for large variations in translation as well as rotation. Levin et al. [11] also present a method to compute camera pose from a sequence of spherical images through the use of an essential matrix for the initial pairwise geometry. Differently from our work and the work of [10], they also use a rough estimate of the camera path as an additional system input to calculate the camera positions.
An example of generating a texture map of a 3D model from 2D high-quality images is given in [12]; in particular, it is a specific application to the e-commerce presentation of shoes. It consists of a texture mapping technique that comprises several phases: mesh partitioning, mesh parameterization and packing, texture transferring, and texture correction and optimization. In the texture transferring step, each mesh is allocated to a front image, all meshes that use the same front image are put in a group, and finally the pixels corresponding to the 3D mesh are extracted from the front image. Differently, our method uses only a spherical image to recreate the high-resolution 3D model, by projecting each pixel of the image from the camera pose found beforehand. The results are obtained faster and remain good as long as the user's field of view rotates without large displacements with respect to the camera pose. A similar approach, but for an application related to surveying tasks in architectural, archaeological, and cultural landscape conservation, is provided by Abmayr et al. [13]. They developed a laser scanner, which offers high-accuracy measurements of object surfaces, combined with a panoramic color camera to achieve precise and accurate monitoring of the actual environment by means of colored point clouds. The camera rotates on the same tripod as the laser scanner. There are many similarities with the method described in the present article; the main differences reside in the use of a single 360° camera instead of a rotating unit, and in the use of an automatic pose estimation method instead of sharing the same tripod between laser scanner and camera during the acquisition process. Our method is faster, and the 3D model reconstruction can be more complete, because the model does not need to be acquired at a fixed distance from the camera during the scanning process.
this aspect becomes more important if it is necessary to reconstruct a high-resolution model with different cameras from unknown positions. finally, an interesting study was provided by teo et al. [14], where, in the context of remote collaboration, helpers shared 360° live videos or 3d virtual reconstructions of their surroundings from different places to work together with local workers. the results showed that participants preferred having both 360° and 3d modes, as this provides variation in controls and features from different perspectives. our work provides a combination of a 360° live video and a 3d virtual reconstruction to combine their advantages without the need to switch between them.

3. method

in this section, the localization algorithm to estimate the camera pose (i.e., its position and orientation in the environment) and the method used to project the texture mapping onto a 3d representation of the environment are explained.

3.1. camera pose estimation

a good alignment between the virtual environment and the captured image is fundamental for the final texture projection that will be covered in the next section. in practice, an operator needs to place the camera in a predefined position and orientation; some human errors may be made during this operation, so a method to find an accurate camera pose is necessary. moreover, for large distances, even small angle or position errors can compromise the final result. the large-scale automatic camera pose identification algorithm has been implemented in matlab 2019b using the zmq communication protocol [15] between matlab and unity 3d. a particle swarm optimization (pso) was used. the procedure of the camera pose estimation is shown in figure 1.
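a generic pso loop of the kind used here can be sketched in numpy as follows; the swarm size, iteration count, and inertia/acceleration coefficients are illustrative assumptions, not the paper's settings, and a toy quadratic cost stands in for the image-similarity score described below:

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=40, n_iters=200, seed=0):
    # standard particle swarm optimization over a 6-dof pose vector
    # (x, y, z translations and three euler angles): each particle
    # keeps a personal best; the swarm shares a global best
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights (assumed)
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)  # respect the search limits
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy cost standing in for the image-similarity score of section 3.1;
# the bounds mirror the ± 20 m / ± 80° search limits used in section 4
target = np.array([2.0, -1.0, 0.5, 10.0, -5.0, 3.0])
bounds = (np.array([-20.0, -20, -20, -80, -80, -80]),
          np.array([20.0, 20, 20, 80, 80, 80]))
best, score = pso_minimize(lambda p: np.sum((p - target) ** 2), bounds)
```

in the actual system the cost callback renders an equirectangular image in unity at the candidate pose and compares it with the input photograph, which is what makes each evaluation expensive.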
starting from the reconstructed 3d model, with its low-quality texture but with depth information of the environment, and given as input a high-quality equirectangular photorealistic image taken by an omnidirectional camera, the localization algorithm finds the pose at which a 360° image taken with a simulated camera is as similar as possible to the input one.

figure 1. schematic diagram of the camera pose detection algorithm.

in particular:
i. a new camera pose is set for each iteration of the pso algorithm.
ii. the equirectangular image corresponding to the camera pose set at the previous step is acquired.
iii. the algorithm checks the similarity between the new image and the input one that has to be used as the new texture for the 3d mesh; the parameters to be optimized are the translation and the euler angles to be applied to the 3d model to generate an equirectangular image that matches the input one. the cost function for comparing the two equirectangular images uses the following quantities:
• the structural similarity (ssim) index of the equirectangular images;
• the mean-squared error (mse) between the two equirectangular images;
• ssim of the approximation coefficients (ssima) of level 1 of the wavelet decomposition;
• ssim of the horizontal detail coefficients (ssimh) of level 1 of the wavelet decomposition;
• ssim of the vertical detail coefficients (ssimv) of level 1 of the wavelet decomposition;
• ssim of the diagonal detail coefficients (ssimd) of level 1 of the wavelet decomposition.
the final cost function c, obtained by adding the above-mentioned quantities, is:

$C = SSIM + MSE + SSIM_A + SSIM_H + SSIM_V + SSIM_D$ . (1)

the mse represents the cumulative squared error between two images x(m, n) and y(m, n):

$MSE(x, y) = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} [x(m, n) - y(m, n)]^2$ , (2)

where m and n are the numbers of rows and columns of x and y.
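the six terms of eq. (1) can be sketched in numpy as follows. this is a minimal sketch, not the paper's matlab implementation: ssim is computed globally with the simplified form that eq. (7) below reduces to, a plain level-1 haar transform stands in for matlab's wavelet decomposition, and the constants C1 = (0.01 L)² and C2 = (0.03 L)² for images in the range L = 1 are assumptions:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    # simplified ssim of eq. (7): means, variances and cross-covariance
    # computed over the whole image; C1, C2 are the usual small constants
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def mse(x, y):
    # cumulative squared error of eq. (2), averaged over the M*N pixels
    return float(np.mean((x - y) ** 2))

def haar_level1(x):
    # level-1 2d haar decomposition (stand-in for matlab's dwt2):
    # approximation plus horizontal, vertical, diagonal detail sub-bands;
    # assumes even image dimensions
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    cA = (a + b + c + d) / 2.0
    cH = (a + b - c - d) / 2.0
    cV = (a - b + c - d) / 2.0
    cD = (a - b - c + d) / 2.0
    return cA, cH, cV, cD

def cost(x, y):
    # eq. (1) as printed: C = SSIM + MSE + SSIM_A + SSIM_H + SSIM_V + SSIM_D
    terms = [ssim_global(x, y), mse(x, y)]
    for sx, sy in zip(haar_level1(x), haar_level1(y)):
        terms.append(ssim_global(sx, sy))
    return float(sum(terms))
```

for two identical images the mse term vanishes and each of the five ssim terms equals one, so the score is 5; note that the sketch follows eq. (1) exactly as written, summing similarity (ssim) and dissimilarity (mse) measures together.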
ssim is used for measuring the similarity between two images x and y [16]. the ssim quality assessment index is based on the computation of three terms, namely the luminance term l, the contrast term c and the structural term s. the overall index is a multiplicative combination of the three terms:

$SSIM(x, y) = [l(x, y)]^{\alpha} [c(x, y)]^{\beta} [s(x, y)]^{\gamma}$ , (3)

where:

$l(x, y) = \frac{2 \mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$ , (4)

$c(x, y) = \frac{2 \sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$ , (5)

$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$ . (6)

$\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images x and y. $C_1$, $C_2$, $C_3$ are constants that avoid instability for image regions where the local mean or standard deviation is close to zero. choosing $\alpha = \beta = \gamma = 1$ and $C_3 = C_2/2$, the index simplifies to:

$SSIM(x, y) = \frac{(2 \mu_x \mu_y + C_1)(2 \sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$ . (7)

iv. the pso optimization runs until convergence, giving as output the best camera pose (translation and euler angles) that makes the two images as similar as possible.

3.2. texture projection

in this section, the method to apply the high-quality texture mapping is described. essentially, a merge of the high-quality 360° image with the 3d mesh is performed. firstly, the 3d cartesian coordinates and the color of each pixel of the 360° image are obtained by projecting the equirectangular image onto the surface of a sphere of unitary radius. given an equirectangular image with n rows and m columns, each image pixel in 2d coordinates (n, m) is transformed into spherical coordinates by computing the corresponding azimuth a and elevation e, setting the radius r equal to 1. the equations used for the conversion are:

$a = -\left(\frac{m}{M} - 0.5\right) \cdot 2 \pi$ , (8)

$e = -\left(\frac{n}{N} - 0.5\right) \cdot \pi$ , (9)

$R = 1$ . (10)

finally, the 3d cartesian coordinates are obtained and can be visualized in matlab as a 3d point cloud.
the mapping from spherical coordinates to 3d cartesian coordinates is:

$x = R \cdot \cos(e) \cdot \cos(a)$ , (11)

$y = R \cdot \cos(e) \cdot \sin(a)$ , (12)

$z = R \cdot \sin(e)$ . (13)

this "spherical" point cloud was imported into unity and placed with the position and orientation found in the pose estimation step described in the previous section. the raycasting technique was used: through the ray class, it is possible to emit or "cast" rays in a 3d environment and control the resulting collisions. the rays used in raycasting are invisible lines that have the center of the image sphere as origin and are oriented in the direction of each pixel. the important point is that these rays, cast into the scene, can return information about the gameobjects they hit. attached to the environment's mesh as a gameobject in unity there is a mesh collider to register a hit with a ray. when a ray intersects or "hits" a gameobject, the event is referred to as a raycasthit. this hit provides details about the gameobject and where it was hit, including a reference to the gameobject's transform, the length of the ray when it hit something, and the point in the world where the hit happened. once the collision of each pixel is detected, its new position is saved together with its color properties. lastly, the new point cloud was used to reconstruct a high-quality photorealistic texture, using the screened poisson surface reconstruction algorithm [17] implemented in meshlab [18]. this algorithm is particularly useful when the model to reconstruct is very large, with very fine details to be preserved. the reconstruction of the 3d model was done by setting the reconstruction depth parameter (i.e., the maximum depth of the octree used for the reconstruction) to 13.
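the pixel-to-sphere mapping of eqs. (8)-(13) above can be sketched in a few lines of numpy, turning an n × m equirectangular grid into a unit-radius spherical point cloud (indices and dimensions follow the paper's notation):

```python
import numpy as np

def equirect_to_sphere(N, M, R=1.0):
    # pixel grid: n indexes the rows (0..N-1), m the columns (0..M-1)
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    a = -(m / M - 0.5) * 2.0 * np.pi   # azimuth, eq. (8)
    e = -(n / N - 0.5) * np.pi         # elevation, eq. (9)
    # spherical to cartesian, eqs. (11)-(13), with R = 1 per eq. (10)
    x = R * np.cos(e) * np.cos(a)
    y = R * np.cos(e) * np.sin(a)
    z = R * np.sin(e)
    return np.stack([x, y, z], axis=-1)  # shape (N, M, 3)

pts = equirect_to_sphere(128, 256)
```

each pixel's color can then be attached to the corresponding 3d point to form the "spherical" point cloud that is imported into unity for the raycasting step.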
the default value of this parameter in meshlab is 8; we increased it because, in general, the higher this value is, the more details are preserved, at the cost of a longer reconstruction time [17]. we did not increase it beyond 13 because above this value no real change in the final result can be seen. the minimum number of samples was set to 1.5 and the interpolation weight to 4, the default values of meshlab. since the poisson algorithm tends to "close" the reconstructed mesh, the triangles whose area was above a certain threshold were deleted to preserve the original form of the reconstructed environment.

4. evaluation

for the validation of the camera pose localization algorithm and of the high-quality texture mapping projection, wavefront 3d object files (obj file extension) of two 3d high-quality virtual outdoor environments, one for a mine and one for a city, were imported into the unity 3d platform. an original script was also written to simulate a 360° camera. the 360° capture technique is based on google's omni-directional stereo (ods) technology using cubemap rendering [19]. after the cubemap is generated, it is possible to convert it to an equirectangular map, which is a projection format used by 360° video players. after placing the simulated camera at a specific pose inside the scene of a specific scenario, a high-quality equirectangular image was acquired (figure 2). these are the input images whose poses have to be detected by the developed algorithm. to simulate the acquisition of the environment through a 3d scanner, a point cloud for each analyzed environment was extracted from the 3d high-quality models using the cloudcompare software [20]. these point clouds were downsampled to simulate a 3d model with less detail than the input model, and new reconstructions were performed in meshlab [18] to obtain new low-quality 3d models (figure 3). new scenes were then recreated in unity with the downsampled 3d models.
figure 4 shows the schematic diagram of our camera pose detection algorithm proposed in figure 1 applied to the specific example of the mine environment. the input omnidirectional image has a resolution of 4096 × 2048 pixels. however, to reduce the computation time, the comparison between images is done by downsampling them to 256 × 128 pixels for both the analyzed environments. the bounding box dimensions of the scenario with the mine are 113 m × 169 m × 37 m for the x, y, z coordinates, respectively. the dimensions of the city environment are instead 440 m × 100 m × 435 m. the same analysis was done for both environments using the same approach and shifting the camera pose by the same values. table 1 shows the position and orientation for 10 random trials. the initial starting position was set to the origin (0, 0, 0) with null rotations for each trial. the search limits were set to ± 20 m for translations and ± 80° for rotations. by default, unity applies the following rotation order: extrinsic rotation around the z-axis (γ), then around the x-axis (α), and finally around the y-axis (β). the average time spent by the pso algorithm is around 20 minutes. the tests were run on a pc with an intel i7-9700kf processor and 64 gb of ram.

figure 2. high-quality equirectangular images whose poses must be identified for the mine (a) and city (b) environments.

figure 3. the 3d downsampled models used by the localization algorithm for the mine (a) and city (b) environments.

for each of the 10 trials of table 1, the pso algorithm has been run changing 5 times the number of generations, i.e., 200, 250, 300, 350, 400, keeping the number of particles fixed at 100, and 5 times changing the number of particles, i.e., 60, 70, 80, 90, 100, keeping the number of generations fixed at 400. the numbers of generations and particles were changed to force the algorithm to increase variability.
to compute the error in pose detection, we decided to separate the translation and the rotation parts. the translation error is computed as the euclidean distance between the position of the camera found by the pso algorithm and the ground truth. concerning the rotations, firstly the rotation found by the optimization process and the ground truth were decomposed into axis-angle notation. consequently, the rotation error has 2 terms: the error in the axis orientation with respect to the ground truth and the difference in the amount of rotation around such axis. figure 5 shows the cost function score for the various error components explained above, while figure 6 shows the three possible pair combinations of the error components with respect to the final optimization score. as can be noticed, a higher score of the cost function at the end of the optimization does not always mean that an incorrect pose was found. this fact is probably due to the mesh reconstruction process: after this process, there could be portions of the environment that are less accurate compared to the real model. for this reason, the meaning of the final score values is not absolute or easily comparable across different camera poses. this generates the need to quantify the accuracy of the camera localization measurement within a scene. despite the uncertainty concerning the accuracy of the pose found by the algorithm with respect to the final cost function score, figure 5 and figure 6 show that, for the mine environment, a score below 1.6 means that, for the trials performed, the error in translation is below 0.7 m, the difference in the amount of rotation is below 1°, and the difference in the rotation axis orientation is below 2°. for the city environment, the same error levels correspond to a cost function score of 2. the score is higher because the city environment is a scenario with much more detail than the mine.
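the error decomposition just described can be sketched in numpy as follows; this is a minimal sketch assuming rotations away from the identity, where the rotation axis extracted from the skew-symmetric part of the matrix is well defined:

```python
import numpy as np

def axis_angle(Rm):
    # decompose a rotation matrix into its rotation axis and angle
    angle = np.arccos(np.clip((np.trace(Rm) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([Rm[2, 1] - Rm[1, 2],
                     Rm[0, 2] - Rm[2, 0],
                     Rm[1, 0] - Rm[0, 1]])
    return axis / np.linalg.norm(axis), angle

def pose_errors(t_est, R_est, t_gt, R_gt):
    # translation error: euclidean distance between camera positions
    t_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    ax_e, ang_e = axis_angle(R_est)
    ax_g, ang_g = axis_angle(R_gt)
    # angle between the two rotation axes, and difference in the
    # amount of rotation around them (both in degrees)
    ax_err = np.degrees(np.arccos(np.clip(ax_e @ ax_g, -1.0, 1.0)))
    ang_err = np.degrees(abs(ang_e - ang_g))
    return t_err, ax_err, ang_err

def rot_z(deg):
    # helper: rotation about the z-axis, used here only for illustration
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

for example, comparing a 30° and a 32° rotation about z with a 0.5 m position offset yields a translation error of 0.5 m, an axis orientation error of 0° and a rotation amount error of 2°.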
many of these details are lost through the initial downsampling, and the initial reconstructed mesh is much less detailed, as can be seen in figure 3b. the final score, which measures the similarity between the input high-quality equirectangular image and the one obtained from this low-quality model, therefore turns out to be higher. however, the errors, especially those related to rotations (figure 5b and figure 5c), are lower for the city environment even at high values of the cost function score, because the environment is more diverse.

table 1. camera poses chosen for 10 trials (ground truth).

trial    x (m)    y (m)    z (m)    α (°)    β (°)    γ (°)
1        -4.00    10.00    15.00    10.00    15.00    18.00
2         5.00    -2.00     5.00    10.00   -60.00     1.00
3        -8.00     5.00    -6.00    30.00    45.00    15.00
4         2.00    -7.00    15.00   -10.00   -45.00   -20.00
5        10.00    10.00    10.00    20.00   -15.00     5.00
6         0.00    15.00     8.00    25.00   -15.00     5.00
7        -5.00     2.00    -5.00   -10.00    60.00    -1.00
8        -1.00    -2.00    -3.00    -4.00    -5.00    -6.00
9       -15.00    10.00    10.00    40.00    70.00    40.00
10      -19.00    19.00   -19.00     2.00    80.00    -5.00

figure 4. example of the camera pose detection algorithm flow for the mine environment.

figure 5. 2d plots of the cost function score vs the errors in translation (a), axis orientation (b), and rotation angle (c).

because the cost function threshold depends on the level of detail of the reconstructed 3d model, further analysis is needed to investigate possible acceptance criteria and multidimensional models capable of finding a correlation between the different terms of the cost function and the uncertainty in translation and rotation. for example, figure 7 shows that mse could be a possible discriminant factor for accuracy.
indeed, in this case, the accurate solutions are all centered around the value 0.005 for both examined environments. once the camera pose was found for each environment, this information was used to place the 360° image, projected on the surface of a unitary radius sphere, in the correct position and orientation (figure 8a). after that, using the raycasting technique, the 3d mesh (figure 8b) is hit by the 360° image pixels (figure 8c).

figure 6. 3d plots of the cost function score and the errors in translation, rotation angle, and axis orientation: (a) score vs translation error vs rotation angle error; (b) score vs axis orientation error vs rotation angle error; (c) score vs translation error vs axis orientation error.

figure 7. mse score vs translation error.

figure 8. the pixels of the 360° image of the mine environment are projected on a sphere surface in matlab (a), which is put in the correct camera pose found by our algorithm inside the raw 3d mesh in unity (b). the pixels are then projected using the raycasting technique on the raw mesh, obtaining a new dense point cloud (c).

5. conclusions and future work

in this paper, we presented an approach for combining photorealistic imagery with 3d environment representations using a 360° high-quality image and a low-quality 3d model of the environment. at the core of our system, we have developed an approach for automatic large-scale 360° camera pose estimation within a 3d environment and a method for projective texture mapping of spherical images.
contrary to the previous work outlined in the related work section, the camera pose estimator developed in this paper works both for significant differences in rotation and displacement, and it works without the need to start from a known point of view. the positions and orientations of the camera were estimated with a translation error below 0.7 m, and with errors below 1° and 2° for the difference in the amount of rotation and the difference in the rotation axis orientation, respectively. these results were obtained for both analyzed environments at full size, with search limits of ± 20 m for translations and ± 80° for rotations, using an mse of 0.005 as a possible discriminant factor for accuracy. while this work was validated using a 360° camera simulation in virtual scenes, we plan to test its capability on real scenes as well. in such situations, the light conditions could be very different between the model and the equirectangular image, which is why the luminance has to be carefully considered. furthermore, the approach presented here is valid as long as the view of the user rotates without large displacements from the camera's initial position, because not all the mesh areas are covered after the pixel projection. to overcome this problem, the same method presented in this paper can be applied with more than one camera; however, for the final reconstruction of the texture, there is no discriminating parameter that allows us to choose which pixels to use from one camera or another. such a choice can be useful when the field of view of one camera covers some mesh areas better than another, and it can be implemented in future work to obtain a better result. finally, in the camera pose optimization process, a further study can be done to find a correlation between the different terms of the cost function and the uncertainty in translation and rotation by investigating other possible acceptance criteria through a multidimensional analysis.
references

[1] r. benosman, s. kang, o. faugeras, panoramic vision, springer verlag, 2000, isbn 978-0387951119. [2] j. hakulinen, t. keskinen, v. mäkelä, s. saarinen, m. turunen, omnidirectional video in museums – authentic, immersive and entertaining, in international conference on advances in computer entertainment, springer, 2017, pp. 567–587. doi: 10.1007/978-3-319-76270-8_39 [3] d. kalkofen, s. mori, t. ladinig, l. daling, a. abdelrazeq, m. ebner, m. ortega, s. feiel, s. gabl, t. shepel, j. tibbett, t. h. laine, m. hitch, c. drebenstedt, p. moser, tools for teaching mining students in virtual reality based on 360° video experiences, conference on virtual reality and 3d user interfaces abstracts and workshops (vrw), ieee, atlanta, ga, usa, 2020, pp. 455-459. doi: 10.1109/vrw50115.2020.00096 [4] a. macquarrie, a. steed, the effect of transition type in multiview 360 media, ieee transactions on visualization and computer graphics 24(4) (2018), pp. 1564-1573. doi: 10.1109/tvcg.2018.2793561 [5] m. tatzgern, r. grasset, d. kalkofen, d. schmalstieg, transitional augmented reality navigation for live captured scenes, virtual reality (vr), ieee, 2014, pp. 21-26. doi: 10.1109/vr.2014.6802045 [6] m. tatzgern, r. grasset, e. veas, d. kalkofen, h. seichter, d. schmalstieg, exploring real world points of interest: design and evaluation of object-centric exploration techniques for augmented reality, pervasive and mobile computing 18 (2015), pp. 55-70. doi: 10.1016/j.pmcj.2014.08.010 [7] j. thatte, b. girod, towards perceptual evaluation of six degrees of freedom virtual reality rendering from stacked omnistereo representation, electronic imaging, 2018. doi: 10.2352/issn.2470-1173.2018.05.pmii-352 [8] a. makadia, k. daniilidis, rotation recovery from spherical images without correspondences, ieee transactions on pattern analysis and machine intelligence 28(7) (2006), pp. 1170–1175. doi: 10.1109/tpami.2006.150 [9] a. makadia, k.
daniilidis, direct 3d-rotation estimation from spherical images via a generalized shift theorem, ieee computer society conference on computer vision and pattern recognition, vol. 2, madison, wi, usa, 2003, pp. ii–217. doi: 10.1109/cvpr.2003.1211473 [10] r. laganiere, f. kangni, orientation and pose estimation of panoramic imagery, machine graphics and vision 19(3) (2010), pp. 339–363. [11] a. levin, r. szeliski, visual odometry and map correlation, in proceedings of the ieee computer society conference on computer vision and pattern recognition 1 (2004), washington, dc, usa. doi: 10.1109/cvpr.2004.1315088 [12] j.-y. lai, t.-c. wu, w. phothong, d. w. wang, c.-y. liao, j.-y. lee, a high-resolution texture mapping technique for 3d textured model, applied sciences, vol. 8, no. 11, 2018, p. 2228. doi: 10.3390/app8112228 [13] t. abmayr, f. härtl, m. mettenleiter, i. heinz, a. hildebrand, b. neumann, c. fröhlich, realistic 3d reconstruction – combining laser scan data with rgb color information, proceedings of isprs international archives of photogrammetry, remote sensing and spatial information sciences 35 (2004), pp. 198–203. [14] t. teo, l. lawrence, g. a. lee, m. billinghurst, m. adcock, mixed reality remote collaboration combining 360 video and 3d reconstruction, in proceedings of the 2019 chi conference on human factors in computing systems, 2019, pp. 1–14. doi: 10.1145/3290605.3300431

figure 9. final results after the 3d reconstruction for the mine (a) and the city (b) environments.
[15] p. hintjens, zeromq: messaging for many applications, o'reilly media, inc., 2013, isbn: 9781449334062. [16] z. wang, a. c. bovik, h. r. sheikh, e. p. simoncelli, image quality assessment: from error visibility to structural similarity, ieee transactions on image processing 13(4) (2004), pp. 600–612. doi: 10.1109/tip.2003.819861 [17] m. kazhdan, h. hoppe, screened poisson surface reconstruction, acm transactions on graphics (tog) 32(3) (2013), pp. 1–13. doi: 10.1145/2487228.2487237 [18] p. cignoni, m. callieri, m. corsini, m. dellepiane, f. ganovelli, g. ranzuglia, meshlab: an open-source mesh processing tool, in eurographics italian chapter conference, salerno, 2008, pp. 129–136. doi: 10.2312/localchapterevents/italchap/italianchapconf2008/129-136 [19] google inc., rendering omni-directional stereo content. online [accessed 21 march 2022] https://developers.google.com/vr/jump/rendering-ods-content.pdf [20] d. girardeau-montaut, cloudcompare, 2016.
online [accessed 21 march 2022] https://www.danielgm.net/cc

mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks

acta imeko issn: 2221-870x march 2022, volume 11, number 1

mahesh kumar nanjundaswamy1, ane ashok babu2, sathish shet3, nithya selvaraj4, jamal kovelakuntla5
1 department of electronics and communication engineering, dayananda sagar college of engineering, bengaluru, karnataka-560078, india
2 department of electronics and communication engineering, pvp siddhartha institute of technology, vijayawada, andhra pradesh 520007, india
3 department of electronics and communication engineering, jss academy of technical education, bengaluru, karnataka-560060, india
4 department of electronics and communication engineering, k.
ramakrishnan college of technology, tiruchirappalli-621112, tamilnadu, india
5 department of electronics and communication engineering, gokaraju rangaraju institute of engineering and technology (griet), hyderabad, telangana-500090, india

section: research paper

keywords: cognitive radio network; cooperative spectrum sensing; energy statistic; machine learning model; spectrum sensing data falsification

citation: mahesh kumar nanjundaswamy, ane ashok babu, sathish shet, nithya selvaraj, jamal kovelakuntla, mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks, acta imeko, vol. 11, no. 1, article 21, march 2022, identifier: imeko-acta-11 (2022)-01-21

section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india

received november 20, 2021; in final form march 1, 2022; published march 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: mahesh kumar nanjundaswamy, e-mail: mkumar.n19@gmail.com

1. introduction

over the past years, the world has witnessed tremendous growth in the field of wireless communication technologies due to the popularity of telemedicine, smart homes, smartphones, autonomous vehicles, mobile television and smart cities. the increasing demand for wireless communications has brought the problem of spectrum scarcity. energy detection and measurement is a key task in spectrum sensing in cognitive radio networks. as a result, the development of hybrid machine learning and signal processing algorithms has become an intense research area both for measurement technology and for cognitive radio communications.
the federal communications commission

abstract

cognitive radio networks (crns) are used to solve the spectrum scarcity and low spectrum utilization problems in wireless communication systems. spectrum sensing is a vital process in crns, which needs continuous measurement of energy; it enables the sensors to sense the primary signal. cooperative spectrum sensing (css) has been recommended to sense the spectrum accurately and to enhance detection performance. however, a spectrum sensing data falsification (ssdf) attack launched by malicious users can lead to a wrong global decision on the availability of the spectrum. it is an extremely challenging task to alleviate the impact of an ssdf attack. over the years, numerous strategies have been proposed to mitigate ssdf attacks, ranging from statistical to machine learning models. energy measurement through statistical models is based on predefined criteria; on the other hand, machine learning models have shown low sensing performance. therefore, it is necessary to develop an efficient method to mitigate the negative impact of the ssdf attack. this paper proposes a multilayer perceptron (mlp) classifier to identify falsified data in css and thus prevent ssdf attacks. the statistical features of the received signals are measured and taken as feature vectors to be trained by the mlp. in this manner, the measurement of these statistical features using an mlp becomes a key task in cognitive radio networks. the trained network is employed to differentiate the signals of malicious users from those of honest users. the network is trained with the levenberg-marquardt algorithm and then employed to eliminate the effect of ssdf attacks. the simulated results reveal that the proposed model can efficiently reduce the impact of malicious users in a crn.
(fcc) [1], [2] reported that most of the allocated spectrum is rarely used by the primary users (pus) (fcc, 2002; fcc, 2003). in order to resolve the conflict between spectrum utilization and spectrum shortage, it has been recommended that opportunistic access to the licensed spectrum should be given to secondary users (sus). the cognitive radio network (crn) has been developed to solve the aforementioned issues by enabling dynamic spectrum access. it is a new paradigm that offers the potential to utilize the licensed spectrum in an opportunistic manner [3] (wan et al. 2019). a crn allows sus to sense and access free spectrum bands without interfering with pus. as a crn allows sus to use vacant spectrum, the spectrum scarcity problem can be solved successfully. sus need to monitor the spectrum continuously [4], [5] to sense the pu status. therefore, spectrum sensing becomes an important process for a crn. spectrum sensing is the process of identifying the status of the pu. an accurate detection of the spectrum can enhance the performance of a crn significantly [6]-[8] (ali et al., 2017). however, due to obstacles, shadowing and multipath fading, wrong detections can take place, resulting in an inefficient usage of the licensed spectrum. to deal with this issue, cooperative spectrum sensing (css) has been considered a satisfactory candidate for spectrum sensing [9] (sharifi et al., 2016). css combines the sensing signals of all cognitive users and makes the final decision. it limits the effects of noise, path loss, shadowing and fading that may occur in wireless communication. however, css is vulnerable to many security threats. among many attacks, spectrum sensing data falsification (ssdf) can severely affect the detection performance of a crn. in an ssdf attack, malicious users (mus) send falsified reports to the fusion center (fc) about the spectrum band.
an ssdf attack misleads the global decision by sending falsified reports about the spectrum availability, hence degrading the crn performance. therefore, it is essential to develop an efficient method to eliminate the impact of ssdf attacks. the core contributions of this research work are as follows. the main focus of this article is the design of an efficient model based on an artificial neural network to suppress the negative impact of mus in a crn and to enhance the detection performance by measuring various statistical parameters. in this scheme, a set of features is extracted from the received signals, and a large representative dataset is generated. a multilayer perceptron (mlp) [10], [11] is designed with one input layer, two hidden layers and one output layer. next, the obtained feature vectors are grouped into two sets, viz., training and testing. the training set is used for developing and training the mlp, while the testing set is employed to validate the efficacy of the proposed model. the performance of the proposed model is evaluated by measuring some commonly used metrics. spectrum sensing refers to the process of detecting the activity of pus in a licensed spectrum band. it plays a vital role in crns, and css [12]-[14] has been suggested to make accurate decisions about spectrum availability by exploiting spatial diversity via the observations of multiple users. but css has some limitations: an ssdf attack, in which false sensing reports are sent by mus during css, can severely affect the detection performance of the crn. to eliminate ssdf attacks, several methods have been reported in the literature. each method has its own characteristics, and none of them provides consistent results.
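the scheme outlined above can be illustrated with a small self-contained numpy sketch. everything here is an assumption for illustration: the two statistical features, their class-dependent operating points, the layer sizes, and the use of plain gradient descent in place of the levenberg-marquardt training used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic sensing reports: each row holds two statistical features
# (e.g. mean energy and its spread) of a user's report. honest users
# cluster around one operating point, ssdf attackers around a
# falsified one -- purely illustrative values
honest = rng.normal([1.0, 0.5], 0.05, size=(200, 2))
malicious = rng.normal([2.0, 1.5], 0.05, size=(200, 2))
X = np.vstack([honest, malicious])
y = np.r_[np.zeros(200), np.ones(200)][:, None]

# mlp with one input layer, two hidden layers and one output layer,
# matching the proposed topology; trained with gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 8)); b2 = np.zeros(8)
W3 = rng.normal(0, 0.5, (8, 1)); b3 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    p = sigmoid(h2 @ W3 + b3)
    d3 = (p - y) / len(X)                 # cross-entropy gradient
    d2 = (d3 @ W3.T) * (1 - h2 ** 2)      # backprop through tanh
    d1 = (d2 @ W2.T) * (1 - h1 ** 2)
    W3 -= 0.5 * h2.T @ d3; b3 -= 0.5 * d3.sum(0)
    W2 -= 0.5 * h1.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)

# classify every report as honest (0) or malicious (1)
pred = sigmoid(np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2) @ W3 + b3) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```

with such well-separated feature clusters the classifier separates honest from malicious reports almost perfectly; the interesting regime in practice is when attackers falsify reports only slightly, which is what motivates the richer feature set and training procedure studied in the paper.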
thus, the main focus of this article is the design of an efficient model based on an artificial neural network to suppress the negative impact of mus in a crn and to enhance detection performance. the rest of the article is organized as follows. section 2 presents an exhaustive review of former methods. section 3 describes the development of the proposed system model. section 4 provides the experimental outcomes. the paper ends with the concluding remarks in section 5. 2. related work several studies have shown that css is a good candidate for detecting pu activity in a crn. however, css is affected by many attacks, such as the primary user emulation attack and ssdf. among these, ssdf is the most dangerous attack in a crn, as it can reduce detection performance by sending falsified reports to the fc. over the past years, several methods have been proposed to resist ssdf attacks. wan et al. (2019) [15], [16] presented a method to mitigate the influence of ssdf attacks using linear weighted combinations. an adaptive reputation method is also presented to differentiate mus from sus. feng et al. (2018) [17] used exclusive-or (xor) distance analysis to eliminate the influence of ssdf in a crn. in this approach, the xor distance of the hypothesis-detection information is employed to calculate the similarity between two sus; based on the xor distance, mus are separated from sus. a soft-decision-based scheme to resist ssdf was developed by ahmadfard et al. (2017); the proposed method achieved better results than existing methods. in ahmed et al. (2014), the authors presented a method to combat the effect of mus in crns using bayesian strategies. the authors used statistical features of the received samples to sense the existence and inexistence of the pu. li et al.
(2014) investigated the potential of the fuzzy c-means algorithm in spectrum sensing. the proposed algorithm is capable of detecting the pu signal accurately, which in turn enhances detection performance. a robust algorithm [18] to defend against ssdf was proposed by althunibat et al. (2014). in this approach, specific weights are assigned to sensing nodes; results showed that the algorithm is capable of detecting mus. based on the mean and standard deviation of the received samples, mapunya and velempini (2018) developed an ssdf mitigation method for crns. results proved that the proposed scheme can reduce the false alarm probability. sharifi et al. (2018) presented a defence strategy against ssdf attacks: the mean value of the received samples is computed, and two parameters α and β are obtained and then used in a likelihood ratio test to enhance detection performance. li and peng (2016) used an unsupervised machine learning model to differentiate honest sus from mus. the proposed model utilizes past sensing reports as feature vectors to categorize users. nie et al. (2017) proposed a defence scheme based on a bayesian learning model, in which each user has a specific weight that reflects its trustworthiness. farmani et al. (2011) suggested the support vector data description method to detect pu activity. the proposed method differentiates honest sus from mus based on the energy statistic of the signal; however, it failed to decrease the false alarm probability. cheng et al. (2017) developed a self-organizing map to classify nodes into honest and malicious nodes. the proposed method uses the average suspicion degree to discriminate mus from honest users. amar taggu et al.
(2021) [19] proposed a two-layer framework to classify ssdf attackers. the first layer, the computational layer, employs the hidden markov model (hmm) to establish a probabilistic relationship between the pu's states and the sensing reports of the sus. this generates the set of data needed for the next layer. the second layer, the decision layer, employs several ml algorithms to categorise sus as byzantine attackers or normal sus. 3. proposed system model the fundamental focus of this research work is to develop a scheme for mitigating ssdf attacks in crns using machine learning models. a key property of machine learning is that the relation between input and output is learned via a training process, which makes the scheme more robust. the developed model is simulated, and its performance is compared with earlier methods to demonstrate its superiority. figure 1 depicts the structure of a cognitive radio network with one pu and 5 sus. the sus use the communication channel whenever the pu signal is absent. the sus perform local spectrum sensing (lss) to detect the absence or presence of the pu and report the results to the fc. next, the fc makes the final verdict on spectrum availability based on the information received from the respective users. in this context, because of the presence of mus, some secondary users will send falsified reports to the fc. let m mus be present among the sus. an mu can send either 'always yes' or 'always no' to the fc. 'always yes' represents high energy (1), which increases the probability of false alarm, since such users report the pu as active when it is actually inactive.
similarly, 'always no' corresponds to low energy (0), which decreases the probability of detection, because such users report the pu as absent even when it is present. both the high and low false reports of mus degrade the performance of the crn. to deal with this issue, an mlp is developed. spectrum sensing is mainly used for detecting the presence or absence of the pu, as shown in figure 1. each su receives a noisy signal when the pu is inactive, and the energy measured at time instant t by the pth su can be expressed by (1) as

$S_p(t) = \frac{1}{N_1} \sum_{n=0}^{N_1-1} |\eta_p(t,n)|^2$ . (1)

in (1), $S_p(t)$ denotes the energy of the noise received by the pth su at time t, and $N_1$ represents the number of samples considered. when the pu is active, (1) can be written in the form of (2) as

$S_p(t) = \frac{1}{N_1} \sum_{n=0}^{N_1-1} |H_p(t,n) \cdot S(t,n) + \eta_p(t,n)|^2$ , (2)

where $H_p(t,n)$ denotes the channel gain between the pu and the pth su, $S(t,n)$ denotes the pu signal and $\eta_p(t,n)$ denotes the additive gaussian noise with zero mean and given variance. several spectrum sensing methods are reported in the literature, such as energy detection, matched filtering and cyclo-stationary feature detection. among these, energy detection is a good candidate for local spectrum sensing because the lss then needs no prior information about the pu signal and its computational overhead is low. using the energy detection method, the received signal can be expressed as the binary hypothesis test between h0 and h1 given in (3) as

$r_p(t) = \begin{cases} \eta_p(t) & H_0 \\ H_p(t) \cdot S(t) + \eta_p(t) & H_1 \end{cases}$ (3)

here, $r_p(t)$ represents the sensed signal; $H_1$ and $H_0$ represent the presence and absence of the pu signal, respectively.
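the energy statistic and the hypothesis test of (1)-(3) can be illustrated numerically (the paper's simulations use matlab; the snippet below is an assumed numpy equivalent, with the constant channel gain and bpsk-like pu signal chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N1 = 1000                     # number of samples per sensing interval
noise_var = 1.0

def energy_statistic(r):
    # eqs. (1)-(2): S_p(t) = (1/N1) * sum_n |r(n)|^2
    return np.mean(np.abs(r) ** 2)

# h0: noise only (upper branch of eq. (3))
r_h0 = rng.normal(0.0, np.sqrt(noise_var), N1)

# h1: attenuated pu signal plus noise (lower branch of eq. (3));
# the constant channel gain and bpsk-like pu signal are illustrative assumptions
h_gain = 0.5
s = rng.choice([-1.0, 1.0], N1)
r_h1 = h_gain * s + rng.normal(0.0, np.sqrt(noise_var), N1)

s0 = energy_statistic(r_h0)   # expected near noise_var
s1 = energy_statistic(r_h1)   # expected near noise_var + h_gain**2
```

a threshold placed between the two expected energies then decides between $H_0$ and $H_1$, which is the basis of the energy detector discussed above.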
after local spectrum sensing, the decision of each su is represented as a binary value, 0 or 1, for the inexistence or existence of the pu signal, with the mathematical model given by

$SV_p(t) = \begin{cases} 0 & H_0 \\ 1 & H_1 \end{cases}$ , (4)

where $SV_p$ is the sensing value of the pth su; 0 and 1 indicate the inactive and active states of the pu signal, respectively. every secondary user reports its verdict to the central unit. then, the fc makes a final verdict on the spectrum based on all the data obtained from the sus. because of the presence of falsified users, some secondary users may send modified information to the fc, which ultimately affects the overall spectrum performance of the communication system. ssdf attacks of the 'always yes' and 'always no' types are considered. under such attacks, mus report contaminated data to the fc: an mu changes its local sensing report and falsifies the test outcome. for instance, an mu sends h0 while its local decision is h1, i.e., while the pu signal is present. let the qth ssdf attacker report low energy when its local decision is h1 with probability $P_{0,q}$, and report high energy to the fc when its local decision is h0 with probability $P_{1,q}$, with the mathematical model given by (5) and (6) as

$P_{d,q} = (1 - P_{0,q}) P_{d,q} + (1 - P_{d,q}) P_{1,q}$ (5)

$P_{f,q} = (1 - P_{0,q}) P_{f,q} + (1 - P_{f,q}) P_{1,q}$ . (6)

to mitigate ssdf attacks, features such as the energy statistic, autocorrelation, squared mean, standard deviation and maximum-minimum eigenvalue are computed and fed as inputs to the mlp.

figure 1. cognitive radio network model.

the energy statistic of the signal can be represented using (7) as

$E = \sum_{k=0}^{N_1-1} |r(k)|^2$ .
(7)

autocorrelation is a mathematical function that encodes the level of association between two observations procured from the same source, i.e., the correlation of a signal with itself. autocorrelation measures the similarity of a signal with a delayed version of itself. honest sus send the actual report to the fc, which varies depending on the existence or inexistence of the pu signal; mus, however, report either low or high energy repeatedly, so their autocorrelation value does not oscillate much. therefore, the autocorrelation of the signal is considered as one of the feature vectors. the autocorrelation of the signal is given by (8) as

$A(i) = \frac{1}{N_1} \sum_{k=0}^{N_1-1} r(k) \cdot r(k-i)$ . (8)

the squared mean of the received signal can be computed using (9) as

$\mu = \frac{1}{N_1} \sum_{k=0}^{N_1-1} |r(k)|^2$ . (9)

features are labelled as 0 for h0 and 1 for h1. the mlp is a feed-forward, supervised artificial neural network (ann). the network has three kinds of layers, viz., an input layer, a number of hidden layers and one output layer. the input layer of the ann receives the input signal as an external stimulus; the number of neurons in the input layer is determined by the number of feature vectors. each hidden layer between the input and output consists of one or more hidden neurons. the number of hidden layers and their units are determined by experimentation. the output layer represents the final decision and has one neuron for binary classification, as shown in figure 2. the mlp output at the output layer can be calculated using (10)

$y_k = f\!\left[ \sum_{j=1}^{N} w'_{j,k} \, g\!\left( \sum_{i=1}^{N} x_i w_{i,j} + b \right) \right]$ , (10)

where $x_i$ is the ith input vector, $b$ is the bias, $w_{i,j}$ denotes the weights between the input and hidden layers, $w'_{j,k}$ denotes the weights between the hidden and output layers, and $g$ and $f$ are the activation functions at the hidden and output layers, respectively.
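the five features fed to the mlp - energy (7), autocorrelation (8), squared mean (9), standard deviation and the maximum-minimum eigenvalue - can be computed roughly as below (a numpy sketch; the lag choice and the covariance matrix used for the eigenvalue feature are assumptions, since the paper does not spell them out):

```python
import numpy as np

def extract_features(r):
    """return [energy, autocorrelation, squared mean, std, max-min eigenvalue]."""
    N1 = len(r)
    energy = np.sum(np.abs(r) ** 2)                  # eq. (7)
    autocorr = np.sum(r[1:] * r[:-1]) / N1           # eq. (8) at lag i = 1 (assumed lag)
    sq_mean = energy / N1                            # eq. (9)
    std = np.std(r)
    # max/min eigenvalue ratio of a 2 x 2 sample covariance built from a
    # delayed embedding -- one possible construction, not spelled out in the paper
    X = np.stack([r[:-1], r[1:]])
    eigs = np.linalg.eigvalsh(X @ X.T / (N1 - 1))
    mme = eigs.max() / eigs.min()
    return np.array([energy, autocorr, sq_mean, std, mme])

rng = np.random.default_rng(2)
f = extract_features(rng.normal(size=500))  # one labelled feature vector
```

each received sample window yields one such 5-element vector, which is then labelled 0 or 1 and stacked into the training and testing sets described below.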
after labelling, the feature vectors are categorized into two data sets, i.e., training and testing. the training data are used to make the ann model learn the data, and the test data are used to check the ability of the model to predict new cases based on its training. algorithm 3.1 explains the training procedure of the mlp. the performance of the trained network is validated with the test data. 3.1. algorithm: mlp training algorithm
• create the mlp network
• initialize the weights and biases randomly
• compute the feature vector x = [x1, x2, x3, …, xn]
• label the target vector t = [0, 1], 0 = h0, 1 = h1
• for each training pair (x, t):
• present the input to the input layer and calculate the net input to the hidden layer using (11)

$h = \sum_{i,j=1}^{N} (x_i w_{i,j} + b)$ . (11)

• apply the activation function to compute the hidden output using (12)

$h = g(h)$ . (12)

• calculate the net input at the output layer using (13)

$y_k = \sum_{j,k=1}^{N} w'_{j,k} h$ (13)

• apply the activation function to compute the net outcome using (14)

$y_k = f[y_k]$ . (14)

• calculate the error using (15)

$Error = T - y$ . (15)

• back-propagate the error and update the weights and biases using (16) and (17)

$w_{\text{new}} = w_{\text{old}} + \Delta w_{i,j}, \quad b_{\text{new}} = b_{\text{old}} + \Delta b_{i,j}$ (hidden layer) (16)

$w_{\text{new}} = w_{\text{old}} + \Delta w_{j,k}, \quad b_{\text{new}} = b_{\text{old}} + \Delta b_{j,k}$ (output layer) (17)

• test for the stopping condition
• end.

4. simulation results in this section, the experimental outcomes are presented in order to prove the effectiveness of the model; the simulation is performed on the matlab 2018a platform.

figure 2. multilayer perceptron with two hidden layers.

the crn is designed with one fc, one pu and 30 secondary users, and pf is set to 0.1 for all sus. the percentage of mus ranges from 10 % to 60 %.
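algorithm 3.1 maps onto a small numpy implementation (a sketch under assumptions: synthetic two-class feature data stands in for the real h0/h1 vectors, one hidden layer is used instead of the paper's two, g is tanh, f is a sigmoid, and plain gradient-descent weight updates replace the paper's training algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for labelled h0/h1 feature vectors (assumed data)
X = np.vstack([rng.normal(-1.5, 0.4, (50, 5)), rng.normal(1.5, 0.4, (50, 5))])
T = np.concatenate([np.zeros(50), np.ones(50)])

n_in, n_hid, lr = 5, 10, 0.5
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)  # random init
W2 = rng.normal(0, 0.5, n_hid);         b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(1000):
    h = np.tanh(X @ W1 + b1)           # eqs. (11)-(12), g = tanh
    y = sigmoid(h @ W2 + b2)           # eqs. (13)-(14), f = sigmoid
    err = T - y                        # eq. (15)
    losses.append(np.mean(err ** 2))
    # eqs. (16)-(17): back-propagate the error, update weights and biases
    dy = err * y * (1 - y)
    W2 += lr * h.T @ dy / len(X); b2 += lr * dy.mean()
    dh = np.outer(dy, W2) * (1 - h ** 2)
    W1 += lr * X.T @ dh / len(X); b1 += lr * dh.mean(axis=0)

accuracy = np.mean((y > 0.5) == (T == 1))
```

on this easily separable toy data the training loss falls steadily; the real experiment, of course, uses the features and attack-contaminated reports described above rather than synthetic blobs.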
the pu signal is a quadrature phase shift keying signal, and the outcomes are obtained from monte-carlo simulations with 10000 runs. the signal-to-noise ratio is varied from -20 db to 0 db. in this work, the mlp is employed to mitigate the negative impact of mus in the crn. the input to the mlp is composed of the feature vectors obtained from the received signal. the mlp is designed with 5 input neurons representing the 5 feature vectors, 2 hidden layers with 10 neurons each, and 1 output neuron whose value, 0 or 1, corresponds to the $H_0$ and $H_1$ hypotheses. tan-sigmoid and linear activation functions are used at the hidden and output layers, respectively. the least mean square (lms) algorithm is used to train the network, with the number of epochs set to 500. the efficacy of the developed model is assessed next, in terms of the probability of detection and the probability of false alarm. figure 3 depicts the plot between the signal-to-noise ratio (snr) and the probability of detection (pd) at a probability of false alarm pf = 0.1. from the simulation results in figure 3, we can infer that the proposed model gives outstanding performance compared with the other methods taken for comparison. for instance, at snr = -12.5 db, the pd of the proposed model is higher by 47.4 %, 46 % and 10 % compared with the energy detection (ed), generalized likelihood ratio test (glrt) and hadamard ratio (hr) sensing methods, respectively. figure 4 exhibits the probability of false alarm for the 'always yes' attack versus snr for varying percentages of malicious secondary users.
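the pd-versus-snr behaviour behind figure 3 can be reproduced in miniature for a plain energy detector (a reduced sketch: fewer runs than the paper's 10000, a bpsk stand-in for the qpsk pu signal, and a fixed illustrative threshold rather than one calibrated for pf = 0.1):

```python
import numpy as np

rng = np.random.default_rng(4)

def detection_probability(snr_db, n_runs=2000, N1=500, threshold=1.1):
    """monte-carlo estimate of pd for a plain energy detector at one snr.
    the fixed threshold is illustrative, not the paper's pf = 0.1 setting."""
    snr = 10.0 ** (snr_db / 10.0)
    hits = 0
    for _ in range(n_runs):
        s = np.sqrt(snr) * rng.choice([-1.0, 1.0], N1)   # bpsk stand-in for the qpsk pu
        r = s + rng.normal(0.0, 1.0, N1)                  # unit-variance noise
        if np.mean(r ** 2) > threshold:                   # energy statistic vs threshold
            hits += 1
    return hits / n_runs

pd_low = detection_probability(-15)    # low snr: the detector often misses
pd_high = detection_probability(-5)    # higher snr: detection becomes reliable
```

sweeping snr_db from -20 to 0 traces out the characteristic s-shaped pd curve that the paper compares across ed, glrt, hr and the proposed model.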
from figure 4, it is noticed that the proposed model can sense the 'always yes' attack correctly for up to 50 % falsified secondary users, for snr varying from -10 db to 0 db. it proves that the machine learning model can efficiently sense the ssdf attack launched by malicious sus in the crn. the probability of false alarm for the 'always no' attack as a function of snr, for varying percentages of malicious sus, is plotted in figure 5. from figure 5, one can see that the proposed model is able to detect the 'always no' attack precisely for up to 50 % malicious sus, for snr varying from -10 db to 0 db. it further confirms that the proposed machine learning model can efficiently detect the ssdf attack launched by malicious sus in the crn. in order to experimentally validate the performance of the proposed model, a curve is plotted between the probability of detection and the percentage of malicious users, varying from 10 % to 60 % at snr = -10 db, as shown in figure 6, which reveals that the proposed scheme yields a higher pd. from the empirical findings, it can be stated that the ml-based strategy can efficiently suppress the impact of mus in crns. the efficacy of the proposed model can be observed in terms of the probability of detection and the probability of false alarm.

figure 3. probability of detection versus snr at pf = 0.1.
figure 4. probability of false alarm versus snr for varying percentages of falsified secondary users ('always yes' attack).
figure 5. probability of false alarm versus snr for varying percentages of falsified secondary users ('always no' attack).
figure 6. probability of detection versus percentage of malicious users.
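the fusion step that these attacks target can be illustrated with a toy experiment at the fc (a numpy sketch; the 30-su network size matches section 4, while the attacker fraction, the honest detection probability and the majority fusion rule are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sus, n_mus = 30, 9          # 30 sus as in section 4; 9 attackers (30 %) is illustrative
pu_active = True              # ground truth: pu present (h1)

# honest sus detect the pu with an assumed probability p_d and report eq. (4) values
p_d = 0.9
n_honest = n_sus - n_mus
honest = (rng.random(n_honest) < p_d).astype(int) if pu_active else np.zeros(n_honest, int)

# 'always no' attackers report 0 regardless of the true state
malicious = np.zeros(n_mus, dtype=int)

reports = np.concatenate([honest, malicious])

# the fc fuses all reports; a simple majority rule stands in for the paper's fusion
global_decision = int(reports.sum() > n_sus / 2)
```

with enough attackers the falsified zeros can outvote the honest ones, which is exactly the degradation the mlp-based filtering above is designed to prevent.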
the proposed model provides outstanding performance compared with the other methods for different values of snr, as shown in table 1. 5. conclusion the performance of a crn is severely affected by malicious sus. mus may launch ssdf attacks to mislead the global decision. this paper has presented a machine learning model using an mlp classifier to identify falsified data in css and thus prevent ssdf attacks in crns. a set of features is extracted from the received samples and labelled based on the inexistence or existence of the primary user. the obtained features are used as input to the mlp model. the network is trained with the levenberg-marquardt algorithm and then employed to eliminate the effect of ssdf attacks. the simulation results reveal that the proposed model can efficiently reduce the impact of malicious users in the crn; however, it needs more time for training. in future work, meta-heuristic algorithms will be explored to optimize the network parameters and further enhance detection performance. acknowledgement the authors acknowledge with thanks dayananda sagar college of engineering, bengaluru, and jss academy of technical education, bengaluru, for all the support and encouragement given to them to take up this research work and publication. references [1] a. ahmadfard, a. jamshidi, a. keshavarz-haddad, probabilistic spectrum sensing data falsification attack in cognitive radio networks, signal processing, 137, 2017, 1-9, issn 0165-1684. doi: 10.1016/j.sigpro.2017.01.033 [2] m. e. ahmed, j. b. song, z. han, mitigating malicious attacks using bayesian nonparametric clustering in collaborative cognitive radio networks, 2014 ieee global communications conference, austin, tx, 999-1004. doi: 10.1109/glocom.2014.7036939 [3] a. ali, w. hamouda, advances on spectrum sensing for cognitive radio networks: theory and applications, ieee communications surveys & tutorials, 19(2), 2017, 1277-1304.
doi: 10.1109/comst.2016.2631080 [4] s. althunibat, m. di renzo, f. granelli, robust algorithm against spectrum sensing data falsification attack in cognitive radio networks, ieee 79th vehicular technology conference (vtc spring), seoul, 2014, 1-5. doi: 10.1109/vtcspring.2014.7023078 [5] z. cheng, t. song, j. zhang, j. hu, y. hu, l. shen, x. li, j. wu, self-organizing map-based scheme against probabilistic ssdf attack in cognitive radio networks, 9th int. conf. on wireless communication and signal processing (wcsp), nanjing, 2017, 1-6. doi: 10.1109/wcsp.2017.8170994 [6] f. farmani, m. abbasi-jannatabad, r. berangi, detection of ssdf attack using svdd algorithm in cognitive radio networks, third international conference on computational intelligence, communication systems and networks, bali, 2011, 201-204. doi: 10.1109/cicsyn.2011.51 [7] federal communications commission, et docket no. 03-222, notice of proposed rulemaking and order. [8] federal communications commission, spectrum policy task force, rep. et docket no. 02-135, 2002. online [accessed 16 march 2022] https://transition.fcc.gov/sptf/files/sewgfinalreport_1.pdf [9] j. feng, m. zhang, y. xiao, h. yue, securing cooperative spectrum sensing against collusive ssdf attack using xor distance analysis in cognitive radio networks, sensors (basel), 18(2), 2018, 370. doi: 10.3390/s18020370 [10] l. li, c. chigan, fuzzy c-means clustering based secure fusion strategy in collaborative spectrum sensing, 2014 ieee international conference on communications (icc), sydney, nsw, 2014, 1355-1360. doi: 10.1109/icc.2014.6883510 [11] s. mapunya, m. velempini, design of byzantine attack mitigation scheme in cognitive radio ad-hoc networks, international conf. on intelligent and innovative computing applications (iconic), plaine magnien, 2018, 1-4. doi: 10.1109/iconic.2018.8601087 [12] g. nie, g. ding, l. zhang, q. wu, byzantine defense in collaborative spectrum sensing via bayesian learning, ieee access 5, 2017, 20089-20098.
doi: 10.1109/access.2017.2756992 [13] a. sharifi, m. mofarreh-bonab, spectrum sensing data falsification attack in cognitive radio networks: an analytical model for evaluation and mitigation of performance degradation, aut journal of electrical engineering, 50(1), 2018, 43-50. doi: 10.22060/eej.2017.12528.5094 [14] a. a. sharifi, m. j. musevi niya, defense against ssdf attack in cognitive radio networks: attack-aware collaborative spectrum sensing approach, ieee communications letters, 20(1), 2016, 93-96. doi: 10.1109/lcomm.2015.2499286 [15] r. wan, n. xiong, l. ding, x. zhou, mitigation strategy against spectrum-sensing data falsification attack in cognitive radio sensor networks, international journal of distributed sensor networks, 15, 2019, 1550-1477. doi: 10.1177/1550147719870645 [16] y. li, q. peng, achieving secure spectrum sensing in presence of malicious attacks utilizing unsupervised machine learning, milcom 2016 ieee military communications conference, baltimore, md, usa, 2016, 174-179. doi: 10.1109/milcom.2016.7795321 [17] i. ahmed, e. balestrieri, f. lamonaca, iomt-based biomedical measurement systems for healthcare monitoring: a review, acta imeko, vol. 10, no. 2, pp. 1-11, 2021. doi: 10.21014/acta_imeko.v10i2.1080 [18] a. coccia, f. amitrano, l. donisi, g. cesarelli, g. pagano, m. cesarelli, g. d'addio, design and validation of an e-textile-based wearable system for remote health monitoring, acta imeko, vol. 10, no. 2, pp. 1-10, 2021.

table 1. probability of detection versus snr.

approaches vs. snr | proposed technique | ed | glrt | hr
snr = -20 db | 0.14 | 0.1 | 0.1 | 0.1
snr = -15 db | 0.32 | 0.1 | 0.12 | 0.23
snr = -12.5 db | 0.62 | 0.14 | 0.16 | 0.52
snr = -10 db | 0.88 | 0.25 | 0.28 | 0.87
snr = -4 db | 0.99 | 0.93 | 0.96 | 0.99
snr = -2 db | 0.99 | 0.98 | 0.99 | 0.99
doi: 10.21014/acta_imeko.v10i2.912 [19] a. taggu, n. marchang, detecting byzantine attacks in cognitive radio networks: a two-layered approach using hidden markov model and machine learning, pervasive and mobile computing, vol. 77, 2021, issn 1574-1192. doi: 10.1016/j.pmcj.2021.101461

aeronautic pilot training and augmented reality acta imeko issn: 2221-870x september 2021, volume 10, number 3, 66-71 aeronautic pilot training and augmented reality simone keller füchter1, mário sergio schlichting1, george salazar2 1 university of estácio de santa catarina, pesquisa produtividade program, av. leoberto leal, 431, são josé, santa catarina, brazil 2 nasa, national aeronautics and space administration, johnson space center, 2101 nasa parkway, houston, 77058, texas, usa section: research paper keywords: augmented reality; virtual training; flight panel; pilot training citation: simone keller füchter, mário sergio schlichting, george salazar, aeronautic pilot training and augmented reality, acta imeko, vol. 10, no. 3, article 11, september 2021, identifier: imeko-acta-10 (2021)-03-11 section editor: bálint kiss, budapest university of technology and economics, hungary received january 15, 2021; in final form september 14, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: simone keller füchter, e-mail: simonekf.2011@gmail.com 1. introduction various materials are used for pilot training on the equipment found in different types of aircraft.
traditionally, the aircraft manual has been an important source of information on each part of the aircraft's structure, as well as its mechanical, electronic and digital components. while the operating manual remains the main learning resource, a number of multimedia resources have been added for better visualisation of certain tasks and commands during the panel checklist. the array of available printed books can also be a good knowledge resource for students aspiring to be private pilots, also known as pps. this pilot category involves certification that allows the individual to fly an airplane and carry baggage and passengers without being paid or hired [1]. videos, photographs, printed posters, flight simulators and virtual reality glasses can also be used to help learn [2] and memorise the components of the panels. each resource has its advantages and disadvantages. for example, the simulator is fairly complete and close to reality but is not available 24 hours a day for everyone, while its cost per hour of use is relatively high since it involves sophisticated equipment. meanwhile, while videos present a great deal of detail, they do not involve interaction, and in an effort to find the information one is looking for, it is sometimes necessary to watch a long video to visualise a specific procedure. software such as microsoft's flight simulator [3] and x-plane [4], among others, has a wide base of information and procedures as well as high-quality graphics. however, the purpose of the present work is to create a prototype app for smartphones that is easy to use and is specifically focused on familiarising the pilot with the panel and the respective checklists. thus, the learning and memorisation of a new aircraft, as well as the attendant training, can be faster, bringing great savings and, above all, enhanced flight safety. checklists are a critical part of preparing for a flight.
abstract - a pre-flight checklist requires in-depth technical knowledge of the aircraft, including its dashboard, avionics, instruments, functions and cabin layout. to obtain up-to-date certification, students training to be a pilot or advanced pilot must be completely familiar with each instrument and its position on the flight panel. every second spent searching for the location of an instrument, switch or indicator can waste time, resulting in a poor start-up procedure and possibly a safety hazard. the objective of this research was to obtain preliminary data to determine whether the use of augmented reality as a human interface for training can help pilots improve their skills and learn new flight panel layouts of different aircraft. the methodology used was the human-centred design method, which is a multidisciplinary process that involves various actors who collaborate in terms of design skills, including individuals such as flight instructors, students and pilots. a mobile/tablet application prototype was created with sufficient detail on the flight panel of a cessna 150, an aircraft used in training flights at the aero club of santa catarina. the tests were applied in brazil and the results indicated good response and acceptance from the users.

apps such as that presented in this paper can be used anywhere without the need for computers, screens, joysticks or any other hardware; in fact, all that is required is a smartphone. the app can be used, for example, before the pilot goes into a flight simulator. indeed, the app is not intended for use as a kind of flight simulator, but as a specific and cheap tool to help pilots or students memorise the flight panel components and study them continuously, be it at home, in open spaces or even in rooms without an internet connection.
indeed, the main objective of this work was to represent, as realistically as possible, the details of the flight panel without the need for computers, in a cost-effective and efficient way, using readily available technology. 2. methodology the methodology used for this research on augmented reality (ar) training is the human-centred design (hcd) method. this method is a multidisciplinary process that involves various stakeholders who collaborate in terms of design skills, including the individuals who specifically pertain to the process [5], such as flight instructors, pilots and students. interactivity is a key point in this field and comes in the form of continuous testing and evaluation of the ar system and the related concepts. the hcd method is a key component of the human-systems integration (hsi) process, which is defined as an interdisciplinary technical and management process that integrates human considerations within and across all system elements [6]. in this context, iso 13407:1999, part of ics 13.180 on ergonomics, has been withdrawn and revised as iso 9241-210:2019, ergonomics of human-system interaction - part 210: human-centred design for interactive systems [7]. the hsi domains define (a) how human capabilities or limitations impact the hardware and software and, conversely, (b) how the system's software, hardware and environment influence human performance [8]. meanwhile, there are four principles of hcd [9]: • function allocation between user and technology • design iteration • multidisciplinary design • active involvement of users and a clear understanding of user and task requirements. these principles are crucial for the concept of this research since they are focused on the ergonomics, comfort and acceptance of the pilots and students.
for this study, a prototype mobile app related to ar was built based on the cessna 150 aircraft panel, which is used for the training of new pps in a number of flying clubs in brazil, specifically here, the aero club of santa catarina. this prototype provides the pilot with the opportunity to familiarise themselves with the panel of a new plane that he/she may fly, accessing it on their smartphone anytime, anywhere, and viewing the panel in a highly realistic way. furthermore, perhaps more importantly, the pilot can use the main checklist procedures in 3d view, which enables a better understanding of certain push/pull, rotate and other movements. 3. augmented reality the technology pertaining to ar involves overlaying virtual components on a real-world environment, with users viewing it through a specific device. for the most part, virtual objects are added to the real world in real time during the user experience [10]. to connect with this technology, digital devices such as smartphones, tablets and virtual reality glasses (e.g., microsoft hololens) [11] can be used. the difference between ar and virtual reality (vr) is that the latter creates a completely synthetic and artificial world in which the user is completely immersed, while in the former, the user can observe a real environment with virtual objects overlaid. with ar, the pilot can observe a virtual flight panel in front of them and, using a smartphone, can visualise and study each component before moving on to a real physical aircraft. the combination of ar and smartphone provides a useful tool for training, with the professional able to practice in a manufacturing environment [9], a garage or a hangar. when used correctly, ar technology can present a real environment in terms of creating virtual objects that mimic real-time applications [12].
However, the future of AR depends on huge innovations in all fields [13], and this paper presents a new way of learning and training aircraft-related checklist panels.

3.1. Augmented reality as a training tool

This technology is used in various industries [14], military environments and schools, and is highly useful in the health field, making it possible to visualise the inside of a human body. Furthermore, this type of digital interface is suitable for the new generations comfortable with the devices used for VR and AR [15]. When combined with gamification, educational technologies can be improved, largely because the new generations are open to experimenting with new virtual competencies and stimuli and are highly motivated to win [16]. The same idea can be applied to AR, that is, using challenges similar to videogames to promote learning and pleasure, as explained in terms of the dopamine cycle, which pertains to challenges, achievement and pleasure [17]. A recent study conducted on a multi-national European sample of pilots regarding the use of AR and gamification [11] demonstrated that 72.25 % of the female pilots and 56.25 % of the male pilots considered it satisfactory in terms of successfully finishing a task, while, overall, 70.74 % of the pilots regarded the feedback received for corrective actions as satisfactory. This demonstrates that interactivity is important for users. For this reason, the third step of the proof of concept presented in this paper was created to clarify the number of possibilities this prototype can offer.

3.2. Augmented reality and aviation

According to the Aviation Instructor's Handbook [18], VR is already part of pilot training, while AR is not mentioned. Therefore, this field requires more research and more applications to find new ways to use this tool, such as external inspection, maintenance, procedures using the flight panel and various other aspects [19], [20], [21].

4.
The prototype

In this paper, a life-size prototype was created and tested by six users made up of flight instructors, pilots and students, who applied the visualisation of a virtual flight panel employing AR on their smartphones or tablets. Bringing together the information from the manual, the experience of the instructors and pilots who were studying how to use different aircraft, and the proofs of concept, various evaluations were created, as shown in Figure 1. A full description follows.

4.1. Preliminary proof of concept

The prototype was built in Unity 3D, a game development platform [22], using Vuforia [23] as the AR technology. Here, a printed marker in banner size (A0) was used, as shown in Figure 2.

Acta IMEKO | www.imeko.org September 2021 | Volume 10 | Number 3 | 68

After the first test, a small 'pocket' marker was printed, as shown in Figure 3. The app was developed after several meetings with the instructors and several practice flights with the students. The assessment was based on how pilots perform a checklist before departure in the aircraft. A sketch was shown with the first placement of the instruments and buttons. The size of the panel was presented, and the quality of the generated 3D image was tested. The result exhibited good aesthetic quality and high definition but still offered little interactivity. After the first phase, an enhancement was applied in which various animations were created and placed in the first menu item, the idea being to present each item in the manual according to the order in which it appears from left to right, following the checklist found in the manual. For each instrument or button displayed, a green highlight appears, as shown in Figure 4. This step was not planned with this type of indication, but with the feedback given in the meetings and discussions, we determined that the first step in the study of the checklist was not to start with the checklist steps.
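The paper does not include code, but the relationship between the two printed markers (the A0 banner for a life-size view and the 'pocket' marker for a hand-held view) and the rendered panel scale can be sketched as follows. This is an assumption about how such scaling might be computed, not the authors' Unity/Vuforia implementation, and all names and dimensions are illustrative.

```python
# Hypothetical sketch: how an AR app might scale the virtual panel for the
# two markers described above. Marker-based trackers such as Vuforia pose
# content relative to the marker's registered physical width, so scaling
# the panel by the marker-width ratio keeps the two views proportional.
# All dimensions are illustrative assumptions, not values from the paper.

A0_WIDTH_M = 0.841       # width of an A0 banner marker, in metres
POCKET_WIDTH_M = 0.105   # roughly half an A4 page width, in metres

def display_scale(marker_width_m: float,
                  life_size_marker_width_m: float = A0_WIDTH_M) -> float:
    """Scale factor for the panel model: 1.0 over the life-size banner,
    proportionally smaller over the pocket marker."""
    if marker_width_m <= 0:
        raise ValueError("marker width must be positive")
    return marker_width_m / life_size_marker_width_m

# Over the banner the panel renders life-size; over the pocket marker it
# shrinks to roughly an eighth of its real size and can be held in the hand.
banner_scale = display_scale(A0_WIDTH_M)
pocket_scale = display_scale(POCKET_WIDTH_M)
```

Under this sketch, moving the pocket marker by hand moves the scaled-down panel with it, which matches the ubiquitous-use behaviour described for Figure 3.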
In fact, it was more interesting to present all the instruments and buttons first before moving on to the checklist, using various animations of objects and instruments. The prototype, which ended up acquiring the unexpected new feature described above, now featured a menu item that makes it possible to understand not only the names and functions of the buttons but also the kind of movement each performs. If this movement is a turn, it is evidenced in three dimensions: whether it is, for example, 45° or 90°, whether it is a push/pull-type button, or whether it is an on/off switch. As such, it is possible to closely visualise the movement of the mixture control, which consists of a button and a lever that must be pressed at the same time. Therefore, in this second proof of concept, the goal was to demonstrate the panel in a better way than the pictures in the aircraft manual, where it is difficult to clearly observe the controls' design, format and movements, as shown in Figure 5. In short, an aircraft manual presents 2D images, making it difficult to visualise the array of controls and understand whether each is a rotate, push/pull or press button. Thus, the goal was to demonstrate the panel in high definition and to include animation and interaction, as shown in Figure 6. This app is very specific to a checklist approach and the panel components, which makes it different from other applications such as X-Plane.

Figure 1. Proof of concept.
Figure 2. A life-size panel that can be viewed at home using a smartphone and a banner printed marker.
Figure 3. A small-size 'pocket' panel that can be visualised anywhere, anytime in a ubiquitous approach.
Figure 4. Each instrument and control is presented to the user.
Figure 5. Typical images presented in aircraft manuals, where full visualisation can be difficult (Cessna 150 aircraft manual).

The focus of our app includes a didactical
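The distinction drawn above between rotate (45° or 90°), push/pull and on/off controls is essentially a small data model that an AR app could use to select the right animation for each panel item. A minimal sketch follows; it is not the authors' code, and every name and value in it is an illustrative assumption.

```python
# Illustrative sketch (not the authors' implementation): modelling the
# movement types of panel controls so the app can pick the right animation.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Movement(Enum):
    ROTATE = "rotate"        # e.g., a selector turned 45 or 90 degrees
    PUSH_PULL = "push/pull"  # e.g., a throttle or mixture lever
    TOGGLE = "on/off"        # e.g., a master or avionics switch

@dataclass
class PanelControl:
    name: str
    movement: Movement
    rotation_deg: Optional[float] = None  # only meaningful for ROTATE

    def animation_hint(self) -> str:
        """Human-readable description that could drive the 3D animation."""
        if self.movement is Movement.ROTATE:
            return f"rotate {self.name} by {self.rotation_deg} degrees"
        return f"{self.movement.value} {self.name}"

# Example controls in the spirit of a Cessna-150-style panel (assumed names):
fuel_selector = PanelControl("fuel shutoff valve", Movement.ROTATE, 90.0)
mixture = PanelControl("mixture control", Movement.PUSH_PULL)
```

Keeping the movement type as data rather than hard-coding each animation is one way the menu described above could list every instrument with its green highlight and correct motion.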
explanation for each instrument. The Cessna 150 starting engine checklist [24] was used here, as shown in Figure 7.

4.2. Enhanced proof of concept

At this point, greater interaction with the user was pursued, with the same checklist presented and the user asked to perform the movements of the lower control panel. In this AR simulation, it is only possible to start the aircraft (start the engine) if all the buttons have been operated with the correct movement and in the correct order. At the end of this step, the engine is started, and audio, in addition to a message, confirms the complete verification of the checklist. Following discussions with all those involved, it was understood that these features would bring gamification to the app, as the pilot feels challenged 'to hit' the movements and finish the checklist perfectly; as a reward, the engine starts. This stage is still under development, but its concept and technologies have already been finalised. Flight instructors, pilots and students were involved in this experiment.

5. The experience

The prototype was tested by students, pilots and instructors. The experience involved the following steps:
• explaining the objectives of the study;
• obtaining the participants' consent;
• reading the orientation on how to use the app;
• using the app in front of a banner as a marker for the AR (the user can be standing or seated and can move in front of the panel);
• using the app with a banner marker in A0 size and with a little marker printed on half an A4 page. In the first case, the user can see the AR panel in full scale (life-size).
In the second case, the user can move the little marker using their hands and manipulate the panel position so that it can be visualised comfortably;
• filling out a questionnaire for reflection and sharing points of view.
The tests were performed with instructors, as can be seen in Figure 8, while the experience also involved students preparing to gain their PP licence, as shown in Figure 9.

6. Results

The results demonstrated that the virtual 3D model is highly realistic and will prove useful for the pilot through the feature of simulating failures in the instruments in order to check whether the pilot has paid attention to the flight indicators or even whether the aircraft has deficiencies in the human interface design, which should be corrected or controlled during the flight. The prototype features various animated controls and items that promote interactive and complex tasks in different situations. The app can help the pilot to be more confident, faster and more secure when flying. Meanwhile, ensuring that less time is spent on the checklist in a real aircraft or a flight simulator will reduce the costs of the process, and AR training will increase safety. Furthermore, the use of AR improves the pilot's situational awareness (S-A) [25] in perceiving, comprehending and projecting future actions in scenarios in which S-A and the system's mental models are important for minimising human error.

Figure 6. In an AR app, it is possible to identify that a specific control is not a button but a lever. The animation clearly shows how to execute the procedure.
Figure 7. The checklist items used in this experience (Cessna 150 aircraft manual).
Figure 8. Flight instructor testing the AR app.
Figure 9. Student testing the AR app.
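The rule from the enhanced proof of concept in Section 4.2, that the simulated engine starts only after every checklist action has been performed with the correct movement and in the correct order, amounts to a small ordered-sequence validator. The following is a minimal sketch of that idea; the step names and the `ChecklistSession` class are illustrative assumptions, not the authors' Unity code.

```python
# Minimal sketch (assumption, not the authors' implementation) of the
# gamified rule from Section 4.2: the virtual engine may start only when
# every checklist action has been performed with the correct movement,
# in the correct order.

CHECKLIST = [  # (control, required action) in required order; illustrative
    ("fuel shutoff valve", "rotate"),
    ("mixture control", "push"),
    ("carburetor heat", "push"),
    ("master switch", "on"),
    ("ignition", "start"),
]

class ChecklistSession:
    def __init__(self, checklist):
        self.checklist = checklist
        self.next_step = 0

    def perform(self, control: str, action: str) -> bool:
        """Accept the action only if it is the next expected step."""
        if self.next_step >= len(self.checklist):
            return False  # checklist already complete
        if (control, action) != self.checklist[self.next_step]:
            return False  # wrong control or wrong movement: no progress
        self.next_step += 1
        return True

    @property
    def engine_started(self) -> bool:
        # Reward stage: engine audio/animation plays only on full completion.
        return self.next_step == len(self.checklist)
```

In the app, `engine_started` becoming true would trigger the engine sound and the confirmation message, providing the reward loop the authors describe.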
Table 1 presents example information and comments that allowed us to conclude how useful this application can be for pilots and students, and even for companies that need to improve their training resources.

7. Conclusion

This was merely a preliminary study, and more students, pilots and instructors will be invited to participate in future tests of the system. In addition, flight engineers and physicians will also become involved so as to increase the diversity in the evaluation of the system as part of the HCD method. It will be important to ensure that this application goes beyond the data and information that can be found in aircraft manuals and the existing procedures. It will also be crucial to improve the app such that it can capture and share the knowledge and background of qualified pilots, instructors and all stakeholders who work with novice pilots. This will help promote a complete experience, improve security, ensure better-quality flights and good tools, and will present mutually beneficial teaching methods, transferring the knowledge and experience of flying. Unquestionably, this study can be useful for the aircraft industries and for professionals such as those operating in the medical industry, where medical practitioners can receive more information to support the human cognitive and health fields in terms of, for example, medical operation procedures. Essentially, the AR method is independent of the field of application.

Acknowledgements

S. K. Füchter thanks the Aero Club of Santa Catarina, the instructors, Claudia Thofehrn and Thiago Bortholotti, the pilots and the students. We also thank Dr Orly Pedroso, physician and specialist in aerospace medicine, accredited by the National Civil Aviation Agency (ANAC), Brazil; postgraduate of the course for aerospace medical examiners and of the NASA Johnson Space Center (Houston, Texas, USA) course in physiology and aerospace medicine. S.
K. Füchter also thanks the University Estácio de Santa Catarina and the Programa Pesquisa Produtividade (PPP).

References

[1] I. N. Gleim, G. W. Gleim, Private Pilot FAA Knowledge Test (1st ed.), University Station, 2011, p. 2.
[2] J. Tustain, Tudo sobre realidade virtual & fotografia 360°. São Paulo: Editora Senac, 2019.
[3] Microsoft, Microsoft Flight Simulator. Xbox. Online [accessed 18 September 2020] http://xbox.com/games/microsoft-flight-simulator
[4] Laminar Research, X-Plane 11. Online [accessed 18 September 2020] https://www.x-plane.com
[5] International Organization for Standardization (ISO), ISO 13407, Human-centred design processes for interactive systems. Online [accessed 15 September 2020] https://www.iso.org/obp/ui/#iso:std:iso:13407:en
[6] J. S. Martinez, N. Schoenstein, G. Salazar, T. M. Swarmer, H. Silva, N. Russi-Vigoya, A. Baturoni-Cortez, J. Love, D. Wong, R. Walker, Implementation of human-system integration workshop at NASA for human spaceflight, 70th International Astronautical Congress (IAC), Washington D.C., USA, 21-25 October 2019, 11 pp. Online [accessed 20 September 2020] https://ntrs.nasa.gov/api/citations/20190032353/downloads/20190032353.pdf

Table 1. Users' comments.
Flight instructor / coordinator:
  Comments: 'The app has great potential, not only for new students studying to get their private pilot license, but also for experienced pilots when they want to upgrade and get their jet license, for example.'
  Pros: 'Easy, it can be replicated to other airplanes, high definition.'
  Cons: 'Checklist could have all items, it could be more complete.'
Flight instructors:
  Comments: 'Cool. We can see in detail.' 'It is a good way to learn about the instruments and controls. Good to memorise.'
  Cons: 'The prototype is just for Android. Various instructors use iPhone.'
Students:
  Comments: 'There are illustrations in the manuals that do not allow us to understand the real movement that is made when activating a certain switch, lever, button, or selector. With AR, it is possible to visualise things as if the panel were really in front of us, and the items are easily seen.' 'It would have been great to have had an app like this one when I was preparing myself to be a pilot and upgrading my licenses and airplanes.'
  Pros: 'Functional, detailed, easy.'
  Cons: 'It could include more procedures.'
Students:
  Comments: 'I found it very realistic and I liked the part where I could have it at home and use the panel whenever I wanted.' 'Super. I can study at home and see the panel in detail. When I use this airplane (I will be a new student) it will be familiar to me.' 'Nowadays, I'm learning to fly in an Aero Boero aircraft; I would like to have one of these for this airplane.'
  Pros: 'Cool, portable, useful in training.'
  Cons: 'The prototype is just for Android. Various students use iPhone; the screen is too small to click on it.'
Aerospace physician:
  Comments: 'Brilliant project.'
  Pros: 'Useful and safe in training.'

[7] International Organization for Standardization (ISO), documentation. Online [accessed 25 September 2020] https://www.iso.org/standard/52075.html
[8] NASA/SP-2015-3709, Human Systems Integration (HSI) Practitioner's Guide. National Aeronautics and Space Administration, Lyndon B. Johnson Space Center, Houston, Texas, 2015. Online [accessed 18 December 2020] https://ntrs.nasa.gov/api/citations/20150022283/downloads/20150022283.pdf
[9] NASA/TP-2014-218556, Human Integration Design Processes (HIDP).
National Aeronautics and Space Administration, International Space Station Program, Johnson Space Center, Houston, Texas, USA, 2014. Online [accessed 18 December 2020] https://www.nasa.gov/sites/default/files/atoms/files/human_integration_design_processes.pdf
[10] P. Milgram, F. Kishino, A taxonomy of mixed reality visual displays. IEICE Trans. Inform. Syst. 77 (1994), pp. 1321-1329.
[11] H. Schaffernak, Potential augmented reality application areas for pilot education: an exploratory study. Educ. Sci. 10 (2020), p. 86. DOI: 10.3390/educsci10040086
[12] C. Kirner, R. Tori, Introdução à realidade virtual, realidade misturada e hiper-realidade. In: Realidade virtual: conceitos, tecnologia e tendências, 1st ed., São Paulo: Senac, 2004, v. 1.
[13] P. Cipresso, I. A. Giglioli, M. A. Raya, G. Riva, The past, present, and future of virtual and augmented reality research. Front. Psychol. (2018). DOI: 10.3389/fpsyg.2018.02086
[14] S. Füchter, T. Pham, A. Perecin, L. Ramos, A. K. Füchter, M. Schlichting, O uso do game como ferramenta de educação e sensibilização sobre a reciclagem de lixo. Revista Educação e Cultura Contemporânea 13(31) (2016), pp. 56-81. DOI: 10.5935/2238-1279.20160022
[15] L. Brown, The next generation classroom: transforming aviation training with augmented reality. In: Proceedings of the National Training Aircraft Symposium (NTAS), Embry-Riddle Aeronautical University, Daytona Beach, FL, USA, 14-16 August 2017. Online [accessed 8 January 2021] https://commons.erau.edu/ntas/2017/presentations/40/
[16] S. K. Füchter, M. S. Schlichting, Utilização da realidade virtual voltada para o treinamento industrial. Revista Científica Multidisciplinar Núcleo do Conhecimento 10(07) (2019), pp. 113-120.
[17] G. Zichermann, J. Linder, The Gamification Revolution: How Leaders Leverage Game Mechanics to Crush the Competition. New York: McGraw-Hill, 2013.
[18] Aviation Instructor's Handbook, Department of Transportation, Federal Aviation Administration, Flight Standards Service: Oklahoma City, OK, USA, 2008.
[19] S. K. Füchter, Augmented reality in aviation for pilots and technicians. XR Community of Practice, Johnson Space Center, NASA, Houston, October 2018.
[20] S. K. Füchter, Pre-flight inspection and briefing for training aircraft pilots using augmented reality. IEEE Galveston Bay Section, joint meeting of the YP group and the new I&S Section Chapter, University of Houston Clear Lake, Houston, TX, USA, 2018.
[21] S. K. Füchter, Augmented reality as a tool for technical training. Galveston Bay Section meeting, Institute of Electrical and Electronics Engineers (IEEE), Gilruth Center, NASA-JSC, 17 August 2017. Online [accessed 18 December 2020] https://site.ieee.org/gb/files/2017/06/ieee-gbs-081717.pdf
[22] Unity, documentation. Online [accessed 18 September 2020] https://learn.unity.com/
[23] PTC, Vuforia: market-leading enterprise AR. Online [accessed 22 September 2020] https://www.ptc.com/en/products/vuforia
[24] Cessna Pilot's Operating Handbook, 150 Commuter, Cessna Model 150M, 1977.
[25] M. R. Endsley, Toward a theory of situation awareness in dynamic systems. Human Factors 37(1) (1995), pp. 32-64.
Acta IMEKO, December 2011, issue 0, pp. 2-3, www.imeko.org

Instructions for Authors

Paul P. L. Regtien 1
1 Measurement Science Consultancy, Julia Culpstraat 66, 7558JB Hengelo, The Netherlands

Keywords: journal; template; IMEKO; Microsoft Word
Citation: Paul P. L. Regtien, Instructions for authors, Acta IMEKO, no. 0, December 2011, pp. 2-3, identifier: 10.3345/acta.imeko.4530
Editor: Paul Regtien, Measurement Science Consultancy, The Netherlands
Received December 28, 2011; in final form December 29, 2011; published December 30, 2011
Copyright: © 2011 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by Measurement Science Consultancy, The Netherlands.
Corresponding author: Paul P. L. Regtien, e-mail: paul@regtien.net

1. Introduction

This paper describes how a new article can be submitted via Acta IMEKO's website. Authors can track their submission, resubmit revised papers, communicate with the editors and support the editorial process, including copyediting and proofreading.

2. A new submission

To register with the journal system, open the website http://acta.imeko.org/index.php/acta-imeko/index and go to Register. If you don't have a user account, fill in the form; otherwise, enter your username and password. Then click the Register button near the end of the page. You arrive at the Acta IMEKO page User Home. Under Acta IMEKO, select your role: "Author". Here you see your current submissions and their status (when applicable). For a new submission, click on "Click here" under "Start a new submission", and you arrive at the page Step 1: Starting the Submission. In "Section" select "Article", and go to the four-point checklist. Check all four points after reading. In the fourth point, you see a link "Author guidelines" pointing to the template. Download this template, which contains all further instructions for the layout of your paper. Check the copyright notice. You can enter a message for the editor if you like. Press "Save and continue", and you arrive at the next page, Step 2: Enter Metadata. On this page you can enter the metadata of the paper. Title and abstract are required fields. When done, click "Save and continue", and you arrive at the next page, Step 3: Uploading the Submission. On this page you can upload your submission, and in the next one, Step 4:
Uploading Supplementary Files, you can upload additional files if necessary. The next step is Step 5: Confirming the Submission. When finished, you receive an acknowledgement of your submission by email. You can view all your running submissions and their status by clicking the button "Active submissions". Your submission is assigned to a section editor. The section editor invites reviewers. Reviewers who have accepted to review upload their review reports together with a recommendation. The section editor sends the editor's decision to the author, together with the reviewers' reports. The following decisions can be made:
1) Accept submission: the paper enters the final stages of the submission process;
2) Revisions required: the paper is accepted provided the author complies with the recommendations of the reviewers and editor;
3) Resubmit for review: the paper is not suitable for publication in this form, but can be resubmitted after major revisions;
4) Decline submission.

3. Tracking your submission(s)

Register, go to User Home, and select your role: "Author". Here you see a list of your submissions and their status. Click on the name of the submission to view the details.

Abstract (of this template article): This paper contains instructions for authors of articles for Acta IMEKO. It can be used as a template for new submissions. Authors are encouraged to follow the instructions as described in this template file to produce their manuscript. The abstract should be composed in a way suitable for publication in the abstract section of electronic journals, and should state concisely what is written in the paper. Important items are the aim of the research, the basic method and the major achievement (also numerically, when applicable). The length should not exceed 200 words.

4.
Resubmission of a revised paper

When the recommendation is "Revisions required", the author is asked to make revisions based on the comments of the reviewers and to upload the revised paper. When the recommendation is "Resubmit for review", the author is asked to revise the paper (usually major revisions concerning the structure, missing material, expansion of the theory, or incomplete experimental results). Resubmission follows the same procedure as the first submission. The resubmission is again peer reviewed, by either the same or different reviewers.

5. Processing your accepted submission

Once your paper has been accepted, the author can take part in the copyediting, layout and proofreading process.
Copyediting. The editor will do a first copyedit. You receive a request to review your submission after the first copyedit step by the editor. Follow the instructions in this request:
1. Click on the submission URL.
2. Log into the journal; you are directed to the author home page (Active submissions), where you can see which paper is in the editing phase. Click "In editing" for the paper you want to copyedit. The editing page has three submenus: Summary, Review and Editing. If you are not in the Editing submenu already, click Editing. Click on the file that appears in step 1 of the copyediting box.
3. Open the downloaded submission.
4. Review the text, including copyediting proposals and author queries. When needed, read the copyedit instructions in this window. Click "Copyedit comments" to add comments in the box and, when finished, press "Save and email". The editor receives your copyedit comments by email.
5. Make any copyediting changes that would further improve the text.
6. When completed, upload the file in step 2.
7. Click on Metadata to check the indexing information for completeness and accuracy.
8. Send the COMPLETE email to the editor and copyeditor by clicking on the envelope just below COMPLETE in the copyediting window.
Layout.
When the editor asks you to check the layout proof, follow the same procedure: click "In editing" for the paper whose layout you want to check. View the proof. When corrections are necessary, click "Layout comments", insert comments in the box and, when finished, press "Save and email". The editor receives your layout comments by email.
Proofreading. When the editor asks for proofreading by the author, follow the same procedure: click "In editing" for the paper you want to proofread. Read the proofing instructions. When corrections are necessary, click "Proofreading corrections", insert corrections in the box and, when finished, press "Save and email". The editor receives your proofreading corrections by email. When the submission has passed all the post-editing steps, the paper is released for publication. The author is informed about this action.

Editorial summary to selected papers from the 2020 IMEKO TC19 International Workshop on Metrology for the Sea, "Learning to measure sea health parameters"

Acta IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, pp. 3-4

Silvio Del Pizzo 1
1 Department of Science and Technologies, University of Naples "Parthenope", Centro Direzionale Isola C4, Naples, Italy

Section: Editorial
Citation: Silvio Del Pizzo, Editorial summary to selected papers from the 2020 IMEKO TC19 International Workshop on Metrology for the Sea "Learning to measure sea health parameters", Acta IMEKO, vol. 10, no.
4, article 2, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-02
Received December 14, 2021; in final form December 14, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Silvio Del Pizzo, e-mail: silvio.delpizzo@uniparthenope.it

Dear Readers,
This issue of Acta IMEKO is dedicated to the works selected from those presented at the 2020 IMEKO TC19 International Workshop on Metrology for the Sea, known in short as MetroSea. The 2020 edition was hosted by the University of Naples "Parthenope", but unfortunately all sessions, discussions and other activities were carried out online due to the pandemic situation. The virtual workshop was organised to broadcast all sessions live according to the conference programme, and the attendees were able to participate in all proposed activities by entering virtual rooms. Therefore, the online conference was not so different from a live event, with the great advantage that every session was recorded and shared with the participants. The sea is the medium that allows several human activities, such as fishing, travelling, and the transport of large quantities of goods by vessel. Furthermore, its environment hosts an important and fragile ecosystem that represents a great reservoir and source of food for all living beings. At the same time, the sea has a fundamental role in climate change; indeed, ocean circulation is a key mechanism of the global climate, transporting and storing heat and fresh water around the world.
In recent decades the environment in general, and the marine environment specifically, has been compromised by several human activities, which have undermined both the delicate equilibrium of the ecosystem and the ocean circulation system, activating a dangerous process that puts some species of sea fish at risk of extinction. Moreover, several studies confirm that extreme climate and meteorological events are strictly related to the modification of the global ocean circulation as well as to global warming. This latter phenomenon entails an increase in the global mean sea level, which is worrying the populations of small islands. Therefore, it is evident that sea health is very important for the survival of all humanity. In this context, two concepts were born, the blue economy and blue growth, both approaches referring to the use of the seas in a sustainable way. Definitively, the sea is a complex environment that includes complex phenomena and assets; as everyone may know, measuring is a fundamental step that allows a deep knowledge of a phenomenon or an asset. The aim of the international workshop Metrology for the Sea is to support the sharing of recent advances in the field of measurement and instrumentation applied to increasing knowledge and to protecting and preserving the sea and all the assets and phenomena involved. Indeed, every year the workshop involves hundreds of researchers and practitioners who work on developing instrumentation and measurement methods for sea activities. Several research topics were discussed during the 2020 edition of the workshop, such as: electronic instrumentation, sensors and sensing systems, wireless sensor networks for marine applications, monitoring systems for sea activities, underwater navigation and submarine obstacle detection, pollution detection, and measurements for marine biology, marine geology and oceanography.
Five selected papers from MetroSea 2020 are presented in this special issue: one of these is a review paper that illustrates the state of the art and the future perspectives in the field of underwater wireless communications, while the other four papers concern experimental approaches, albeit applied in different fields. Leccese and Spagnolo, in the paper titled 'State-of-the-art and perspectives of underwater optical wireless communications', illustrate the state of the art and future perspectives of underwater wireless communication using optical waves. This approach is revolutionising underwater communications due to the achievable large bandwidth and low latency levels. Unfortunately, the communication distances are still limited. The authors provide a comprehensive overview of the state of the art, highlighting which limitations affect this technology and what the possible future developments are, especially in the military field. Their preliminary study verifying the feasibility of a simple, economical, and reliable communication system based on UV-A radiation is very interesting. Of course, the development of a low-cost underwater communication system with a high transmission rate could have a great impact on the world of autonomous underwater vehicles. The scientific contribution presented by Baldo et al., titled 'Remote video monitoring for offshore sea farms: reliability and availability evaluation and image quality assessment via laboratory tests', describes a preliminary study for the implementation of a remote monitoring system based on video recording for an offshore sea farm.
the aim of this work is to provide a video surveillance infrastructure for supervising breeding cages and the fish inside them, in order to counter undesired phenomena such as fish poaching and cage damage. since sea farms are built in the open sea, where the weather conditions change significantly from one season to another, the authors focused their investigations on the availability and the reliability of the designed monitoring system, performing several laboratory tests in a climatic chamber. a specific section is dedicated to the system architecture, where the authors tackle the critical problem of the power supply system. moreover, it is noteworthy that the designed project employs low-cost, open-source hardware such as the raspberry pi. the other three research papers provide different studies related to measurements in hydrography, dealing with three different aspects of the hydrographic process: the calibration of the echo sounder for acquiring the depth, the enhancement of gnss (global navigation satellite system) positioning, and the management of the post-processing data. the scientific contribution by amoroso and parente, ‘the importance of sound velocity determination for bathymetric survey’, presents an investigation of the importance of the sound velocity in water during a hydrographic acquisition. the paper takes into consideration four different methodologies for modelling the speed of sound in sea water. all these approaches need an accurate knowledge of the water density (a function of temperature, pressure and salinity) at different depths. the authors report the impact that inaccurate measurements of these three parameters have on the bathymetric survey results.
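purely as an illustration of the dependence discussed above (not the models evaluated by the authors), a minimal sketch using medwin's well-known simplified formula for the speed of sound in sea water, where depth stands in for pressure, could look as follows:

```python
def sound_speed_medwin(T, S, z):
    """Approximate speed of sound in sea water (m/s) via Medwin's
    simplified formula: T in deg C, S in parts per thousand, z depth in m.
    Roughly valid for 0 <= T <= 35, 0 <= S <= 45, 0 <= z <= 1000."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# a small depth profile with constant temperature and salinity
for z in (0.0, 100.0, 500.0):
    print(f"z = {z:6.1f} m -> c = {sound_speed_medwin(10.0, 35.0, z):.1f} m/s")
```

the sketch shows why the three parameters matter: an error in temperature propagates through cubic terms, while depth enters almost linearly.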
the experimentation was conducted on real data collected by a hydrographic vessel, while the error propagation was studied in a simulated environment considering several systematic errors on the measurements of the three inspected parameters. this innovative methodology is very promising for the performed investigations. the paper written by baiocchi et al., titled ‘first considerations on post processing kinematic gnss data during a geophysical oceanographic cruise’, concerns the problem of accurate positioning during the collection of bathymetric, oceanographic, and geophysical data. the development of oceanographic and geophysical instrumentation capable of acquiring data with a high spatial resolution requires high-accuracy positioning systems. in this work the authors carried out several tests on an oceanographic ship, applying the ppk (post-processing kinematic) approach to improve the position accuracy provided by the gnss. the large amount of data acquired by the authors made it possible to validate the performance of the proposed methodology. indeed, the authors compared the results obtained by ppk processing in the vertical domain with the data registered by several tide gauges located on the surrounding coast, tackling several problems concerning both the data acquisition phase and the different vertical datums used by the tide gauges considered. finally, alcaras et al. presented an interesting work on the post-elaboration of bathymetric data, titled ‘from electronic navigational chart data to sea-bottom models: kriging approaches for the bay of pozzuoli’. an electronic navigational chart (enc) is an electronic map produced by a national hydrographic office that can be used for navigational purposes. the enc data can be used to build a detailed bathymetric 3d model by applying an interpolation method.
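kriging interpolators of the kind studied in the work summarised above rely on a semi-variogram model fitted to the data. as a flavour of what such models look like, two classical forms are sketched below; the model names are standard in geostatistics, but the parameter values are purely illustrative and not taken from the paper:

```python
import math

def spherical_variogram(h, nugget, sill, rng):
    """Classical spherical semi-variogram: rises from the nugget and
    reaches the full sill exactly at lag distance h = rng."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r**3)

def exponential_variogram(h, nugget, sill, rng):
    """Exponential semi-variogram: approaches the sill asymptotically;
    the factor 3 makes the 'practical range' coincide with rng."""
    return nugget + sill * (1.0 - math.exp(-3.0 * h / rng))

# illustrative parameters: nugget 0, sill 1, range 500 m
for h in (0.0, 250.0, 500.0, 1000.0):
    print(h, spherical_variogram(h, 0.0, 1.0, 500.0),
          exponential_variogram(h, 0.0, 1.0, 500.0))
```

the choice between such mathematical models is exactly what the paper shows to be relevant, since the kriging weights are derived from the fitted semi-variogram.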
the authors analyse the performance of different interpolation methods based on ordinary and universal kriging applied to a specific case study: the bay of pozzuoli. specifically, the authors employ 11 mathematical semi-variogram models for inspecting the performance of these interpolators, and cross-validation is used for evaluating the accuracy of each method. the research remarks on the good performance of both kriging approaches for hydrographic purposes and demonstrates the relevance of the choice of the mathematical model used to build the semi-variogram. i would like to conclude these introductory notes by thanking the authors for their interesting and valuable papers and the reviewers for their indispensable and qualified contributions. furthermore, i would like to thank the editor in chief, prof. francesco lamonaca, as his support has been fundamental for accomplishing this special issue. it was a great honour for me to act as guest editor for this special issue, and i believe that the readers will find this acta imeko issue useful and will be inspired by the themes and methodologies proposed to continue the innovation in metrology for the sea. silvio del pizzo section editor digital tools as part of a robotic system for adaptive manipulation and welding of objects acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1 8 acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1 digital tools as part of a robotic system for adaptive manipulation and welding of objects zuzana kovarikova1,2, frantisek duchon1, andrej babinec1, dusan labat2 1 institute of robotics and cybernetics, faculty of electrical engineering and information technology, slovak university of technology in bratislava, ilkovicova 3, 812 19 bratislava, slovakia 2 vuez, a.
s., hviezdoslavova 35, 934 39 levice, slovakia section: research paper keywords: industrial robot; simulation; digital twin; robotized welding; sql server; hmi; database; 2d scanner; 3d scanner; industrial web; automatized measuring system; industry 4.0 citation: zuzana kovarikova, frantisek duchon, andrej babinec, dusan labat, digital tools as part of a robotic system for adaptive manipulation and welding of objects, acta imeko, vol. 11, no. 3, article 11, september 2022, identifier: imeko-acta-11 (2022)-03-11 section editor: zafar taqvi, usa received march 26, 2022; in final form july 22, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this article was written thanks to the support under the operational program of integrated infrastructure for the following projects: "robotic workplace for intelligent welding of small volume production (izvar)", code itms2014+:313012p386, and "digitization of robotic welding workplace (diroz)", code itms2014+:313012s686 (both co-financed by the european regional development fund). corresponding author: zuzana kovarikova, e-mail: kovarikova@vuez.sk 1. introduction traditionally, it has been difficult to use automation in small batch production with high variation in volumes and a high mix of products [1]. there is great potential for small batch producers to use flexible automation in manufacturing operations to remain competitive [1]. to achieve flexibility, it is crucial to design the structure of the digital tools so that the greatest possible adaptability is achieved with tools that require a minimum changeover time of the technological units.
the interaction between automation controllers and computer-aided design/manufacturing (cad/cam) systems capable of offline programming is generally a way to decrease production downtime due to programming [1]. the technology components modelled in cad tools are an important input for the creation of the digital twin. the digital twin (dt) is the technical core for establishing a cyber-physical production system (cpps) in the context of industry 4.0 [2]. the importance of the digital twin, which is characterised by cyber–physical integration, is increasingly emphasised by both academia and industry [3]. through data modelling, data are stored according to certain criteria and logic, which can facilitate data processing [3]. theories of service modelling are useful for the identification, analysis, and upgrade of services [3]. simulation theories are useful for operation analysis (e.g., structural strength analysis and kinetic analysis) in a simulation environment [3]. abstract the aim of this article is to describe the design and verification of digital tools usable for sharing information within a team of workers and machines that manage and execute production carried out by a robotic system. the basic method is to define the structure of the digital tools and the data flows necessary to enable the exchange of data needed to perform robotic manipulation and robotic welding of variable products while minimising strenuous human activity. the proposed data structure interconnects a set of intelligent sensors with the control of 18 degrees of freedom of 3 robotic manipulators, a welding device, and a production information system. part of the work was also to verify the functionality of the proposed structure of the digital tools. in the first phase, simulations using a digital twin prototype of the workplace for robotic manipulation and robotic welding were performed to verify the functionality of the digital tools.
subsequently, the digital tools were tested in the environment of a real prototype workplace for robotic manipulation and robotic welding. simulation results and data obtained from the prototype tests proved the functionality of the digital tools, inclusive of the production information system. digital twins are more than just pure data; they include algorithms which describe their real counterpart and take decisions in the production system [4]. dts can make a production process more reliable, flexible, and predictable [3]. above all, dts can visualise and update the real-time status, which is useful for monitoring a production process [3]. 2. robotic system for adaptive manipulation and welding of objects the robotic system for adaptive manipulation and welding of objects shown in figure 1 consists of three robots: two robots for manipulating the to-be-welded parts and one robot for tungsten inert gas welding. the welding robot can scan the welding gap and the surface parameters of the final weld using a 2d laser scanner installed on the robot body. there are also two warehouses of parts equipped with 3d laser scanners, each installed above the concerned warehouse. the 3d laser scanners give feedback to the control system for the robotic positioning of the to-be-welded parts. the source of energy needed for welding in the robotic system for adaptive manipulation and welding of objects is a robotic welding machine. the robotic welder is equipped with a digital interface enabling its parameters to be set from the central control system via digital communication. adaptation of the workplace for the handling of parts and products of various shapes is enabled by a quick-change system of robotic grippers. each robotic manipulator has one robotic gripper stand with six positions for setting up effectors of different types.
the robotic system for adaptive manipulation and welding of objects also includes an automated system for measuring process quantities, which is connected to the production and quality information system by digital communication. coordination of the workplace components is ensured by a central control system, which is a mediator and provider of the process variables scanned at the workplace. the process variables are provided to the human-machine interface and to the sql database, where digital data are registered and archived. power and media distribution systems provide operating power distribution, air distribution for the operation of the grippers, and inert gas distribution to create a protective atmosphere. 3. digital tools and flow of digital data the diagram in figure 2 shows the digital tools and the flow of digital data. an important source as well as consumer of the data is the prototype of the workplace itself. the data of the robotized process are read, and the required parameters are set, using a programmable logic controller, which acts as the central control system of the robotic system for adaptive manipulation and welding of objects. figure 1. robotic system for adaptive manipulation and welding of objects. figure 2. digital tools and data flow in the prototype of the robotic system for adaptive manipulation and welding of objects. 3.1. robot controllers there are three robot controllers in the prototype of the robotic system for adaptive manipulation and welding of objects. two controllers control the robots for the manipulation of to-be-welded objects; one of them also controls the manipulation of products after the welding operation. the third robot controller controls the trajectory of the tool centre point, i.e. the tungsten electrode tip. 3.2.
robotic removal of to-be-welded parts from 3d laser scanners robotic positioning of the to-be-welded parts, removed from the parts warehouse and brought to a position close to the welding position, is performed automatically based on the feedback from the 3d laser scanners. each robotic manipulator has one 3d laser scanner able to obtain point cloud data characterising the state of the warehouse of the stored parts before every single removal. this automatic robotic manipulation process is shown in figure 3. 3.3. 2d laser measurement system the welding robot is equipped with a 2d laser sensor that reads the data needed to evaluate the geometry of the weld gap. the measured data are used to correct the weld gap, to generate the welding trajectory, and to automate the quality control of the performed welds. 3.4. ftp server the ftp server contains data in file format. cad data corresponding to the design of the prototype workplace (including the design of the variable robotic grippers and the support constructions for setting up parts in the warehouses) are stored in this data repository. data on the required design of the positioned parts, the to-be-welded parts, and the welded parts defining the desired product are also stored here. the ftp server also contains simulation files of the robotic system for adaptive manipulation and welding of objects as well as thermodynamic simulation files. in addition, the ftp server stores files with measured data obtained from the 2d laser scanner, thermovision camera measurements, historical trends from the automated system for process measurements, and photo documentation. 3.5. sql server the database of the robotic system for adaptive manipulation and welding of objects is implemented and operated in the ms sql server environment. the database contains records of robotic welds, data on the required technological parameters of robotic welding, and measured data of process variables. 3.6.
web server the web server of the robotic system for adaptive manipulation and welding of objects provides data from the ms sql database through web pages, in the form of tables and of the behaviour of selected quantities in graphical form. it also allows the welding technologist to enter the required values of the welding parameters to optimize robotic welding. 3.7. human-machine interface the human-machine interface (figure 4) for controlling and monitoring the state of the robotic system for adaptive manipulation and welding of objects provides windows for setting the robotic welding parameters and for monitoring the robotic welding process, a control panel for controlling the prototype workplace, and tools for viewing the measured process variables in both tabular and graphical forms. the human-machine interface is installed on operator panels and on computers with visualization immediately next to the prototype workplace. 3.8. quality control the quality control of the final product is implemented in two ways. the first one is an automated weld inspection performed immediately after welding by the robots using the 2d laser scanner installed on the welding robot; automated quality control data are recorded and archived automatically via measurement data files. the second one is a manual quality control of the final product performed by a person who enters the control results into pre-prepared protocols archived on the ftp server. 4. simulation of prototype robotic system for adaptive manipulation and welding of objects simulation of both the robotic positioning of parts and the robotic welding precedes the testing in the prototype and verifies the feasibility of obtaining a quality product. for this simulation, a digital twin of the robotic system for adaptive manipulation and welding of objects is used. figure 3. robotic removal of to-be-welded parts from 3d laser scanners. figure 4. human-machine interface of the robotic system for adaptive manipulation and welding of objects.
the advantage of verifying the design by means of a digital twin lies in the possibility of tuning the correct synchronization of the movements of the three robotic manipulators, which operate in a common working space during the robotic positioning and welding of the product. in this way, it is possible to prevent potential collision events and to optimize the cycle time of the welding and handling processes. in the digital twin, it is also possible to verify the correctness of the design of the robotic grippers. when simulating the robotic manipulation of a given part, it is necessary to take into account its geometry and weight. in the digital twin, dynamic events that occur during the positioning of welded parts can also be simulated. figure 5 shows the verification of the automated robotic grip of a selected robotic gripper from the quick-change robotic gripper system. attachments of robotic grippers of various types can be verified by simulation via a digital twin which contains a quick-change system of robot effectors. before testing the robotic welding in the prototype workplace, a thermodynamic simulation is performed, simulating the propagation of heat from the weld site into the welded body. a render of the thermodynamic simulation is shown in figure 6. thermodynamic simulations are compared with the temperature data measured by a thermovision camera during the welding tests in the prototype workplace and help to optimize the preparation of the specifications of the robotic tig welding parameters. 5. simulation and testing outputs the process of robotic positioning and robotic welding was verified by simulation for both fillet welds and butt welds. the design of the robotic positioning of the final product into the robotic ultrasound diagnostic workplace was also verified by means of a digital twin.
figure 7 shows the robotic manipulation of two to-be-welded flat sheets (top) and the robotic welding while performing a fillet weld on them (bottom). with the same robotic fingers, the sheets were robotically positioned and held also when performing a butt weld, as shown in figure 8. figure 5. simulation of changing the robot’s grippers in the digital twin. figure 6. thermodynamic simulation of tig-welding two tubes together. figure 7. top: simulation of the robotic manipulation of two to-be-welded flat sheets. bottom: simulation of the robotic holding of two to-be-welded flat sheets. figure 8. robotic positioning of to-be-welded parts in the prototype robotic workplace; position before the weld gap correction from the data obtained by the 2d laser scanner. testing of the robotic positioning of the to-be-welded parts in the prototype robotic workplace showed sliding of the sheets in this type of fingers, which had a negative impact on the feasibility and quality of the weld. for this reason, a different type of fingers was designed, as shown in figure 9. during the design of these fingers, the ansys calculation program was used, in which the design was optimized so that the finger was as light as possible and at the same time sufficiently strong. figure 9 shows the designed robotic finger before the optimization (left) and after the optimization (right). the described robotic system was designed for adaptive manipulation and welding; thus, with the help of the robotic manipulators, it is possible to position and hold objects of different sizes and shapes during robotic welding. the designs of all considered scenarios of robotic positioning are verified in advance by the digital twin of the workplace of robotic manipulation and robotic welding. figure 10 shows the output of the simulation of robotic manipulation and robotic welding of cylindrical objects realising a butt weld.
the upper part shows a simulation of the simultaneous positioning of the three robotic manipulators during the welding. after welding, the welding robot is repositioned to its home position by the central control system. the first robotic manipulator is instructed to open the gripper that held the part during the welding process. subsequently, with the second robotic manipulator, the final product is robotically positioned in the output warehouse. the output of the simulation of the robotic positioning of the final product by the second robotic manipulator is shown in the lower part of figure 10. after obtaining suitable robotic trajectories of both robotic manipulators for positioning and holding the to-be-welded parts, as well as the trajectories of the welding robot, the parameters obtained with the digital twin technology were verified in the prototype robotic positioning and welding workplace. photographs from the testing of the robotic process in the workplace prototype are shown in figure 11. the upper part shows the robotic positioning of the to-be-welded parts and the automated measurement of the weld gap by the laser scanner before its correction. after the automatic correction of the weld gap, the synchronized positioning of the robotic manipulators holding the parts and the robotic welding are performed, as shown in the lower part of figure 11. figure 9. design of the robotic fingers. left – before optimization in ansys. right – after optimization in ansys. figure 10. top: simulation of the robotic holding during welding of two to-be-welded cylindrical objects. bottom: simulation of the robotic manipulation of the welded cylindrical object. figure 11. cylindrical objects before their automatic weld-gap correction and during robotized welding in the workspace. the robotic workplace of handling and welding is followed by the workplace of robotic ultrasonic inspection for proving the quality of the welds by a non-destructive method.
one of the robotic manipulators of the welding workplace is used for positioning the welded and cooled products in the workplace of robotic ultrasound diagnostics, as shown in figure 12. the positioning processes are synchronized with each other. after performing the robotic ultrasound diagnostics, the tested product is again positioned by a robotic manipulator and placed in a warehouse of quality products or failures, depending on the results of the implemented non-destructive inspection. figure 13 shows the robotic manipulation of the to-be-tested object in the real workspace. 6. testing the functionality of prototype digital data tools testing of robotic manipulation and robotic welding, as well as testing of the functionality of the digital tools as an effective means of data exchange between the workplace and the workplace operators, welding technologists, quality control workers, designers, robot programmers, the central control system, and the production management, was carried out in the prototype workplace of the robotic system for adaptive manipulation and welding of objects. figure 12. robotic manipulation of the to-be-ultrasonic-diagnostic-tested objects in the digital twin. figure 13. robotic manipulation of the tested objects in the robotic ultrasonic diagnostic workspace. figure 14. testing of robotic manipulation and robotic welding in the prototype of the robotic system for adaptive manipulation and welding of objects. the prototype of the robotic system for adaptive manipulation and welding of objects is shown in figure 14. in the prototype of the robotic system for adaptive manipulation and welding of objects, several welded objects of different sizes and shapes were tested; one of them is shown in figure 15. using a digital twin of the workspace, it was possible to simulate the robotic positioning and welding.
the digital tools that are part of the robotic system for adaptive manipulation and welding of objects make it possible to automatically record data about each tested sample and store them in an sql database. from the database, the measured values of the process variables can be displayed directly at the prototype workplace on the hmi system monitor. at the same time, these measured values are available to the welding technologist to optimize future robotic welding procedures, as well as to the quality control staff, via a web interface in both tabular and graphical forms. the measured values of the welding voltage and welding current from the robotic welding of the test sample are shown in graphical form in figure 16; the graph is drawn from the web application of the prototype. 7. conclusion the research presented in the article shows that digital tools as part of a robotic system for adaptive manipulation and welding of objects can be effectively used in modelling and in design verification by simulations through a digital twin prototype of the robotic workplace. the exchange of digital data takes place between the components of the prototype, consisting of one welding robot, two handling robots equipped with a quick-change robotic gripper system, 3d scanners, a 2d scanner, an automated process variable measurement system, an sql database, an ftp server, and a web portal. the implementation of the digital tools in the prototype makes it possible to adapt the workplace for the production of various products of different shapes and sizes. in this prototype, the web portal allows comfortable entering and mining of the data characterizing the process of robotic manipulation and robotic welding. figure 15. one type of welded samples. figure 16. graph of the measured values of welding voltage (blue) and welding current (violet) for the testing sample in figure 15.
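the record-keeping described above can be sketched with a small relational schema. the table and column names below are hypothetical (the actual ms sql schema of the prototype is not given in the paper), and sqlite is used only to keep the sketch self-contained:

```python
import sqlite3

# in-memory database standing in for the MS SQL server of the prototype
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE weld_record (
        id INTEGER PRIMARY KEY,
        sample_code TEXT NOT NULL,        -- identifier of the tested sample
        target_current_a REAL,            -- required technological parameters
        target_voltage_v REAL
    )""")
conn.execute("""
    CREATE TABLE process_measurement (
        weld_id INTEGER REFERENCES weld_record(id),
        t_s REAL,                         -- time from weld start [s]
        current_a REAL,                   -- measured welding current
        voltage_v REAL                    -- measured welding voltage
    )""")

weld_id = conn.execute(
    "INSERT INTO weld_record (sample_code, target_current_a, target_voltage_v) "
    "VALUES (?, ?, ?)", ("S-01", 120.0, 14.5)).lastrowid
conn.executemany(
    "INSERT INTO process_measurement VALUES (?, ?, ?, ?)",
    [(weld_id, 0.1, 118.9, 14.4), (weld_id, 0.2, 120.3, 14.6)])

# the kind of per-sample trend query a web interface could issue
rows = conn.execute(
    "SELECT t_s, current_a, voltage_v FROM process_measurement "
    "JOIN weld_record ON weld_record.id = weld_id "
    "WHERE sample_code = ? ORDER BY t_s", ("S-01",)).fetchall()
print(rows)
```

the final query illustrates the kind of request behind the tabular and graphical views mentioned above: select the measured trend for one sample, ordered in time.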
acknowledgement this article was written thanks to the support under the operational program of integrated infrastructure for the following projects: "robotic workplace for intelligent welding of small volume production (izvar)", code itms2014+:313012p386, and "digitization of robotic welding workplace (diroz)", code itms2014+:313012s686 (both co-financed by the european regional development fund). references [1] m. lofving, p. almstrom, c. jarebrant, b. wadman, m. widfeldt, evaluation of flexible automation for small batch production, procedia manufacturing, 2018, pp. 177-184. doi: 10.1016/j.promfg.2018.06.072 [2] ch. liu, p. jiang, w. jiang, web-based digital twin modeling and remote control of cyber-physical production systems, robotics and computer-integrated manufacturing, 2020, pp. 1-16. doi: 10.1016/j.rcim.2020.101956 [3] f. tao, h. zhang, a. liu, a. y. c. nee, digital twin in industry: state-of-the-art, ieee transactions on industrial informatics, vol. 15, no. 4, april 2019, pp. 2405-2415. doi: 10.1109/tii.2018.2873186 [4] r. ferro, h. sajjad, r. e. c. ordonez, steps for data exchange between real environment and virtual simulation environment, iccms ’21, 25-27 june 2021, melbourne, vic, australia, isbn 978-1-4503-8979-2.
doi: 10.1145/3474963.3474988 a comparison between aeroacoustic source mapping techniques for characterisation of wind turbine blade models with microphone arrays acta imeko issn: 2221-870x december 2021, volume 10, number 4, 147 154 a comparison between aeroacoustic source mapping techniques for the characterisation of wind turbine blade models with microphone arrays gianmarco battista1, marcello vanali2, paolo chiariotti3, paolo castellini1 1 università politecnica delle marche, via brecce bianche, 60121 ancona, italy 2 università di parma, parco area delle scienze, 43124 parma, italy 3 politecnico di milano, via giuseppe la masa, 20156 milano, italy section: research paper keywords: aeroacoustic measurements; microphone array measurements; wind turbines; acoustic source identification citation: gianmarco battista, marcello vanali, paolo chiariotti, paolo castellini, a comparison between aeroacoustic source mapping techniques for characterisation of wind turbine blade models with microphone arrays, acta imeko, vol. 10, no. 4, article 24, december 2021, identifier: imeko-acta10 (2021)-04-24 section editor: francesco lamonaca, university of calabria, italy received july 26, 2021; in final form september 30, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: gianmarco battista, e-mail: g.battista@staff.univpm.it 1.
introduction the growing interest in renewable energy sources is driving advances in several disciplines in order to reduce technological barriers and improve energy conversion efficiency. one of the mainstream technologies is wind power: the worldwide installed capacity of wind energy assets has been growing exponentially since the early 2000s. on the one hand, this sector is grabbing the attention of many industries and research groups; on the other hand, it is going through increasing regulation. in fact, one of the critical aspects of wind turbines is noise pollution. in order to mitigate wind turbine blade noise, the identification of the location and strength of the aeroacoustic noise sources is mandatory. this knowledge makes it possible to improve blade profiles and design effective aerodynamic appendages, such as trailing-edge serrations. in this work, acoustic imaging techniques based on microphone arrays [1] have been used to characterise a scale single-blade rotor, installed in a semi-anechoic chamber, in different operating conditions. the requirements of a mapping technique for this application are: • the ability to deal with rotating sources; • sufficient spatial resolution with respect to the model size to distinguish different sources on the blade; • sufficient dynamic range to identify also weak sources with respect to the strongest one. a first classification of acoustic imaging methods can be made by distinguishing time domain and frequency domain approaches. time domain approaches [2] are typically used for selectively shaping and steering the directivity of the array (e.g., directive-microphone-like behaviour). these methods can be used when abstract characterising the aeroacoustic noise sources generated by a rotating wind turbine blade provides useful information for tackling noise reduction of this mechanical system.
in this context, microphone array measurements and acoustic source mapping techniques are powerful tools for the identification of aeroacoustic noise sources. this paper discusses a series of acoustic mapping strategies that can be exploited in this kind of application. a single-blade rotor was tested in a semi-anechoic chamber using a circular microphone array. the virtual rotating array (vra) approach, which transforms the signals acquired by the physical static array into signals of virtual microphones synchronously rotating with the blade, hence ensuring noise-source stationarity, was used to enable the use of frequency domain acoustic mapping techniques. a comparison among three different acoustic mapping methods is presented: conventional beamforming, clean-sc, and covariance matrix fitting based on iterative re-weighted least squares and a bayesian approach. the latter demonstrated to provide the best results for the application and made possible a detailed characterisation of the noise sources generated by the rotating blade at different operating conditions. the sources of interest are time-variant, both in terms of position and emitted noise [3]. frequency domain approaches are more often used for characterising the acoustic sources in terms of location and strength. most frequency domain techniques make use of the cross-spectral matrix (csm) estimated from the microphone signals as input data. therefore, the effect of incoherent background noise on the acoustic maps can be attenuated by means of the averaging process; moreover, some methods can handle the removal of the csm main diagonal to neglect the contribution of noise that is incoherent across all the array microphones (e.g., wind noise). conventional beamforming (cb) [1] is the most widespread frequency domain mapping technique due to its robustness and low computational cost.
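to make the roles of the csm and of cb concrete, the following sketch estimates a csm by averaging snapshot outer products and evaluates a cb map with optional diagonal removal. the geometry, frequency, and source position are invented for the example and do not correspond to the measurement set-up of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 343.0                      # speed of sound [m/s]
f = 2000.0                     # analysis frequency [Hz]
k = 2 * np.pi * f / c          # wavenumber

# toy geometry: 8 microphones on a unit circle, 3 candidate grid points
mics = np.array([[np.cos(a), np.sin(a), 0.0]
                 for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
grid = np.array([[-0.5, 0.0, 2.0], [0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
src = grid[1]                  # the true (simulated) source position

def steering(p):
    """Free-field monopole steering vector from point p to the mics."""
    r = np.linalg.norm(mics - p, axis=1)
    return np.exp(-1j * k * r) / r

# simulate snapshots and estimate the CSM by averaging outer products
n_snap = 200
a_src = steering(src)
csm = np.zeros((8, 8), dtype=complex)
for _ in range(n_snap):
    s = rng.standard_normal() + 1j * rng.standard_normal()   # source amplitude
    p = a_src * s + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
    csm += np.outer(p, p.conj())
csm /= n_snap

def cb_map(csm, diagonal_removal=False):
    """Conventional beamforming output at each grid point."""
    C = csm.copy()
    if diagonal_removal:
        np.fill_diagonal(C, 0.0)         # discard incoherent self-noise
    out = []
    for g in grid:
        w = steering(g)
        w = w / np.linalg.norm(w)        # normalised steering vector
        out.append(np.real(w.conj() @ C @ w))
    return np.array(out)

print(cb_map(csm), cb_map(csm, diagonal_removal=True))
```

on this toy case the map peaks at the true source location, and removing the csm diagonal lowers the incoherent-noise floor without moving the peak, which is exactly the behaviour exploited in practice.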
However, it suffers from limitations in terms of dynamic range and spatial resolution. In the literature [1], [4], several advanced frequency domain mapping techniques are available that go beyond the limitations of CB and also make quantification of the noise sources possible. Advanced frequency domain mapping techniques are generally preferred to time domain approaches in aeroacoustic applications, since they generate acoustic maps with high dynamics and fine spatial resolution [4]. In the frequency domain, three categories of mapping techniques can be identified, depending on how the region of interest (ROI) is mapped using pressure data at the microphones, i.e., how the inverse operator is defined: beamforming, deconvolution, and inverse methods. The basics of the different approaches for defining inverse operators are provided in the next section, while a detailed review is provided by Leclère et al. [5]. Among beamforming techniques, functional beamforming [6], a variant of CB, is worth noticing. This simple method enhances the performance and flexibility of CB in terms of resolution and dynamics of the maps. However, it is not compatible with diagonal removal and is therefore not very effective in the presence of relevant background noise. One of the most recognised deconvolution methods is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) [7]. This method aims at retrieving the actual source distribution that generated the CB map by solving an inverse problem. The results achievable with DAMAS are generally suitable for all demanding applications; however, the computational effort is quite high, since it requires the calculation of the array point spread function (PSF) for all candidate sources in the ROI and the solution of an inverse problem.
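The DAMAS idea described above — the CB map is the true source distribution blurred by the array PSF, so the sources can be recovered by solving a non-negative inverse problem — can be sketched in a few lines. This is not the original DAMAS solver (which uses a dedicated Gauss-Seidel iteration on the full PSF matrix); the sketch below uses a toy one-dimensional grid, a hypothetical Gaussian-shaped PSF and SciPy's non-negative least squares in its place.

```python
import numpy as np
from scipy.optimize import nnls

# Toy 1-D grid of 30 candidate source positions. The true array PSF depends
# on geometry and frequency; a Gaussian-shaped blur stands in for it here.
n = 30
idx = np.arange(n)
A = np.exp(-((idx[:, None] - idx[None, :]) / 1.5) ** 2)   # PSF matrix

q_true = np.zeros(n)
q_true[[8, 20]] = [1.0, 0.5]        # two point sources of different strength
b_cb = A @ q_true                   # "CB map": the sources blurred by the PSF

# DAMAS-style step: undo the blurring under a non-negativity constraint.
q_hat, residual = nnls(A, b_cb)     # residual ~ 0: the model is consistent
```

With a well-conditioned PSF, the two point sources are recovered at their exact positions and strengths; DAMAS itself solves the same kind of problem on much larger grids, which is what makes its computational effort high.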
The deconvolution method named CLEAN-SC (CLEAN based on Source Coherence) [8] is currently the state of the art as far as aeroacoustic applications are concerned, since it has a low computational cost (just slightly higher than CB) and is very effective in generating maps with high dynamics. The main drawback of CLEAN-SC is its spatial resolution in separating sources close together, since it has the same limitations as CB. Lastly, inverse methods such as generalised inverse beamforming [9], the Bayesian approach to sound source reconstruction [21], the equivalent source method [10] and covariance matrix fitting (CMF) [11] aim at retrieving the complete source map at once, thus being capable of accounting for interactions between sources. However, since they deal with underdetermined and ill-posed problems, they require reliable regularisation techniques (e.g., empirical Bayesian regularisation [12]). The application of frequency domain approaches requires a stationary acoustic field, which is not the case for a rotating source viewed by a static array. The virtual rotating array (VRA) approach [13] has been adopted in this work to fulfil the requirement of a static source field and enable the application of any frequency domain mapping technique. Three methods have been chosen for this test case: CB as baseline, CLEAN-SC since it is a reference technique for aeroacoustic source mapping, and CMF based on iterative re-weighted least squares and a Bayesian approach (CMF-IRLS) [14]. A comparison between the results obtained with the frequency domain techniques is performed. Finally, the characterisation of the wind turbine blade model at different operating speeds with CMF-IRLS is provided.

2. Theoretical background of acoustic source mapping
As far as frequency domain approaches are concerned, both the direct and the inverse acoustic problem can be formulated as a linear problem. Consider a set of 𝑁 elementary sources (monopoles, dipoles, etc.)
whose complex coefficients are collected in the vector 𝒒, and a set of 𝑀 receiver locations where the acoustic pressure is evaluated and collected in the complex vector 𝒑. The discrete acoustic propagator 𝑮 is a complex 𝑀-by-𝑁 matrix that encodes the amplitude and phase relationships between the sources and the receivers at a given frequency. The direct acoustic problem concerns the calculation of the pressures at the receiver locations (effects), given the source strengths (causes) and the acoustic propagator:

𝑮 𝒒 = 𝒑 . (1)

This is a mathematically well-determined problem with a unique solution. Conversely, the calculation of the source strengths (causes) from the observed pressures (effects), for a given 𝑮, represents the inverse acoustic problem. Solving this problem is not trivial, due to its ill-posed nature: the existence, the uniqueness, and the stability of the solution are not guaranteed [15]. The solution of inverse problems can be expressed as

𝒒̂ = 𝑯 𝒑 , (2)

where 𝑯 is the inverse operator, which can assume different forms depending on the chosen approach. It is then clear that the source strengths 𝒒̂(𝑯) can only be estimated; moreover, this estimate strongly depends on the a priori assumptions made on the acoustic propagator and on the measured pressure data 𝒑. Both the direct and the inverse problem can also be written in quadratic form:

𝑮 𝑸 𝑮ᴴ = 𝑷 , (3)
𝑸̂ = 𝑯 𝑷 𝑯ᴴ , (4)

where the superscript (∙)ᴴ denotes the conjugate transpose, and 𝑸 and 𝑷 are the source and pressure cross-spectral matrices (CSM), respectively. Beamforming approaches solve a scalar inverse problem, i.e. each potential source strength in the region of interest is estimated independently from the others. The beamforming inverse operators 𝒉𝑛, named steering vectors, are calculated as functions of the direct propagator columns 𝒈𝑛 and act as spatial filters, whose properties depend on their formulation [16]. Beamforming techniques are widely used for their simplicity and robustness.
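A minimal numerical illustration of eqs. (1)-(4) and of the beamforming steering vectors is given below, assuming a free-field monopole propagator, a single frequency and the simple 𝒉𝑛 = 𝒈𝑛/‖𝒈𝑛‖² steering-vector formulation (not the "formulation IV" used later in the paper); the geometry loosely mimics the 3 m circular array of this work.

```python
import numpy as np

f, c = 2000.0, 343.0                 # frequency [Hz] and speed of sound [m/s] (assumed)
k = 2 * np.pi * f / c                # wavenumber

# M microphones on a circle of 3 m diameter, N candidate monopoles on a line
# 3 m away from the array plane.
M, N = 40, 21
phi = 2 * np.pi * np.arange(M) / M
mics = np.column_stack([1.5 * np.cos(phi), 1.5 * np.sin(phi), np.zeros(M)])
grid = np.column_stack([np.linspace(-0.5, 0.5, N), np.zeros(N), np.full(N, 3.0)])

# Free-field monopole propagator G (M x N): amplitude decay and phase delay.
r = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

# Direct problem, eq. (3): one unit-strength source at grid point 5 -> CSM P.
q = np.zeros(N, complex)
q[5] = 1.0
P = (G @ np.outer(q, q.conj())) @ G.conj().T          # P = G Q G^H

# CB: a scalar inverse problem per grid point, b_n = h_n^H P h_n,
# with steering vector h_n = g_n / ||g_n||^2.
H = G / (np.linalg.norm(G, axis=0) ** 2)
cb_map = np.real(np.einsum('mn,mk,kn->n', H.conj(), P, H))
```

The map peaks at grid point 5, the assumed source position, with unit level; the blurred values at the neighbouring points are the PSF discussed in the next section.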
However, sound source quantification is not possible unless dedicated integration techniques are applied [17]. Deconvolution methods have been developed to overcome the beamforming limitations, since the array spatial response, i.e. the point spread function (PSF), affects the acoustic map returned by beamformers. Deconvolution methods aim at retrieving the actual source distribution that generated the beamforming map by removing the PSF effect. In opposition to the beamforming approach, inverse methods aim at estimating all the potential sources together, hence also accounting for interactions between sources. Frequently, this entails the solution of heavily under-determined problems, since the number of microphones (equations) is much lower than the number of potential sources in the region of interest (unknowns). Another issue with inverse methods is the adoption of a robust regularisation mechanism capable of estimating the proper amount of regularisation depending on the specific problem and the measured data. Also, the choice of the ROI and of its discretisation with elementary sources plays a crucial role in retrieving an accurate solution. Despite the complexity of these methods, their advantages justify their application.

3. Materials and methods
The object of study of this paper is a scale blade model with a radius of 1.5 m. The design of this model targets small wind turbines, defined by the standard IEC 61400 as those having a rotor spanning up to 200 m² (about 16 m diameter). Figure 1 shows a simplified scheme of the experimental setup installed in the semi-anechoic chamber of Università Politecnica delle Marche. The single-blade rotor must reach 650 rpm to operate at nominal conditions in terms of aerodynamic angle of attack and wind speed at the tip. The latter corresponds to 102 m/s (367.2 km/h), which is typical for small wind turbines.
The single-blade rotor is placed at a distance of 3 m in front of the circular microphone array, which has 40 equally spaced 1/4'' microphones (B&K array microphone Type 4951) installed on a circumference of 𝐷 = 3 m diameter. The centre of the array is aligned with the rotation axis, and the microphone plane is parallel to the blade rotation plane. An asynchronous electric motor, controlled by an inverter, drives the single-blade rotor at the desired angular speed. The motor has also been equipped with an incremental encoder for measuring the angular position of the blade. All sensor signals (microphones and encoder) have been synchronously acquired at 102.4 kSamples/s per channel. Each acquisition lasts 7.5 s and is performed at constant angular speed of the rotor. The realisation of acoustic maps for moving sources usually requires time domain beamforming, since the distances from the microphones to the focus points constantly change over time, thus requiring time-dependent delays for focusing the array. However, aeroacoustic applications require maps with a high dynamic range and fine spatial resolution, which is achievable only with the more sophisticated frequency domain approaches. Most of them use the microphone CSM as input data to produce the acoustic map. When dealing with stochastic signals, such as aeroacoustic noise, the cross-spectra must be averaged over several time snapshots to obtain a meaningful spectral estimate and to reduce the effect of uncorrelated background noise. The averaging process requires the sources to be stationary in time and space with respect to the array. However, the pressure signals of the rotating blade, acquired with the static circular array, do not fulfil these conditions; in fact, the source-microphone relative position changes over time.
To address this aspect, the virtual rotating array (VRA) approach [13] has been adopted to turn the sound pressure signals recorded with the static physical array into the signals of a virtual array rotating synchronously with the blade. In this way, the blade appears in a fixed position with respect to the VRA, thus making it possible to adopt any frequency domain imaging technique. The simplest realisation of the VRA requires a circular array, which must be parallel and co-axial with the rotor, and the knowledge of the instantaneous angular position of the blade. When a rotating array is used (whether physical or virtual), the medium does not rotate with it; therefore, the medium appears to rotate from the perspective of the VRA. The acoustic propagator assumed for the calculation of the acoustic maps must consider the propagation of acoustic waves through a rotating flow field to obtain meaningful results.

3.1. Virtual rotating array
The working principle of the VRA relies on the transformation of the pressure signals 𝑝𝑚(𝑡) recorded by the physical array into pressure signals 𝑝𝑚𝑣(𝑡) as if they were recorded by virtually rotating microphones. The virtual array has the same layout as the physical one, thus having 𝑀 = 40 microphones equally spaced along the circumference. The position of a virtual microphone, rotating on the same circumference as the physical array, does not correspond to the position of any physical microphone most of the time, but its signal can be estimated by spatial interpolation of the signals recorded on the array circumference. The instantaneous position of each virtual microphone is determined from the angular position of the rotor 𝜙(𝑡). The calculation of the pressure value for each sample of a virtual microphone signal requires the identification of the pair of adjacent physical sensors to be selected for the interpolation.
These are identified by the indexes 𝑚𝑙 and 𝑚𝑢, which are functions of the time 𝑡 and of the virtual microphone index 𝑚𝑣:

𝑚𝑙(𝑚𝑣, 𝑡) = ⌊𝑚𝑣 + 𝜙(𝑡)/𝛼 − 1⌋ mod 𝑀 + 1
𝑚𝑢(𝑚𝑣, 𝑡) = ⌊𝑚𝑣 + 𝜙(𝑡)/𝛼⌋ mod 𝑀 + 1 , (5)

where ⌊∙⌋ is the floor function and 𝛼 = 2π/𝑀 is the angular spacing between sensors. Once the pair of microphones for spatial interpolation has been selected, for each 𝑡 and 𝑚𝑣, the value of the virtual signal sample 𝑝𝑚𝑣(𝑡) is calculated as the weighted sum of the samples of the two physical microphones:

𝑝𝑚𝑣(𝑡) = 𝑝𝑚𝑙 𝑠𝑙 + 𝑝𝑚𝑢 𝑠𝑢 , (6)

with the weights determined as

𝑠𝑢(𝑡) = 𝜙(𝑡)/𝛼 − ⌊𝜙(𝑡)/𝛼⌋ , 𝑠𝑙(𝑡) = 1 − 𝑠𝑢(𝑡) . (7)

The signals 𝑝𝑚𝑣(𝑡) obtained with this procedure can be used to estimate the microphone CSM.

Figure 1. Scheme of the set-up in the anechoic chamber.

3.2. Processing techniques applied to the case study
The instantaneous angular position of the rotor 𝜙(𝑡) is retrieved from the encoder signal and used to calculate the VRA time histories. The region of interest has been defined as a rectangular area of 1.80 m × 0.90 m, positioned on the rotor plane, i.e. at 3 m from the array plane. From the VRA point of view, this area contains the blade and the hub of the rotor. The rectangular area has been discretised with a grid of monopoles with a 0.02 m step, giving 4186 potential sources. The acoustic propagator from the monopoles to the microphones is modelled with a pressure-to-pressure free-field propagator [14]. Geometric straight-line distances are replaced by the actual propagation distances, calculated considering the rotating flow field. For this purpose, the angular velocity can be assumed constant during each measurement, since its fluctuations are negligible with respect to the mean value. The propagation distances have been calculated with the Acoular software [18].
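The interpolation of eqs. (5)-(7) can be sketched as follows: a vectorised NumPy version with 0-based indexing, so the "mod 𝑀 + 1" of eq. (5) becomes a plain modulo (the function name and array shapes are illustrative).

```python
import numpy as np

def vra_signals(p, phi):
    """Virtual rotating array by linear interpolation, eqs. (5)-(7).

    p   : (T, M) pressure samples of the physical circular array
    phi : (T,) rotor angular position [rad] at each sample
    Returns the (T, M) signals of the virtually rotating microphones.
    """
    T, M = p.shape
    alpha = 2 * np.pi / M                    # angular spacing between sensors
    t = np.arange(T)
    mv = np.arange(M)
    frac = phi / alpha
    # Lower/upper neighbouring physical microphones, eq. (5) (0-based).
    ml = (mv[None, :] + np.floor(frac)[:, None]).astype(int) % M
    mu = (ml + 1) % M
    # Interpolation weights, eq. (7).
    su = (frac - np.floor(frac))[:, None]
    sl = 1.0 - su
    # Weighted sum of the two adjacent physical samples, eq. (6).
    return p[t[:, None], ml] * sl + p[t[:, None], mu] * su
```

Two sanity checks follow directly from the equations: with 𝜙 = 0 the virtual array coincides with the physical one, and with 𝜙 = 𝛼 each virtual microphone reads exactly its neighbouring physical sensor.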
As stated in the introduction, three frequency domain acoustic mapping techniques are applied to the case study, exploiting the VRA signals: CB, CLEAN-SC and CMF-IRLS. The application of CB is intended here as the baseline performance of acoustic imaging techniques. The deconvolution with CLEAN-SC requires choosing only a single parameter, the loop gain, which is set here to 𝜑 = 0.6. Lastly, CMF-IRLS, which belongs to the family of inverse methods, is chosen since it is fully capable of dealing with spatially extended sources, a source configuration naturally expected in this application. CMF-IRLS is used to map the full CSM, without any decomposition, and the sparsity constraint on the solution is enforced by setting 𝑝 = 1. A priori information is injected into the IRLS procedure in the form of spatial weighting (named "aperture function" in the Bayesian approach). The first weighting function is the CB map, which eases the localisation task, since it is a rough but reliable piece of information on the source distribution. The second weighting strategy is adopted to avoid high-level peaks at the edges of the ROI, typical of inverse methods. Figure 2 depicts this weighting function, determined with the cosine function near all the edges, thus resulting in a cosine-tapered spatial window. The point-by-point product of these two weighting functions is used as the total pre-weighting. For all mapping techniques, the diagonal of the CSM is removed, following the common practice adopted in aeroacoustic source imaging to minimise the effect of background noise.

3.3. Considerations on measurement uncertainty
A reference metrological analysis of acoustic beamforming measurements was conducted by Castellini and Martarelli in [19], where a type B approach, based on an analytical model, was adopted to assess how the uncertainty on the input quantities affects the localisation and quantification uncertainties. Instead, Merino-Martínez et al.
[17] investigated the accuracy of different mapping methods in aeroacoustic applications. In this paper, the focus is on the identification of the sources of noise, rather than on their absolute level. Therefore, the acoustic maps must provide an estimate of the source locations and of their relative levels. From this consideration, the target accuracy of the measurement procedure should be sufficient to distinguish different noise sources on the blade. The resolution of the mapping grid has been chosen to be compatible with the blade size and to guarantee at least 2.5 potential sources per wavelength, fulfilling the guideline provided in [20] for inverse problems (CMF-IRLS). The smallest wavelength of analysis is about 0.076 m (the exact value depends on the actual speed of sound). As regards the beamforming-based techniques, the steering vector formulation chosen provides the correct source location at the expense of the source level ("formulation IV" described in [16]). In array measurements, one of the most important aspects is the uniformity of the frequency responses of all the microphones, rather than the absolute quality of the sensors. The array microphones B&K Type 4951 are specifically designed for array applications, since they are phase-matched. The nominal sensitivity of this type of microphone is 6.3 mV/Pa (ref. 250 Hz). All microphones were calibrated with the pistonphone B&K Type 4228, which provides a sine wave at 251.2 Hz ± 0.1 % and 124.0 dB ± 0.2 dB SPL; hence, the individual sensitivity was measured for each sensor. Field calibration was performed before starting the measurement campaign. The technical specifications for the free-field frequency response (ref. 250 Hz) are: ± 1 dB from 100 Hz to 3 kHz, ± 3 dB from 3 kHz to 10 kHz. As regards the phase-matching, the manufacturer guarantees the following specifications with respect to a factory reference: ± 3° from 100 Hz to 3 kHz, ± 5° from 3 kHz to 5 kHz.
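The grid-resolution guideline mentioned above can be checked directly with the numbers quoted in the text (speed of sound of about 342 m/s, upper analysis frequency of 4.5 kHz, 0.02 m grid step):

```python
# Grid-resolution check against the "at least 2.5 potential sources per
# wavelength" guideline of [20], using the values quoted in the text.
c = 342.0          # approximate speed of sound during the campaign [m/s]
f_max = 4500.0     # upper frequency of analysis [Hz]
step = 0.02        # mapping-grid spacing [m]

lambda_min = c / f_max                       # smallest wavelength, ~0.076 m
points_per_wavelength = lambda_min / step    # ~3.8, so the guideline is met
```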
The relative positions of the microphones in the array represent another source of uncertainty, which affects the amplitude and phase of the measured pressure. In fact, a mismatch between nominal and actual sensor locations induces an error (for each microphone) in the spatial sampling of the acoustic field, in terms of amplitude and phase. This influences the quality of the maps, since the nominal layout is used for the calculations. However, the uncertainty on amplitude and phase caused by the frequency responses and by the sensor arrangement can be considered random across the sensors; hence, it can be assumed to be spatial white noise. It is possible to consider this as a sort of "array noise", which is averaged across the microphones and whose mean value tends to zero as the number of sensors increases. Therefore, the source levels in the map are not significantly affected by the array noise. Instead, the standard deviation quantifies the level of the array noise. An accurate estimation of this parameter is rather difficult in practice, but the overall effects can be assessed by comparing the degradation of the array spatial response and of the dynamics of the CB map with respect to the ideal condition. An experimental test was conducted with a point source emitting a sine wave at 2 kHz and positioned on the rotor hub (aligned with the rotation axis). A sine wave is used to have a high signal-to-noise ratio with respect to the environmental acoustic background noise. The CB map obtained from this experiment is compared with the map obtained in ideal conditions with simulated data. A visual inspection of the maps reveals whether the degradation of performance, due to the microphone responses and to the sensor positioning uncertainty, is acceptable or not.

Figure 2. Arbitrary weighting function for CMF-IRLS with respect to the blade geometry.
The performance degradation occurring in this setup does not significantly affect the acoustic maps and is in line with the accuracy requirements. Similar tests were conducted for assessing the rotor-array relative position and alignment. The rotor-array positioning is important to fulfil the stationarity of the rotating sources with respect to the VRA. In addition to the test with the point source on the rotor hub, four other tests were carried out with the same source placed on the blade tip. In these tests, the rotor is placed at different angles (steps of 90°), still using a 2 kHz sine wave. The rotor-array alignment was performed with typical distance sensors; CB maps were then used to verify and correct it. From the acoustic maps, the position of the test point sources is retrieved to estimate the offset and the angle between the rotor and array axes. From the test with the source placed on the hub, the offset is estimated, which turns out to be of the same order of magnitude as the grid resolution, in both the horizontal and vertical directions. The other four tests were used to estimate the misalignment. The least-squares fitting plane is calculated from the four source positions; the scalar product between the normal of the array plane and the normal of the rotor plane is then used to estimate the angle between the two axes, which turns out to be less than 3°. A final test with the same point source, placed at a radius of 0.6 m with the blade rotating at 100 rpm, was conducted to assess the correctness of all the operations needed to map a rotating source via the VRA method. All mapping algorithms were able to correctly locate the test point source on the rotating blade using the VRA signals. The last important aspect is the estimation of the actual speed of sound to be used for the calculation of the propagator and the steering vectors.
For this purpose, two measurements of the air temperature inside the chamber were taken during the microphone recordings, one at the beginning and one at the end of each acquisition. The model adopted for the indirect measurement of the speed of sound 𝑐 is

𝑐 = 331.3 m/s √(1 + 𝑇 / 273.15 °C) , (8)

where 𝑇 is the average of the initial and final air temperatures in °C. The digital air temperature sensor has a resolution of 0.1 °C, which is sufficient for the accuracy requested in this case. Despite the importance of this parameter for the quality of the maps, little attention is often paid to it [19]. In the whole test campaign, 𝑐 is always about 342 m/s.

4. Results
The analysis has been performed from 700 Hz to 4.5 kHz. This band approximately corresponds to the Helmholtz number range 6 to 40 (𝐻𝑒 = 𝐷/𝜆), in which the array provides adequate results; below this range the array has very poor performance. The benchmarking of the mapping techniques has been performed on the measurement acquired at the nominal operating condition of the blade, i.e. 650 rpm. For clarity of the acoustic maps, the frequency range of analysis is split into two bands, in which the noise generation is located in different parts of the blade. Figure 3 shows the CB maps, which are characterised by blurred sources and low dynamics. These effects, caused by the array PSF, make it difficult to identify weaker sources, since they are covered by the sidelobes of the main sources. The maps obtained with CLEAN-SC and CMF-IRLS are shown in Figure 4 and Figure 5, respectively. Both methods enable a better localisation and also reveal weaker sources on the blade. In fact, as expected, these methods provide higher performance than CB in terms of spatial resolution and dynamics of the acoustic maps. In addition, they also provide quantitative information. The advantages of CLEAN-SC are the robustness of the results, the dynamics of the maps and the low computation time.
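The indirect speed-of-sound measurement of eq. (8) in the previous section is simple enough to sketch directly (the function name is illustrative):

```python
import math

def speed_of_sound(temp_c):
    """Speed of sound in air [m/s] from the air temperature in degrees C,
    using the model of eq. (8): c = 331.3 * sqrt(1 + T/273.15)."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# The campaign value of about 342 m/s corresponds, with this model,
# to an average chamber temperature of roughly 18 degrees C.
```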
However, due to the nature of the algorithm, CLEAN-SC does not make it possible to fully represent extended sources [10]. This drawback is partially mitigated by choosing a loop gain 𝜑 < 1, as in this case, but the limitation still holds. Instead, CMF-IRLS is capable of revealing the correct spatial extension of the acoustic sources, but it is computationally more demanding. Despite being an inverse method, CMF-IRLS does not return maps with sources located at the edges of the mapping plane, thanks to the pre-weighting strategy adopted. Another aspect to notice is the different source distribution returned in the low frequency range by CLEAN-SC and CMF-IRLS. Figure 4 (left) depicts the strongest source in between the leading edge and the trailing edge, while Figure 5 (left) shows well-separated sources. This is caused by the localisation mechanism adopted by CLEAN-SC, which relies on the spatial resolution of CB. In fact, the CLEAN-SC algorithm establishes the source position by picking the location of the maximum of the so-called "dirty map", which is the output of CB at the current iteration of the CLEAN-SC procedure. When two sources are closer than the mainlobe width, considering the PSF of the array at the frequency of analysis, the maximum of the total map lies somewhere in between the real sources, depending on their relative strength. This problem does not occur with CMF-IRLS which, being an inverse method, considers all the potential sources at once. Since CMF-IRLS demonstrated to be the best performing among the methods compared in this work, it has been used for characterising the noise emission of the blade at different operating conditions. Two additional rotation speeds of the rotor were tested: 500 and 350 rpm.

Figure 3. CB, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 6 and Figure 7 show source distribution structures similar to those of the nominal working condition. For the lower frequency band, the noise sources lie mainly in the last 0.5 m in the radial direction and are quite aligned with the leading and trailing edges. For the higher band, the noise is mostly located at the tip of the blade: a source is identified at the tip location and the other two are located almost symmetrically with respect to the tip. Some high-frequency noise from the hub is also visible. The extent and the level of the sources decrease proportionally to the angular velocity. In order to give an overview of how the source distribution changes with frequency, a synthetic visualisation is depicted in Figure 8. This view results from the integration of the acoustic map along the chord-wise direction (𝑦 direction); therefore, it shows how the reconstructed source distribution changes with frequency and with the radial direction (𝑥).

Figure 4. CLEAN-SC, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 5. CMF-IRLS, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 6. CMF-IRLS, 500 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 7. CMF-IRLS, 350 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.

5. Conclusions
A measurement campaign has been conducted in a semi-anechoic chamber on a single-blade rotor for its aeroacoustic characterisation with acoustic imaging techniques exploiting microphone arrays. The VRA strategy makes it possible to use the advanced frequency domain approaches for acoustic source mapping that are typically required in aeroacoustic applications. The benchmarking among three methods demonstrated the advantages of CMF-IRLS over CB and CLEAN-SC. The performance of CMF-IRLS is adequate for a detailed characterisation of the acoustic source distribution generated by the wind turbine blade model in different operating conditions.
Therefore, this measurement technique is a powerful tool for improving the design of wind turbine blade models, since it is capable of identifying aeroacoustic noise sources with high dynamic range and spatial resolution. However, its applicability is limited to models of moderate size, since the VRA requires the array and the rotor to be aligned and co-axial.

Acknowledgement
The authors would like to thank Prof. Renato Ricci of Università Politecnica delle Marche for providing the wind turbine blade model used in the measurement campaign and for the useful discussions in commenting on the aeroacoustic results.

References
[1] P. Chiariotti, M. Martarelli, P. Castellini, Acoustic beamforming for noise source localization: reviews, methodology and applications, Mechanical Systems and Signal Processing, vol. 120, 2019, pp. 422-448. DOI: 10.1016/j.ymssp.2018.09.019
[2] R. Dougherty, Advanced time-domain beamforming techniques, 10th AIAA/CEAS Aeroacoustics Conference, American Institute of Aeronautics and Astronautics, 2004. DOI: 10.2514/6.2004-2955
[3] P. Sijtsma, S. Oerlemans, Location of rotating sources by phased array measurements, 7th AIAA/CEAS Aeroacoustics Conference and Exhibit, American Institute of Aeronautics and Astronautics, 2001. DOI: 10.2514/6.2001-2167
[4] R. Merino-Martínez, P. Sijtsma, M. Snellen, T. Ahlefeldt, J. Antoni, C. J. Bahr, D. Blacodon, D. Ernst, A. Finez, S. Funke, T. F. Geyer, S. Haxter, G. Herold, X. Huang, W. M. Humphreys, Q. Leclère, A. Malgoezar, U. Michel, T. Padois, A. Pereira, C. Picard, E. Sarradj, H. Siller, D. G. Simons, C. Spehr, A review of acoustic imaging methods using phased microphone arrays, CEAS Aeronautical Journal, vol. 10, no. 1, Mar. 2019. DOI: 10.1007/s13272-019-00383-4
[5] Q. Leclère, A. Pereira, C. Bailly, J. Antoni, C. Picard, A unified formalism for acoustic imaging based on microphone array measurements, International Journal of Aeroacoustics, vol. 16, no. 4-5, Jul. 2017.
DOI: 10.1177/1475472x17718883

Figure 8. Chord-wise integrated maps versus frequency, CMF-IRLS. The tip and the root of the blade are represented by the horizontal black lines, while the vertical lines represent the centres of the 1/3-octave bands. Top left: 650 rpm. Top right: 500 rpm. Bottom: 350 rpm. All maps share the same colour scale.

[6] R. P. Dougherty, Functional beamforming, 5th Berlin Beamforming Conference, 19-20 February 2014, Berlin, Germany, GFaI e.V., Berlin, 2014.
[7] T. F. Brooks, W. M. Humphreys, A deconvolution approach for the mapping of acoustic sources (DAMAS) determined from phased microphone arrays, Journal of Sound and Vibration, vol. 294, no. 4-5, July 2006, pp. 856-879. DOI: 10.1016/j.jsv.2005.12.046
[8] P. Sijtsma, CLEAN based on spatial source coherence, International Journal of Aeroacoustics, vol. 6, no. 4, Dec. 2007. DOI: 10.1260/147547207783359459
[9] T. Suzuki, L1 generalized inverse beam-forming algorithm resolving coherent/incoherent, distributed and multipole sources, Journal of Sound and Vibration, vol. 330, no. 24, November 2011, pp. 5835-5851. DOI: 10.1016/j.jsv.2011.05.021
[10] G. Battista, P. Chiariotti, M. Martarelli, P. Castellini, Inverse methods in aeroacoustic three-dimensional volumetric noise source localization and quantification, Journal of Sound and Vibration, vol. 473, May 2020, p. 115208. DOI: 10.1016/j.jsv.2020.115208
[11] T. Yardibi, J. Li, P. Stoica, L. N. Cattafesta, Sparsity constrained deconvolution approaches for acoustic source mapping, The Journal of the Acoustical Society of America, vol. 123, no. 5, 2008. DOI: 10.1121/1.2896754
[12] A. Pereira, J. Antoni, Q.
Leclère, Empirical Bayesian regularization of the inverse acoustic problem, Applied Acoustics, vol. 97, October 2015, pp. 11-29. DOI: 10.1016/j.apacoust.2015.03.008
[13] G. Herold, E. Sarradj, Microphone array method for the characterization of rotating sound sources in axial fans, Noise Control Engineering Journal, vol. 63, no. 6, Nov. 2015. DOI: 10.3397/1/376348
[14] G. Battista, G. Herold, E. Sarradj, P. Castellini, P. Chiariotti, IRLS based inverse methods tailored to volumetric acoustic source mapping, Applied Acoustics, vol. 172, Jan. 2021, p. 107599. DOI: 10.1016/j.apacoust.2020.107599
[15] J. Hadamard, Sur les problèmes aux dérivées partielles et leur signification physique, Princeton University Bulletin, vol. 13, 1902, pp. 49-52.
[16] E. Sarradj, Three-dimensional acoustic source mapping with different beamforming steering vector formulations, Advances in Acoustics and Vibration, vol. 2012, 2012, art. no. 292695. DOI: 10.1155/2012/292695
[17] R. Merino-Martínez, S. Luesutthiviboon, R. Zamponi, A. Rubio Carpio, D. Ragni, P. Sijtsma, M. Snellen, C. Schram, Assessment of the accuracy of microphone array methods for aeroacoustic measurements, Journal of Sound and Vibration, vol. 470, March 2020, p. 115176. DOI: 10.1016/j.jsv.2020.115176
[18] E. Sarradj, Acoular - acoustic testing and source mapping software, 2018. Online [Accessed 22 December 2021] http://www.acoular.org
[19] P. Castellini, M. Martarelli, Acoustic beamforming: analysis of uncertainty and metrological performances, Mechanical Systems and Signal Processing, vol. 22, no. 3, April 2008, pp. 672-692. DOI: 10.1016/j.ymssp.2007.09.017
[20] K. R. Holland, P. A. Nelson, The application of inverse methods to spatially-distributed acoustic sources, Journal of Sound and Vibration, vol. 332, no. 22, October 2013, pp. 5727-5747. DOI: 10.1016/j.jsv.2013.06.009
[21] J.
antoni, a bayesian approach to sound source reconstruction: optimal basis, regularization, and focusing, the journal of the acoustical society of america, vol. 131, no. 4, art. no. 4, apr. 2012, pp 2373-2890. doi: 10.1121/1.3685484 https://doi.org/10.1016/j.jsv.2005.12.046 http://dx.doi.org/10.1260/147547207783359459 https://doi.org/10.1016/j.jsv.2011.05.021 https://doi.org/10.1016/j.jsv.2020.115208 http://dx.doi.org/10.1121/1.2896754 https://doi.org/10.1016/j.apacoust.2015.03.008 https://doi.org/10.3397/1/376348 https://doi.org/10.1016/j.apacoust.2020.107599 https://doi.org/10.1155/2012/292695 https://doi.org/10.1016/j.jsv.2020.115176 http://www.acoular.org/ https://doi.org/10.1016/j.ymssp.2007.09.017 https://doi.org/10.1016/j.jsv.2013.06.009 https://doi.org/10.1121/1.3685484 measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1 4 acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1 measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method sefer avdiaj1,2, yllka delija1 1 department of physics, university of prishtina, prishtina 10000, kosovo 2 nanophysics, outgassing and diffusion research group, nanoalb-unit of albanian nanoscience and nanotechnology, 1000 tirana, albania section: research paper keywords: real gases; virial equation of state; compressibility factor; virial coefficients citation: sefer avdiaj, yllka delija, measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method, acta imeko, vol. 11, no. 
2, article 26, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-26
Section Editor: Sabrina Grassini, Politecnico di Torino, Italy
Received August 4, 2021; in final form March 15, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Yllka Delija, e-mail: delijayllka@gmail.com

1. Introduction

The deviations of real gases from ideal-gas behaviour are best seen by determining the compressibility factor [1]:

Z = P Vm / (R T) ,    (1)

where P is the pressure, Vm (= V/n) is the molar volume, T is the temperature, and R is the universal gas constant. At low pressures and high temperatures the ideal gas law can adequately predict the behaviour of natural gas. At high pressures and low temperatures, however, the gas deviates from ideal behaviour and is described as a real gas. This deviation reflects the character and strength of the intermolecular forces between the particles making up the gas. Several equations of state have been suggested to account for the deviations from ideality. A very handy expression that allows for deviations from ideal behaviour is the virial equation of state [2]. This is a simple power-series expansion in either the inverse molar volume, 1/Vm:

Z = p Vm / (R T) = A + B/Vm + C/Vm² + ⋯ ,    (2)

or the pressure, p:

Z = p Vm / (R T) = A′ + B′ p + C′ p² + ⋯ ,    (3)

where A and A′ are the first, B and B′ the second, and C and C′ the third virial coefficients. All virial coefficients are temperature-dependent. According to statistical mechanics, the first virial coefficient is related to one-body interactions, the second virial coefficient to two-body interactions, and the higher virial coefficients to multi-body interactions [1], [3].
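To make the truncated series of equation (2) concrete, the sketch below generates synthetic (1/Vm, Z) points over the paper's pressure range and recovers A and B with a linear least-squares fit. The coefficient values here are illustrative placeholders (argon-like order of magnitude), not measured results.

```python
import numpy as np

R, T = 8.314, 296.0            # J/(mol K); K (room temperature, as in the experiment)
A_true, B_true = 1.0, -1.6e-5  # illustrative values; B in m^3/mol, not a measurement

p = np.linspace(3e3, 130e3, 10)   # Pa, the pressure range used in the paper
inv_Vm = p / (R * T)              # mol/m^3, ideal-gas estimate of the inverse molar volume
Z = A_true + B_true * inv_Vm      # truncated series Z = A + B/Vm, cf. equation (2)

# A straight-line fit of Z against 1/Vm recovers the two coefficients:
B_fit, A_fit = np.polyfit(inv_Vm, Z, 1)   # slope -> B, intercept -> A
```

On real data the same fit yields the standard deviations of slope and intercept as well, which is the role the LINEST spreadsheet function plays later in the paper.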
The main goal of this experiment is to determine values of the second virial coefficient B of helium, argon and nitrogen at room temperature. The second virial coefficient has a theoretical relationship with the intermolecular forces between a pair of molecules and can therefore provide quantitative information on these forces.

Abstract

Generally, there are three primary methods in vacuum metrology: the mercury manometer, the static expansion method, and the continuous expansion method. For pressures below 10 Pa, the idea of the primary standard is that the gas is measured precisely at as high a pressure as possible and then expanded into larger volumes, which allows the expanded pressure to be calculated. An important parameter that must be accounted for in primary vacuum calibration methods is the compressibility factor of the working gas. The influence of the virial coefficients on the realisation of primary standards in vacuum metrology, especially of the static expansion method, is therefore very important. In this paper we present measured data for the virial coefficients of three gases (helium, argon and nitrogen) at room temperature over a pressure range from 3 kPa to 130 kPa. The dominating real-gas term arises from the second virial coefficient. The influence of the higher-order virial coefficients drops rapidly with decreasing pressure, particularly for gas pressures below one atmosphere. Hence, in our calculations the real-gas series was used for the first and second virial coefficients but not for higher-order virial coefficients.
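A quick order-of-magnitude check of the abstract's claim that the real-gas influence drops rapidly with pressure: with the truncated series, |Z − 1| ≈ |B|/Vm ≈ |B| p / (R T). The B value below is a nitrogen-like illustration, not a result of this work.

```python
# Order-of-magnitude sketch of the real-gas correction |Z - 1| ~ |B| p / (R T),
# using an illustrative nitrogen-like second virial coefficient.
R, T = 8.314, 296.0   # J/(mol K); K
B = -5.1e-6           # m^3/mol (illustrative)

corr = {p: abs(B) * p / (R * T) for p in (3e3, 130e3)}   # Pa -> relative correction
for p, c in corr.items():
    print(f"p = {p/1e3:5.0f} kPa: |Z - 1| ~ {c:.1e}")
```

At the low-pressure end of the range the correction is roughly two orders of magnitude smaller than at the high end, which is why the series can be truncated after B.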
The results will serve the scientific community in the field of metrology for the most accurate measurement standards in vacuum metrology; in atomic physics they will show the level of interaction of atoms and molecules of certain gases; and from a chemical point of view we gain knowledge about the behaviour of gases at different pressures. In this way it becomes possible to obtain information about the limit of the transition from 'ideal gas' to 'real gas' and vice versa.

2. Experimental setup

A schematic illustration of the experimental setup is shown in Figure 1. Our system comprises five chambers (V0, V1, V2, V3 and V4) of significantly different sizes, the volumes of which have been determined using the gas expansion method. Vacuum isolation valves are installed between the chambers, and a turbomolecular pump (HiCube Eco) is connected to valve 5. A capacitance diaphragm gauge (CDG025D) is attached to chamber V0 to measure the pressure precisely before and after expansion. The whole experiment was performed under well-controlled ambient temperature in an air-conditioned laboratory. To reduce temperature drifts and transient local heating during the working day, we kept all electrical equipment (pump and lights) switched on at all times (24 h/day). The average temperature was 296 K and the pressure range was from 3 kPa to 130 kPa [4].

3. Volume determination

To determine the volumes of the vacuum chambers, the value of one of the volumes must be known. We measured the geometrical dimensions of the known volume V0 with a calibrated caliper. A cylinder with radius r and height h has a volume V0 = π r² h. Using the measured dimensions we calculated a mean value V0 = 0.71 l with an associated uncertainty of 0.14 % for a coverage factor k = 1. All uncertainties throughout this paper are given for a coverage factor k = 1 [1], [4].
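The cylinder volume and its relative uncertainty follow from first-order propagation of the caliper uncertainties. The dimensions and uncertainties below are hypothetical (the paper reports only the resulting V0 = 0.71 l and 0.14 %), chosen so the volume comes out near the reported value.

```python
import math

# Hypothetical caliper readings; only V0 and its uncertainty are given in the paper.
r, u_r = 0.045, 2.0e-5    # m: radius and its standard uncertainty (k = 1)
h, u_h = 0.1116, 5.0e-5   # m: height and its standard uncertainty (k = 1)

V0 = math.pi * r**2 * h   # cylinder volume, V0 = pi r^2 h
# First-order propagation: (u_V / V)^2 = (2 u_r / r)^2 + (u_h / h)^2
rel_u = math.sqrt((2 * u_r / r) ** 2 + (u_h / h) ** 2)
print(f"V0 = {V0 * 1e3:.3f} l, relative standard uncertainty = {rel_u * 100:.2f} %")
```

Note the factor 2 on the radius term: V0 depends on r squared, so the radius uncertainty counts twice in the relative budget.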
To determine the volumes V1, V2, V3 and V4 we therefore used the static expansion method, in which the pressures before and after an expansion determine the expansion ratio. The gas is first enclosed in the smaller volume and then allowed to expand into the larger volumes under nearly perfect isothermal conditions [5]. This procedure is applicable only if the pressure after expansion can be measured with about the same accuracy as the pressure before expansion. Argon was used as the gas for this type of calibration, although helium and nitrogen could also be used. The purity of the argon was 5.0 N (99.999 %). The mean values, standard deviations and uncertainties of the volumes of the five vacuum chambers are given in Table 1. The measurement uncertainty depends on the ratio of the volumes (pressures before and after the expansions) [6].

4. Compressibility factor

In this section an analytical approach for calculating the compressibility factor of the gases is presented. In this experiment the gas was expanded four times into different volumes. The experimental procedure can be described as follows. At the beginning, all valves are opened and the entire system is pumped down to less than 10⁻⁵ Pa. Valves 2, 3, 4 and 5 are then closed, valves 0 and 1 are opened, and gas flows quickly into V0 + V1; the regulator knob is adjusted to fill the system to a pressure slightly above 130 kPa. The pressure is monitored for a few minutes until it is stable and is then recorded as p1; the ambient temperature T is also recorded. Then valve 2 is opened and the gas expands into V0 + V1 + V2. Again, after equilibrium, the pressure is recorded as p2. Then valve 3 is opened and the gas expands into V0 + V1 + V2 + V3; when equilibrium is reached the pressure is recorded as p3. For the final measurement, valve 4 is opened and the gas expands into all volumes V0 + V1 + V2 + V3 + V4.
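The bookkeeping behind the static expansion method can be sketched in a few lines. Assuming an isothermal, ideal-gas expansion, p_before · V_known = p_after · (V_known + V_added), so the unknown added volume follows from the measured pressure ratio. The pressure readings below are hypothetical, not the paper's data.

```python
# Isothermal ideal-gas bookkeeping behind the static expansion method:
# p_before * V_known = p_after * (V_known + V_added).

def added_volume(V_known: float, p_before: float, p_after: float) -> float:
    """Volume opened up in one expansion step, from the pressures before/after."""
    return V_known * (p_before / p_after - 1.0)

# Hypothetical readings for a single expansion step (same pressure units cancel):
print(added_volume(0.71, 100.0, 91.4))   # litres, for 0.71 l known volume
```

Chaining such steps, each newly determined volume becomes the known volume for the next expansion, which is how V1 through V4 are obtained from V0.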
After the gas has expanded into the entire system, the pressure drops below 3 kPa and is recorded as p4. This procedure was repeated nine times, and the equilibrated pressures were recorded at each stage [7], [8]. For the pressure measurements we used a CDG025D with an uncertainty of 0.2 % [9]. Once the pressure of each gas transfer is measured, and the volumes and temperature are known, only the number of moles of gas is needed to find its compressibility factor Z directly. From the general gas equation we obtain the expression for the amount of substance:

n = p V T0 / (T P0 V0) ,    (4)

where T0 is the standard temperature, P0 is the standard pressure and V0 is the molar volume at STP (here V0 denotes the molar volume, not the chamber volume) [1]. After obtaining the complete data set p, V, T and n, the compressibility factor is calculated using equation (1). Plots of the compressibility factor Z as a function of the inverse molar volume 1/Vm for (a) helium, (b) argon and (c) nitrogen are presented in Figure 2. For these real gases the shapes of the curves look slightly different for each gas, and most of the curves only approximately follow the ideal-gas line Z = 1 over a limited pressure range [10]-[12].

Figure 1. Schematic illustration of the experimental setup. Valves 1, 2, 3 and 4 are installed between the chambers V0, V1, V2, V3 and V4. The capacitance diaphragm gauge (CDG) is attached to chamber V0. A black rubber hose connects the gas regulator to valve 0, and a flexible metal hose runs from valve 5 to the turbomolecular vacuum pump.

Table 1. Mean value, standard deviation and uncertainty of the volumes.

            mean       δ           u
V0 (l)      0.7100     9.61·10⁻⁴   1.36·10⁻³
V1 (l)      0.0668     5.82·10⁻⁵   8.71·10⁻⁴
V2 (l)      0.7879     2.04·10⁻³   2.59·10⁻³
V3 (l)      0.1652     2.82·10⁻³   1.71·10⁻²
V4 (l)      0.5416     5.48·10⁻³   1.01·10⁻²

5. Results

5.1. Calculation of virial coefficients

At low pressures, e.g.
below two atmospheres, the third and all higher virial terms can usually be neglected; hence the equation of state (2) becomes:

Z = p V / (n R T) = A + B/Vm + ⋯    (5)

Using spreadsheet software such as Excel, we plotted Z as a function of 1/Vm for each trial. For each gas some curvature is observed, and we determined values for both the first (A) and second (B) virial coefficients. Using the LINEST function in Excel, we determined the values and standard deviations of the slope and y-intercept of a linear fit. By properly combining the values of B and their uncertainties from multiple trials, an average value of this coefficient and its standard deviation were calculated for each gas [1], [3]. The mean values, standard deviations and uncertainties of the first and second virial coefficients for the three gases are summarised in Table 2. Figure 3 shows the values of the second virial coefficient for (a) helium, (b) argon and (c) nitrogen at room temperature.

6. Discussion

In Table 3 the experimental values obtained in this work for the second virial coefficients are compared with values from the literature [1], [13]-[17]. The experimental results show good agreement with the literature data. From these results the following considerations can be made:

a. The compressibility factor of helium is greater than 1, which shows that repulsive forces dominate between molecules and atoms, whereas for argon and nitrogen the value of Z is lower than 1, in which case the attractive forces are stronger (Figure 2).

b. At room temperature the second virial coefficient is positive for helium and negative for argon and nitrogen (Figure 3).

Figure 2. The compressibility factor Z as a function of the inverse molar volume 1/Vm for (a) helium, (b) argon and (c) nitrogen.

Figure 3. Values of the B coefficient for (a) helium, (b) argon and (c) nitrogen at 296 K.

Table 2.
Mean value, standard deviation and uncertainty of the A and B coefficients for helium, argon and nitrogen.

He              mean        δ           u
A               1.00120     1.26·10⁻⁵   4.22·10⁻⁶
B (m³/mol)      1.16·10⁻⁵   2.33·10⁻⁷   7.78·10⁻⁸

Ar              mean        δ           u
A               0.99871     1.63·10⁻⁴   5.42·10⁻⁵
B (m³/mol)      −1.61·10⁻⁵  3.75·10⁻⁷   1.24·10⁻⁷

N2              mean        δ           u
A               0.9998      3.75·10⁻⁵   1.25·10⁻⁵
B (m³/mol)      −5.1·10⁻⁶   7.25·10⁻⁸   2.41·10⁻⁸

7. Conclusions

Based on the obtained experimental data, the pressure generated in the primary pressure standard for these initial pressure values must be corrected by at most 0.005 %, since the deviation of the real gas from ideal behaviour is very small. The gas with the smallest deviation from ideal behaviour was nitrogen, with a correction of about 0.001 %; based on these results, we are going to use nitrogen in future calibration procedures.

Acknowledgement

The present work was partly financed by the Ministry of Education, Science and Technology of the Republic of Kosovo (MEST) for the project "Developing of primary metrology in the Republic of Kosovo: measurement of the compressibility factor of noble gases and nitrogen". Most of the experimental equipment was donated by the U.S. Embassy in Kosovo.

References

[1] J. M. H. Levelt Sengers, M. Klein, J. S.
Gallagher, Pressure-volume-temperature relationships of gases; virial coefficients, National Bureau of Standards, Washington D.C. (1971). Online [accessed 14 March 2022]
https://apps.dtic.mil/sti/citations/ad0719749
[2] T. D. Varberg, A. J. Bendelsmith, K. T. Kuwata, Measurement of the compressibility factor of gases: a physical chemistry laboratory experiment, Journal of Chemical Education (2011), art. 1591.
DOI: 10.1021/ed100788r
[3] P. J. McElroy, R. Battino, M. K. Dowd, Compression-factor measurements on methane, carbon dioxide, and (methane + carbon dioxide) using a weighing method, J. Chem. Thermodynamics 21(12) (1989), pp. 1287-1300.
DOI: 10.1016/0021-9614(89)90117-1
[4] S. Avdiaj, J. Šetina, B. Erjavec, Volume determination of vacuum vessels by gas expansion method, Metrology Society of India, MAPAN 30 (2015), pp. 175-178.
DOI: 10.1007/s12647-015-0137-1
[5] A. Kumar, V. N. Thakur, H. Kumar, Characterization of spinning rotor gauge-3 using orifice flow system and static expansion system, Acta IMEKO 9(5) (2020), p. 325.
DOI: 10.21014/acta_imeko.v9i5.993
[6] C. Mauricio Villamizar Mora, J. J. Duarte Franco, V. J. Manrique Moreno, C. E. García Sánchez, Analysis of the mathematical modelling of a static expansion system, Acta IMEKO 10(3) (2021), pp. 185-191.
DOI: 10.21014/acta_imeko.v10i3.1061
[7] K. Jousten, Handbook of Vacuum Technology, Wiley-VCH, Germany (2016), ISBN 978-3-527-41338-6, pp. 710-715.
DOI: 10.1002/9783527688265.ch2
[8] J. W. Moore, C. L. Stanitski, P. C. Jurs, Chemistry: The Molecular Science, Brooks/Cole, Cengage Learning, 4th edition (March 5, 2010), ISBN-10: 1-4390-4930-0, pp. 440-150.
[9] INFICON Sky CDG025D capacitance diaphragm gauge on Eurovacuum. Online [accessed 20 April 2022]
https://eurovacuum.eu/products/gauges/cdg025d/
[10] H. Boschi-Filho, C. C. Buthers, Second virial coefficient for real gases at high temperature, J. Phys. Chem. 73(3) (1969), pp. 608-615, art. 380.
DOI: 10.1021/j100723a022
[11] R. Balasubramanian, X.
Ramya Rayen, R. Murugesan, Second virial coefficients of noble gases, International Journal of Science and Research (IJSR) 6(10) (2017). Online [accessed 14 March 2022]
https://www.ijsr.net/archive/v6i10/art20177192.pdf
[12] C. Gaiser, B. Fellmuth, Highly-accurate density-virial-coefficient values for helium, neon, and argon at 0.01 °C determined by dielectric-constant gas thermometry, The Journal of Chemical Physics 150 (2019), art. no. 134303.
DOI: 10.1063/1.5090224
[13] B. Schramm, E. Elias, L. Kern, Gh. Natour, A. Schmitt, C. Weber, Precise measurements of second virial coefficients of simple gases and gas mixtures in the temperature range below 300 K, 95 (1991), pp. 615-621.
DOI: 10.1002/bbpc.19910950513
[14] A. Hutem, S. Boonchui, Numerical evaluation of second and third virial coefficients of some inert gases via classical cluster expansion, Journal of Mathematical Chemistry 50 (2012), pp. 1262-1276.
DOI: 10.1007/s10910-011-9966-5
[15] E. Bich, R. Hellmann, E. Vogel, Ab initio potential energy curve for the helium atom pair and thermophysical properties of the dilute helium gas. II. Thermophysical standard values for low-density helium, Molecular Physics 105(23-24) (2007).
DOI: 10.1080/00268970701730096
[16] D. W. Rogers, Concise Physical Chemistry, Wiley, 1st edition, March 2011, pp. 18-34, ISBN: 978-0-470-52264-6.
[17] D. White, T. Rubin, P. Camky, H. L. Johnston, The virial coefficients of helium from 20 to 300 K, J. Phys. Chem. 64(11) (1960), pp. 1607-1612, art. 774.
DOI: 10.1021/j100840a002

Table 3. Comparison between the experimental values of the second virial coefficients and literature values.
             T (K)    He          Ar           N2
Literature   293      11.2 [14]   −16.9 [14]   −6.1 [14]
             296      11.7 [13]   −16.0 [13]   −5.0 [13]
             298      11.8 [15]   −15.8 [16]   −4.5 [16]
             300      11.9 [17]   −15.7 [1]    −4.7 [1]
This work    296      11.6        −16.1        −5.1

B in 10⁻⁶ m³/mol

Digital signal processing functions for ultralow frequency calibrations

Acta IMEKO | ISSN: 2221-870X | December 2020 | Volume 9 | Number 5 | 374-378

Henrik Ingerslev1, Søren Andresen2, Jacob Holm Winther3
1 Brüel & Kjær, Danish Primary Laboratory for Acoustics, Denmark, henrik.ingerslev@bksv.com
2 Hottinger, Brüel and Kjær, Denmark, soren.andresen@bksv.com
3 Brüel & Kjær, Danish Primary Laboratory for Acoustics, Denmark, jacobholm.winther@bksv.com

Abstract: The demand from industry to produce accurate acceleration measurements down to ever lower frequencies and with ever lower noise is increasing [1][2]. Different vibration transducers are used today for many different purposes within this area, like detection of and warning for earthquakes [3], detection of nuclear testing [4], and monitoring of the environment [5]. Accelerometers for such purposes must be calibrated in order to yield trustworthy results and provide traceability to the SI system accordingly [6].
For these calibrations to be feasible, suitable ultra-low-noise accelerometers and/or signal processing functions are needed [7]. Here we present two digital signal processing (DSP) functions designed to measure ultra-low-noise acceleration in calibration systems. The DSP functions use dual-channel signal analysis on signals from two accelerometers measuring the same stimulus, and use the coherence between the two signals to reduce noise. Simulations show that the two DSP functions estimate calibration signals better than the standard analysis. The results presented here are intended to be used in key comparison studies of accelerometer calibration systems [8][9], and may help extend the current general low-frequency range from e.g. 100 mHz down to ultra-low frequencies of around 10 mHz, possibly using largely the same instrumentation.

Keywords: low-noise; coherent power; coherent phase; calibration; dual-channel; ultra-low frequencies

1. Introduction

In the field of dual-channel signal analysis there are some very powerful functions for analysing signals, such as the well-known frequency response function and the coherence. But there are also other functions, like the coherent power function (COP) and the non-coherent power function, which are very powerful for decomposing noisy signals into their coherent and non-coherent parts [10][11][12]. Consider an accelerometer calibration setup with two accelerometers mounted close to each other and measuring the same stimulus. Both measure the acceleration of the shaker, but since they are different sensors with different conditioning they have different noise. The two signals will have a coherent part, which is the acceleration signal, and a non-coherent part, which is the noise. Hence the COP can be a powerful tool for extracting the signal from the noise and thereby increasing the measurement accuracy of the power.
A similar function for increasing the measurement accuracy of the phase by separating the coherent phase from the non-coherent phase is derived in the next section, called the coherent phase (or argument) function (COA). For the COA to work properly, it is crucial that the signal applied to the shaker is a continuous signal, like a sine or a multi-sine, and that the frequencies of the sines are very precise and phase-synchronised with the frequencies of the Fourier transformation, to prevent the phase from drifting or even jumping. More details on this are given in section 1.2. The two DSP functions analysed in this article may prove relevant for, e.g., key comparisons of calibration systems down to extremely low frequencies of around 10 mHz, where noise becomes a real challenge [7][8][9]. The degree to which the COP and the COA can separate a signal into coherent and non-coherent parts increases with the length of the measurement, and generally depends on parameters such as how many time samples the measurement is divided into, how long each time sample is, the sampling rate, and the Fourier transform used.

2. Digital signal processing functions theory

In this section the theory for the two DSP functions is outlined. The two functions are based on dual-channel signal analysis and can give a better estimate of signals in very noisy environments than standard signal analysis. The first function is the COP, for estimating the power or amplitude of the signal; the second is the COA, for estimating the phase. Both functions rely on the coherence between the two signals.

1.1. Coherent power function

Consider two sensors both measuring the same stimulus and positioned close enough for their mutual transfer function to be considered unity.
As illustrated in Figure 1, the noise-free signal u(t) and the noise from each sensor, n(t) and m(t), yield the output signals of the two sensors, a(t) and b(t). Now consider j = 1 … N discrete time samples measured with the two sensors:

a_j(t_i) = u_j(t_i) + n_j(t_i)    (1)
b_j(t_i) = u_j(t_i) + m_j(t_i)    (2)

Here t_i is the discrete time within each time sample, u_j is the discrete signal, and n_j and m_j are the discrete noise in each time sample. By discrete Fourier transformation of equations (1) and (2) we get

A_j(f_k) = U_j(f_k) + N_j(f_k)    (3)
B_j(f_k) = U_j(f_k) + M_j(f_k)    (4)

where f_k is the discrete frequency, and U_j, N_j and M_j are the discrete Fourier transforms of u_j, n_j and m_j, respectively.

Figure 1: Illustration of the signals used in the dual-channel signal analysis and in the derivation of the two DSP functions. u(t) is the signal we want to measure, and n(t) and m(t) are the noise contributions of each sensor, which yield the two output signals a(t) and b(t).

The cross spectrum is given by

S_AB(f_k) = (1/N) Σ_{j=0}^{N−1} A_j(f_k) B_j*(f_k)    (5)

for N → ∞, where * denotes the complex conjugate. By inserting equations (3) and (4) into equation (5), and using that U_j, N_j and M_j are uncorrelated, the cross spectrum can be written as

S_AB(f_k) = (1/N) Σ_{j=0}^{N−1} U_j(f_k) U_j*(f_k) ≝ S_UU(f_k)    (6)

where S_UU(f_k) is the power spectrum of the noise-free signal, i.e. the coherent power. Therefore, in this setup the COP can be given by:

COP(f_k) = S_AB(f_k)    (7)

1.2. Coherent phase function

Consider the following function, for N → ∞:

D_AB(f_k) = (1/N) Σ_{j=0}^{N−1} A_j(f_k) B_j(f_k)    (8)

It looks like the cross spectrum of equation (5), but without the complex conjugation. This "non-conjugated cross spectrum" is very useful for deriving a function that measures the phase of the coherent signal. Similarly to the derivation of the COP, we can insert equations (3) and (4) into (8) and use that U_j, N_j and M_j are uncorrelated.
Hence the non-conjugated cross spectrum can now be written as (omitting the f_k dependence for brevity):

D_AB = (1/N) Σ_{j=0}^{N−1} U_j U_j    (9)
     = (1/N) Σ_{j=0}^{N−1} |U_j|² exp(2i ∠U_j)    (10)
     ≃ [(1/N) Σ_{j=0}^{N−1} |U_j|²] · exp(2i (1/N) Σ_{j=0}^{N−1} ∠U_j)    (11)
     = S_UU exp(2i ∠U̅)    (12)

From equation (10) to (11) we have approximated the summation of vectors with lengths |U_j|² and angles 2∠U_j by vectors with the correct lengths but all with the mean angle 2∠U̅. Figure 2 illustrates this approximation and shows that for relatively small changes in angle from sample to sample it is a good approximation. In calibration applications the signal measures the acceleration of the shaker, and by applying to the shaker a sine or multi-sine whose frequencies are exactly equal to, and phase-synchronised with, the Fourier frequencies, the phase can be kept steady from sample to sample without drifting; the approximation will therefore be very good in such calibration applications. In equation (12), ∠U̅ is the mean phase of the coherent signal. Therefore the coherent phase function can be given by COA = ∠U̅, and by rewriting equation (12) the COA can be expressed as:

COA(f_k) = (1/2) imag(ln(D_AB(f_k)))    (13)

In the derivation of equation (13) we have used that the signal power S_UU is purely real and can therefore be omitted.

Figure 2: Schematic graphs illustrating the approximation from equation (10) to (11). The arrows represent U_j, and the x- and y-axes are the real and imaginary parts of U_j. (a) shows the correct summation, where each arrow has length |U_j|² and angle 2∠U_j as in equation (10); (b) shows the approximate summation, where each arrow has the correct length but the mean angle 2∠U̅ as in equation (11). As can be seen from (a) to (b), if U_j does not change too much between samples the approximation is good.

3. Simulations

In this section we test the COP and COA functions on simulated data.
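The estimators of equations (5), (7), (8) and (13) can be sketched in a few lines of NumPy. The record sizes, the single test tone, and its amplitude and phase below are illustrative choices, not the paper's parameters; the tone is placed exactly on an FFT bin, as the derivation requires.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, fs = 256, 2048, 20.48    # averages, points per time sample, Hz (illustrative)
f0, U0, phi = 0.5, 1.0, 0.3    # tone frequency on an FFT bin; amplitude; phase (rad)
t = np.arange(L) / fs
k = int(round(f0 * L / fs))    # FFT bin index of the tone

u = U0 * np.sin(2 * np.pi * f0 * t + phi)   # common signal u(t), same in both sensors
A = np.array([np.fft.rfft(u + 0.5 * rng.standard_normal(L)) for _ in range(N)])
B = np.array([np.fft.rfft(u + 0.5 * rng.standard_normal(L)) for _ in range(N)])

S_AB = (A * np.conj(B)).mean(axis=0)   # cross spectrum, equation (5)
D_AB = (A * B).mean(axis=0)            # non-conjugated cross spectrum, equation (8)
COP = S_AB                             # coherent power, equation (7)
COA = 0.5 * np.imag(np.log(D_AB))      # coherent phase, equation (13)

amp = 2.0 * np.sqrt(np.abs(COP[k])) / L   # rfft scaling back to sine amplitude
# amp ≈ U0, and COA[k] ≈ phi - pi/2 (a sine's spectral phase lags its argument by pi/2)
```

The uncorrelated noise terms average out of both S_AB and D_AB, so the tone's amplitude and phase are recovered even though each individual record is noisy.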
We calculate the discrepancy between the functions' estimates of the signal amplitude and phase and the true values, and compare it with standard signal analysis, i.e. averaging the amplitude and phase over all samples. When measuring the cross spectrum or the non-conjugated cross spectrum to obtain the COP and the COA, only a finite number of samples N is measured, which therefore yields only an estimate of the COP and the COA. Hence, the more samples, the more precise the estimate. In the following, the simulations are based on a 102 400 s (~28 h) time record divided into N = 1024 samples of 100 s each, with 2048 discrete measurement points in each sample, and a Fourier transform from 10 mHz to 10.24 Hz in 10 mHz steps. How well the COP and COA work is estimated by representing a_j and b_j from equations (1) and (2) with simulated data. Here the signal u_j is a multi-sine with l = 1 … M frequencies f_l and phases φ_l, all with amplitude U0:

u_j(t_i) = U0 Σ_{l=1}^{M} sin(2π f_l t_i + φ_l)    (14)

The noise terms n_j and m_j are randomly generated white noise with amplitude N0, and the signal-to-noise ratio is given by:

SNR = U0 / N0    (15)

The frequencies f_l of the multi-sine in equation (14) must be precisely equal or synchronised to a subset of the Fourier-transform frequencies f_k from equations (3) and (4); otherwise the COA will drift. This requirement is easily met in the simulations presented here, since the subset of frequencies f_l can be set to be identical to some of the frequencies f_k, but in real measurements it might be challenging to meet.

Figure 3: Based on simulated data, the coherent power function and coherent phase function are tested for their strength in estimating the amplitude and phase of sine waves in noisy data. The graphs show the deviation of the amplitude and phase from the true values. (a) shows the mean amplitude, 0.5(A + B), and the coherent amplitude, √COP.
(b) shows the mean phase, 0.5(∠A + ∠B), and the coherent phase, COA.

Figure 3(a) shows, in red circles, the discrepancy between the signal amplitude U0 from equation (14) and the coherent amplitude, defined as √COP(f_k), for a signal-to-noise ratio of SNR = 0.32. For comparison we also plot, in blue circles, the mean amplitude 0.5(A(f_k) + B(f_k)), where A(f_k) = Σ|A_j(f_k)| and B(f_k) = Σ|B_j(f_k)| are the amplitudes averaged over all samples, and we plot only at the frequencies of the signal, i.e. at f_k = f_l. It is clearly seen that the COP function estimates the amplitude better than the mean amplitude over the full frequency range.
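The amplitude comparison of Figure 3(a) can be reproduced in miniature. The sketch below uses record sizes reduced from the paper's 1024 × 2048 and a single bin-centred tone; it shows the qualitative effect, not the paper's numbers: the sample-averaged magnitude is biased high by the noise power, while the COP-based estimate is not.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 512, 1024            # averages and points per sample (smaller than the paper's)
U0, SNR = 1.0, 0.32         # signal amplitude and the Fig. 3(a) signal-to-noise ratio
N0 = U0 / SNR               # noise amplitude, equation (15)
k = 64                      # FFT bin carrying the tone
u = U0 * np.sin(2 * np.pi * k * np.arange(L) / L)

# Keep only the tone's bin of each record's spectrum for the two sensors:
A = np.array([np.fft.rfft(u + N0 * rng.standard_normal(L))[k] for _ in range(N)])
B = np.array([np.fft.rfft(u + N0 * rng.standard_normal(L))[k] for _ in range(N)])

scale = 2.0 / L                                         # rfft bin -> sine amplitude
mean_amp = scale * 0.5 * (np.abs(A).mean() + np.abs(B).mean())
cop_amp = scale * np.sqrt(np.abs((A * np.conj(B)).mean()))
print(mean_amp, cop_amp)    # mean_amp sits above U0; cop_amp stays close to U0
```

Averaging magnitudes adds the (always positive) noise power to the signal power before the square root, whereas the cross spectrum lets the uncorrelated noise of the two sensors cancel on average.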
It shows that the COP is better than the mean amplitude from about SNR = 1, and the COA is better than the mean phase from about SNR = 0.04. The "critical" SNR at which the COP and COA functions have a lower deviation than the simple mean values is therefore different for the COP and the COA. Although the data in figure 4 depend strongly on the length of the time record and the Fourier transform used, this trend of a lower critical SNR for the COA function than for the COP function was always seen and needs further investigation.

4. Summary

We have derived two DSP functions for very accurate measurements of amplitude and phase from two accelerometers measuring the same stimuli. We have tested the two DSP functions on simulated data, and our findings based on the simulations show that the functions are promising tools for accurately measuring the amplitudes and phases of a multi-sine wave in a noisy environment. These findings may prove useful for key comparisons of accelerometer calibration systems down to ultra-low frequencies, since for such measurements noise becomes a huge problem as the frequency approaches 10 mHz. Hence, by replacing the single accelerometer used in key comparisons with two accelerometers and by using the two DSP functions described here, it may be possible to extend the frequency range in key comparisons down to ultra-low frequencies of around 10 mHz.

5. Acknowledgements

We would like to acknowledge Torben Rask Licht and Lars Munch Kofoed for fruitful discussions. We would also like to acknowledge the EU EMPIR project "Metrology for low-frequency sound and vibration", 19ENV03 Infra-AUV, as the initial motivation for this approach to optimising methods of analysis in noisy environments, which is especially relevant for low-frequency calibration.

6. References

[1] T. Bruns, S. Gazioch, "Correction of shaker flatness deviations in very low frequency primary accelerometer calibration", IOP Metrologia, vol. 53, no. 3, pp. 986 (2016).

[2] J. H. Winther, T. R.
Licht, "Primary calibration of reference standard back-to-back vibration transducers using modern calibration standard vibration exciters", Joint Conference IMEKO TC3, TC5, TC22, Helsinki, (2017).

[3] G. Marra, C. Clivati et al., "Ultra-stable laser interferometry for earthquake detection with terrestrial and submarine optical cables", Science, vol. 361, issue 6401, pp. 486-490, (2018).

[4] P. Gaebler, L. Ceranna et al., "A multi-technology analysis of the 2017 North Korean nuclear test", Solid Earth, 10, 59-78, (2019).

[5] R. A. Hazelwood, P. C. Macey, S. P. Robinson, L. S. Wang, "Optimal transmission of interface vibration wavelets: a simulation of seabed seismic responses", J. Mar. Sci. Eng. 6, 61-79, (2018).

[6] L. Klaus, M. Kobusch, "Seismometer calibration using a multi-component acceleration exciter", IOP Conf. Series: Journal of Physics: Conf. Series 1065 (2018).

[7] EU EMPIR project "Metrology for low-frequency sound and vibration", (2020).

[8] BIPM key comparison, CCAUV.V-K5 (2017), https://www.bipm.org/kcdb/comparison?id=453

[9] EURAMET key comparison, AUV.V-K5 (2019), https://www.bipm.org/kcdb/comparison?id=1631

[10] H. Herlufsen, "Dual channel FFT analysis part I", B&K Technical Review, no. 1, (1984).

[11] J. S. Bendat, A. G. Piersol, "Random Data", Wiley-Interscience, (1986).

[12] J. S. Bendat, A. G. Piersol, "Engineering Applications of Correlation and Spectral Analysis", Wiley, New York, (1993).

Comment to: L. Mari "Is our understanding of measurement evolving?"
acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-3
acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1

Comment to: L.
Mari "Is our understanding of measurement evolving?"

Franco Pavese 1
1 Independent scientist, research director in metrology (formerly at CNR, then INRiM, Torino), 10139 Torino, Italy

Section: Technical Note
Keywords: metrology; measurement science; measurement process; informational component
Citation: Franco Pavese, Comment to: L. Mari "Is our understanding of measurement evolving?", Acta IMEKO, vol. 11, no. 2, article 19, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-19
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received February 7, 2022; in final form June 1, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Franco Pavese, e-mail: frpavese@gmail.com

1. Introduction

After reading the recently published paper [1], I feel, perhaps because I have long been a metrologist, a compelling need to write the comments below, more from a sense of surprise than from any lack of appreciation of its author. The comments concern the following quoted parts (where the added section numbers are the ones used below):

(2.) – "Doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge" (original italics);

(3.) – "Measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component";
– "Listing some necessary conditions that characterize measurement, and that plausibly are generally accepted, is not a hard task: measurement is … (iv) that produces information in the form of values of that property. Indeed, (iv) characterises measurement as an information process".

(4.)
– "However, not any such process is a measurement, thus acknowledging that not any data acquisition is a measurement. We may call 'property evaluation' a process fulfilling (i)-(iv). What sufficient conditions characterize measurement as a specific kind of property evaluation? The answer does not seem as easy" (blank lines added for clarity) … "The possible evolutionary perspectives of measurement can be considered along four main complementary, though somewhat mutually connected, dimensions:
– measurable entities as quantitative or non-quantitative properties …;
– measurable entities as physical or non-physical properties;
– measuring instruments as technological devices or human beings;
– measurement as an empirical or an informational process, and therefore the relation between measurement and computation …".

Abstract: The present contribution is a comment which addresses the paper published in this journal "Is our understanding of measurement evolving?", authored by Luca Mari. This technical note concerns specific parts of that paper, namely the statements: "Doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge", "Measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component" and "What sufficient conditions characterise measurement as a specific kind of property evaluation?", and discusses alternatives.

2. "Isn't metrology a 'real' science?"

My position is that metrology is a part of measurement science, a process intended to share common ways of transmitting knowledge to a community that is not limited to a single generation of scientists and practitioners, and of obtaining the necessary consensus. Metrology is the part that defines the meaning of the terms "precision" and "accuracy" by introducing the concept of uncertainty, which is not otherwise necessarily embedded in the meaning of measurement, as happened, e.g., in times prior to modern science. Another mandatory requirement of metrology is the need for a multiplicity of measurements in order to obtain data comparable with each other, so requiring that they are traceable—to a common denominator—to each other, in order to ensure that all scientists are "on the same page". The reason for this is that data, numerical or not, are certainly to be considered facts—as opposed to inferences—but are not necessarily, or usually, as unequivocal as one would wish. Restricting the case to numerical data—e.g., rarely are the published data the "rough" instrumental indications (as recognised, e.g., by the VIM [2])—metrology is the science dealing with correctly setting rules for their elaboration. Metrological competence is also necessary to implement increases in the precision of measurements, the latter being a normal goal of science as one of the ways of increasing knowledge. Yet, in order to allow results to be shared, "good-practice" rules must be installed, eventually ordered into protocols and conventions, only apparently a non-scientific stage of the measurement process. The latter instead requires scientific competence and shared meanings and, consequently, a language (also across local ones). At the end of this process, language is stored in written rules ("scripta manent, verba volant")—notation being a specific symbolic language—a feature common to every frame of shared knowledge, called consensus ("pacta sunt servanda"), in the absence of any human possibility of knowing truth. On the other hand, consensus cannot replace or contradict current knowledge; its function is merely a notarial one (an issue becoming more and more critically important today and subject to misuse).
In the above respect, my opinion is that measurement science may be compared to the DNA of observational science, i.e. the body of knowledge, with metrology as its RNA as presently interpreted in biology, i.e. the tool for implementing its basic principles and rules.

3. "Any measurement must then include an informational component"

I agree that a measurement result, numerical or not, is additional information, assumed to be useful for increasing the level of knowledge—otherwise there would be no reason to perform measurements. However, after Shannon extended the scientific meaning of the term information, and especially in the present times dominated by informatics, I suggest that assigning a specific further meaning to the term information has become necessary, as I no longer consider it a generic or unequivocal one. I consider it sufficient here to point out that one meaning of information is the one carried by a value measured according to the scientific definition of measurement, i.e., a "material" information concerning the "external world" as perceived by humans and their apparatuses, built to supplement humans' limited (standard) sensorial capabilities. Instead, I consider a different meaning of information to be what scientists (and anybody else) elaborate or communicate from the thoughts of their minds. The difference is the usual one between becoming informed by our senses about a feature of a phenomenon of the world, and having a personal thought—or one based on an inter-subjective thought, basically without difference as far as this issue is concerned. In other words, in all cases "information" is a concept of the human mind, while a "measured value" is, for a human, an external fact, then assimilated through subsequent mental inference.
This difference is substantial and does not produce, in my opinion, any "evolutionary perspective" (italics added) in measurement science—at least when the perspective does not concern the human mind. In this sense and meaning, I respectfully disagree with clause (3.)(iv) above, because measurement is not an information process in the current sense, and especially not according to the procedures used today in information science.

Let me introduce here a bit of humour, by citing an extreme case that recently occurred to me in this subject matter. It is popular in this period for an author to claim to have found an information method to check whether the current scientific evaluation of measurement uncertainty leads to correct estimates, e.g., for the uncertainty associated with the values of the universal constants of physics (for the Planck constant see [3]). His method is said to be based on information content. I had a short correspondence with him to understand how he implements the information process and obtains his results, until I discovered that he considers the information content of a given physical-constant value to be the number of times that the value is cited in the reference document of the SI—that being the "firm" basis of the rest of his computations …

4. "What sufficient conditions characterize measurement as a specific kind of property evaluation?"

The author indicates as an "evolutionary perspective" basically an extension of the term "measurement" to a wider meaning, namely to categories of observations that historically were not comprised in the current definition of measurement, namely the non-quantitative and the informational ones, a goal that has attracted more attention in recent times. It seems to me that, for that purpose, it would be simpler to use a term different from "measurement", e.g., the one used by the author himself, "evaluation", for the non-quantitative case.
This does not seem to me a diminutio, being used simply to indicate a previous stage of the process, even in the quantitative case [4]—or "representation", if one prefers to avoid any misunderstanding about the existence of a possible quantification.

Concerning instead the issue of measurement vs. computation, I think that "computation" has always been, in modern science, part of the elaboration of the numerical (or logical) data obtained from observations (the theoretical case is not considered here), and that recently this elaboration has been done prevalently via automatic computing. This fact has induced the development of a new discipline in science: informatics. By its nature, its most important influence on science has been an exponential increase over time in the development of new (machine) languages, obviously having their roots in human ones, and thus also concerning the organisation of numerical knowledge for its elaboration and use. In that sense, I see informatics as a marginal follow-up of the measurement process, not an integral stage of it—as are also "simulation" and "extrapolation", both based on models and so actually pertaining to the theoretical frame.

5. Conclusions

I conclude by saying that I am aware, and have long had direct experience, that there are gaps between disciplines—I contributed, thirty years ago, to starting a conference series with precisely this main goal, at that time: to "increase the extent of cooperation by calling scientists from both the mathematical and
the metrological fields to meet and exchange experiences" [5], later extended to computational science. These gaps include language: metrology has developed its own idiom that, in my opinion, is satisfactorily summarised in [2]—for a more extended discussion of this issue see [6]. Different meanings may be assigned to terms in the idioms of other disciplines, namely those of the philosophy of science, or different terms may be used. Consequently, it may easily happen that basic misunderstandings become hard to overcome in inter-disciplinary conversations. On the other hand, this diversity is a richness of science [7].

References

[1] L. Mari, Is our understanding of measurement evolving?, Acta IMEKO (2021) 10 (4), pp. 209-213. ISSN: 2221-870X. DOI: 10.21014/acta_imeko.v10i4.1169

[2] JCGM 200:2012, International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM), 3rd edition, JCGM (2012), pp. 108.

[3] B. Menin, The Boltzmann constant: evaluation of measurement relative uncertainty using the information approach, Journal of Applied Mathematics and Physics (2019) 7, 486-504. DOI: 10.4236/jamp.2019.73035

[4] F. Pavese, Measurement in science: between evaluation and prediction, in Advanced Mathematical and Computational Tools in Metrology and Testing XII (F. Pavese, A. B. Forbes, N. F. Zhang, A. G. Chunovkina, eds.), Series on Advances in Mathematics for Applied Sciences 90 (2021), World Scientific Publishing Co, Singapore, pp. 346-363. ISBN 978-981-124-237-3 (hardcover), 978-981-124-239-7 (ebook)

[5] Advanced Mathematical Tools in Metrology, workshop (P. Ciarlini, M. G. Cox, R. Monaco, F. Pavese, eds., 1993, Turin, Italy), Series on Advances in Mathematics for Applied Sciences 16 (1994), World Scientific Publishing Co, Singapore. ISBN 981-02-1758-7. See the present denomination in ref. [4] above.

[6] F. Pavese, From VIM3 toward the next edition of the International Dictionary of Metrology, ACQUAL, special issue in memory of Paul De Bièvre, 2022 (in press)

[7] F. Pavese, P. De Bièvre, Fostering diversity of thought in measurement, in Advanced Mathematical and Computational Tools in Metrology and Testing X, vol. 10 (F. Pavese, W. Bremser, A. G. Chunovkina, N. Fischer, A. B.
Forbes, eds.), Series on Advances in Mathematics for Applied Sciences vol. 86, World Scientific, Singapore, 2015, pp. 1-8. ISBN: 978-981-4678-61-2, ISBN: 978-981-4678-62-9 (ebook)

IoMT-based biomedical measurement systems for healthcare monitoring: a review
acta imeko issn: 2221-870x june 2021, volume 10, number 2, 174-184
acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 174

IoMT-based biomedical measurement systems for healthcare monitoring: a review

Imran Ahmed 1, Eulalia Balestrieri 1, Francesco Lamonaca 2
1 University of Sannio, 82100 Benevento, BN, Italy
2 University of Calabria, 87036 Arcavacata, Rende (CS), Italy

Section: Research Paper
Keywords: internet of things; internet of medical things; IoT devices; biomedical devices; biomedical measurement systems; non-invasive medical devices
Citation: Imran Ahmed, Eulalia Balestrieri, Francesco Lamonaca, IoMT-based biomedical measurement systems for healthcare monitoring: a review, Acta IMEKO, vol. 10, no. 2, article 24, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-24
Section editor: Ciro Spataro, University of Palermo, Italy
Received March 6, 2021; in final form April 30, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Imran Ahmed, e-mail: iahmed@unisannio.it

1. Introduction

Biomedical measurement systems (BMS) play a key role in the detection and diagnosis of various diseases, providing new solutions for healthcare monitoring and improving bioprocesses and technology for biomedical equipment.
Generally, BMS use measurement devices to collect vital signs, such as heart rate, pulse rate and body temperature, from the human body; these vital signs are then processed by a processing unit. Finally, the results are displayed to aid doctors in the diagnosis of various diseases. However, with old-fashioned BMS (for example, non-portable and non-smart ultrasound machines), acquiring vital signs without disturbing the patient's routine activities is challenging. Moreover, these old-fashioned BMS require patients to visit the hospital for their check-up, which takes a great deal of time out of their busy lives. Therefore, in order to utilise BMS without disturbing routine activities, these systems must be able to acquire data in different scenarios [1]. This means not being limited to situations that require the presence of patients inside hospitals but also covering, for example, industry workers, miners and sports professionals in their working environments, as well as military personnel and individuals in their home environment. Hence, the use of BMS in today's lifestyle demands that the devices belonging to these systems be compact, user friendly and comfortable for the wearer, with adequate measurement accuracy even in a harsh or complex environment [1], [3]. As a result of these emerging requirements, recent research activities have been directed towards improving BMS by using the Internet of Things (IoT) [4]-[8] and by creating a new paradigm, the Internet of Medical Things (IoMT) [1], [9]. These IoMT solutions are mainly based on wearable and implantable biomedical measurement devices [12] using different sensors, such as tactile [13], silicon [14], polymer [15] and optical-based sensors [16], [17], or sensors already integrated into commonly used devices, such as smartphones [3], [18]-[23]. Wearable IoMT BMS typically include devices such as smartwatches, armbands, glasses, smart helmets and digital hearing devices [1], [25].
Today, many wearable devices are smart in the sense that they can locally process signals acquired from sensors and transmit measurement data through the network to other connected devices (on a mobile phone or through hospital systems) so that doctors can promptly monitor and analyse the patient's data to make effective decisions, especially in an emergency [5], [8]. Figure 1 shows some smart wearable and implantable devices that are used for measuring different vital signs. These devices acquire measurement data and then process and send the data to local elaboration units for further processing and for the presentation of the resulting information to clinicians or patients [12]. They are therefore considered a substratum for the development of IoMT BMS.

Abstract: Biomedical measurement systems (BMS) have provided new solutions for healthcare monitoring and the diagnosis of various chronic diseases. With a growing demand for BMS in the field of medical applications, researchers are focusing on advancing these systems, including Internet of Medical Things (IoMT)-based BMS, with the aim of improving bioprocesses, healthcare systems and technologies for biomedical equipment. This paper presents an overview of recent activities towards the development of IoMT-based BMS for various healthcare applications. Different methods and approaches used in the development of these systems are presented and discussed, taking into account some metrological aspects related to the requirements for accuracy, reliability and calibration. The presented IoMT-based BMS are applied to healthcare applications concerning, in particular, heart, brain and blood sugar diseases as well as internal body sound and blood pressure measurements. Finally, the paper provides a discussion of the shortcomings and challenges that need to be addressed, along with some possible directions for future research activities.

In order to stimulate research into the design of innovative BMS, this paper presents an overview of IoMT-based BMS. It is an extended version of a previous contribution to Technical Committee 4 of the 2020 International Measurement Confederation (IMEKO TC4) conference held in Palermo, Italy [9], and takes into account further developments in measurement devices and available techniques used for different medical applications. A discussion concerning each example described is reported, including the advantages, working principles and technology used. In addition, some general issues and challenges related to the metrological aspects of IoMT-based BMS are highlighted. The organisation of this paper is as follows: Section 2 explains the basic architecture of IoMT-based BMS; Section 3 presents the five main categories of existing IoMT-based BMS while also introducing some important metrological issues relating to the measurement devices used in IoMT; Section 4 discusses the metrological challenges for existing BMS; and, finally, the conclusions are presented in Section 5.

2. IoT-based biomedical measurement systems (IoMT)

The main advantage of IoMT-based BMS is that these systems provide online monitoring of a patient's health to facilitate a quick response in an emergency and to offer remote access to doctors, as well as to relatives and the patients themselves, for monitoring targeted vital signs (blood pressure, heart rate, glucose level and so on) [26]. To this end, IoMT-based BMS are usually designed to offer the following features: (i) the continuous monitoring of parameters without disturbing the patient's daily routine, (ii) an alarm triggered in an emergency, and (iii) the use of low-cost measurement devices.
As a consequence, the final aims of an IoMT-based BMS include the following: (i) a reduction in the cost of hospitalisation, (ii) the optimisation of public health costs, (iii) an increase in the independence and quality of life of older adults and (iv) an improvement in the monitoring of hospitalised and/or critical patients. The general architecture of IoMT-based systems is shown in figure 2 [27]. Compared with architectures [28]-[30] that are specifically designed for particular applications, such as heart disease, blood pressure and blood sugar, the system shown in figure 2 is more general and demonstrates the common components belonging to a complete IoT-based BMS: (i) a physical layer, (ii) a data integration layer and (iii) an application service/presentation layer. In the physical layer, IoMT-based BMS mostly use wearable devices to measure the vital signs (heart rate, pulse rate, body temperature, blood pressure, oxygen concentration, lung contraction volume, blood sugar level, respiration rate and so on) of the subjects being monitored. These measurement data are first stored in the storage memory and then transferred to the data integration layer (figure 2) through the internet/Bluetooth or any other communication protocol. In the data integration layer, the received data are processed and then sent to the application service/presentation layer. Nowadays, various types of software are available to extract useful information from the measurement data. At the application service/presentation layer, the data are analysed by the doctor or other experts, enabling them to make effective decisions about the disease. In the following sections, some recently developed IoMT-based systems applied to the diagnosis of various diseases are discussed, along with some metrology-related issues.

3. Existing IoMT-BMS classification overview

IoMT BMS have various medical applications, including healthcare monitoring and the diagnosis of various diseases.
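The three-layer flow described above (physical layer, data integration layer, application service/presentation layer) can be sketched minimally as follows; all class, field and function names are illustrative inventions, not taken from any cited system.

```python
from dataclasses import dataclass

@dataclass
class VitalSign:
    """A single reading produced by the physical layer (a wearable sensor)."""
    name: str
    value: float
    unit: str

def data_integration(raw: list) -> dict:
    """Data integration layer: aggregate and format the raw readings."""
    return {v.name: f"{v.value} {v.unit}" for v in raw}

def presentation(processed: dict) -> str:
    """Application service/presentation layer: render a report for the clinician."""
    return ", ".join(f"{k}: {v}" for k, v in processed.items())

# Example end-to-end pass through the three layers.
readings = [VitalSign("heart rate", 72, "bpm"), VitalSign("SpO2", 98, "%")]
report = presentation(data_integration(readings))
```

A real system would of course transmit between the layers over the internet or Bluetooth, as the text notes, rather than via in-process function calls.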
Based on these applications, this section classifies the existing IoMT BMS into five main categories: (i) IoMT BMS for heart disease, (ii) IoMT BMS for examining internal body sounds, (iii) IoMT BMS for blood pressure, (iv) IoMT BMS for brain diseases and (v) IoMT BMS for blood sugar disease.

Figure 1. Smart biomedical measurement devices: (a) smart wearable measurement devices [24], (b) smart implantable measurement devices.
Figure 2. General architecture of IoMT systems.

For each category, various examples are discussed, examining their advantages, working principles and technology usage as well as their reliability and accuracy. In the literature, it has been observed that researchers in the field of IoMT BMS are usually concerned with applying IoT technologies to different medical applications without focusing on developing suitable solutions in terms of metrology and its requirements. Since IoMT BMS rely on various types of measurement to acquire information about the vital signs of the human body, the reliability and accuracy of these systems play a critical and essential role in their actual ability to provide correct and suitable information that can be used by doctors for their diagnoses. It is also essential that IoMT-BMS devices are properly calibrated to ensure that they are accurate and perform in an appropriate and timely manner. Vital-sign measurements that do not have the required accuracy can result in the misdiagnosis of patients' diseases. Accurate and reliable measurements not only ensure effective treatment but also save the time and costs associated with misdiagnosed patients. Thus, the metrological characterisation and calibration of devices are very important for validating the reliability of IoMT systems, and researchers must consider these characteristics from the initial design phase through to the final testing phase of the IoMT BMS.
The following subsections discuss existing examples of IoMT BMS and the related metrological issues.

3.1. IoMT BMS for heart disease

The early detection of heart disease is very important for saving lives, and IoMT could play a vital role in achieving this goal. In the physical layer, IoMT-based systems for heart-disease detection generally take numerous measurements, such as sugar concentration levels, cholesterol levels, heart rate and pulse rate as well as other vital signs, using sensors. These measurements are usually taken by IoMT devices, such as smartwatches, electrocardiogram (ECG) monitors and other ECG- or optical-sensor-based heart-monitoring medical devices. Smartwatches mostly use optical sensors that scan the blood flow near the wrist to measure these vital signs, while ECG monitors use electrodes to acquire the electrical signals moving through the heart, record the strength and timing of these signals and then display the acquired measurements in graphical form. However, there are a few limitations on measuring these vital signs due to the measurement conditions (for example, a patient sweating during the ECG measurement) and the accuracy of the measuring device [31]. Once the measurements are taken, they are sent to the data integration layer through the internet and may use cloud-based servers for further processing [32]. After processing, the results are analysed by doctors in the application service/presentation layer by means of a mobile app or web page. Additionally, further algorithms based on artificial intelligence (AI) are now available and are integrated into the data integration layer in order to further aid doctors in the diagnostic process [28]. For example, in [29], an IoMT-based detection system for the monitoring of heart-related diseases based on a deep belief network model and a higher-order Boltzmann machine is presented.
This system uses IoT devices, such as embedded sensors and a wearable watch, to measure vital signs, such as heart rate and blood pressure, and to record other physical activities. However, the authors do not provide any details on the types of sensor and wearable watch used in the collection of these data or on how accurate these measurements are. After collection, the required data are transmitted to the healthcare centre to be processed using the higher-order Boltzmann deep belief neural network (HOBDBNN), which learns the features of heart disease from previous analyses. To evaluate the disease-prediction accuracy, the system is implemented using MATLAB, and the collected data are divided into two sets: 70 % of the total data for training the network and 30 % for testing purposes. The HOBDBNN performance is evaluated by different metrics, such as sensitivity, specificity, precision and F-measure [29], which are defined as

Sensitivity/Recall = True positive / (True positive + False negative), (1)

Specificity = True negative / (True negative + False positive), (2)

Precision = True positive / (True positive + False positive), (3)

F-measure = 2 × Precision × Recall / (Precision + Recall), (4)

where a true positive is a model outcome that correctly predicts the positive class, a true negative is a correct prediction of the negative class, a false positive is a model outcome that incorrectly predicts the positive class and a false negative is an incorrect prediction of the negative class. These HOBDBNN performance metrics are then compared with those of other optimised classifiers, such as genetic-algorithm-trained recurrent fuzzy neural networks, swarm-optimised convolutional neural networks with a support vector algorithm, particle-optimised feed-forward back-propagation neural networks and particle-swarm-optimised radial-basis function networks.
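The four metrics above follow the standard confusion-matrix definitions (in particular, precision is TP / (TP + FP)) and can be computed directly; the function name and example counts below are ours.

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity/recall, specificity, precision and F-measure
    from confusion-matrix counts, per the standard definitions
    corresponding to equations (1)-(4)."""
    sensitivity = tp / (tp + fn)                  # eq. (1)
    specificity = tn / (tn + fp)                  # eq. (2)
    precision = tp / (tp + fp)                    # eq. (3)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)  # eq. (4)
    return sensitivity, specificity, precision, f_measure

# Illustrative confusion matrix: 90 TP, 80 TN, 20 FP, 10 FN.
sens, spec, prec, f1 = classification_metrics(90, 80, 20, 10)
```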
the results demonstrate that the performance metrics of the proposed hobdbnn are better than those achieved using the other above-mentioned methods. the overall prediction rate of the deep network of the proposed system is reported to be about 99.03 %. similarly, an ecg-based heart-disease recognition system is presented in [33]. this system measures the heart data using a commercially available device called the pulse sensor amped, which consists of a simple optical heart-rate sensor with amplification and noise-cancellation hardware components to collect noise-free heart-pulse readings. the collected data are then transmitted wirelessly to the mobile application via an arduino microcontroller. a monitoring algorithm is implemented in a mobile application to detect any variances from the normal heart rate. this mobile application also raises an alarm whenever an emergency occurs. the system is capable of predicting heart disease by using an intelligent classifier and a machine-learning algorithm, which are pre-trained using clinical data. the authors have reported a 100 % detection rate for the monitoring algorithm and an 85 % correct-classification rate for the classifier. in [34], an iomt-based low-power cardiovascular healthcare system with cross-layer optimisation from a sensing patch to a cloud platform is presented. it uses a wearable ecg patch with custom system-on-chip technology that is integrated with wireless connectivity to connect with mobile devices and a cloud platform. to measure and process the ecg signals, the sensing patch needs to be placed directly on the human body. the system performance is evaluated by first checking the signal denoising and compression capability and then by evaluating the correct disease-prediction rate using the mobile device and the cloud platform.
acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 177
to evaluate the signal denoising and compression capability of the proposed system, it is tested on the mit-bih database (an open-source dataset that provides standard investigation material for the detection of heart arrhythmia [35]). in particular, various types of noise (baseline drift noise, power-line noise and electromyography noise) are added to the dataset signals, and the proposed system is evaluated under three different scenarios: (i) signal denoising, (ii) signal compression and (iii) combined signal denoising and compression. in this context, some metrics are used, such as the denoised mean square error (mse), the denoised signal-to-noise ratio (snr, db), the percentage improvement in the mse, the percentage root mean square difference (prd), the signal compression ratio (cr) and the quality score (qs), which are defined as

$\mathrm{MSE} = \dfrac{1}{N_i}\sum_{n=1}^{N_i} [f_c(n) - f_r(n)]^2$ , (5)

$\mathrm{SNR} = 10 \log \left[ \dfrac{\sum_{n=1}^{N_i} f_c^2(n)}{\sum_{n=1}^{N_i} [f_c(n) - f_r(n)]^2} \right]$ , (6)

$\mathrm{PRD} = \sqrt{\sum_{n=1}^{N_i} [f_i(n) - f_r(n)]^2}$ , (7)

where n is the sample index, fc(n) is the noiseless reference signal, fr(n) is the reconstructed signal after denoising, fi(n) is the input signal and Ni is the total length of the input signal;

$\mathrm{CR} = N_i / M_\delta$ , (8)

$\mathrm{QS} = \mathrm{CR} / \mathrm{PRD}$ , (9)

where Mδ is the number of resolved coefficients after compression. in the case of signal denoising, the results show that the average improvement in snr is 12.63 db and the improvement in mse is 94.47 %. for signal compression, the results show that the average cr is 7.89, the average prd is 0.61 % and the average qs is 13.06. with regard to combined signal denoising and compression, the average improvement in mse and snr is 94.47 % and 12.63 db, respectively, and the average cr, prd and qs are 9.8, 6.14 % and 1.62, respectively.
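a minimal sketch of equations (5)-(9), assuming a base-10 logarithm for the snr in db and plain list inputs (an illustrative helper, not the authors' implementation):

```python
import math

def denoise_compress_metrics(f_c, f_r, f_i, m_delta):
    """Equations (5)-(9): MSE, SNR (dB), PRD, CR and QS for a
    denoised/compressed signal.
    f_c: noiseless reference, f_r: reconstructed, f_i: input signal;
    m_delta: number of coefficients kept after compression."""
    n_i = len(f_i)
    err2 = sum((c - r) ** 2 for c, r in zip(f_c, f_r))
    mse = err2 / n_i                                            # eq. (5)
    snr = 10 * math.log10(sum(c * c for c in f_c) / err2)       # eq. (6)
    prd = math.sqrt(sum((i - r) ** 2 for i, r in zip(f_i, f_r)))  # eq. (7)
    cr = n_i / m_delta                                          # eq. (8)
    qs = cr / prd                                               # eq. (9)
    return mse, snr, prd, cr, qs
```

a larger qs means the system compresses more aggressively for a given reconstruction distortion, which is why qs drops in the combined denoising-and-compression scenario reported above.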
in this case, compressed and denoised signals are directly generated with only one iteration of the whole system, which can improve the system efficiency at the cost of sacrificing signal performance (the prd increases and the qs decreases). to test the disease-prediction accuracy executed on the cloud platform and the mobile device, four kinds of ecg signal are analysed for arrhythmia detection [36]: (i) a normal ecg signal, (ii) a left bundle-branch block (lbbb), (iii) a right bundle-branch block (rbbb) and (iv) paced beats (pb). the five-fold cross-validation [36] results for classifying these four kinds of ecg signal, using the mobile device and the cloud platform, are presented. for a normal ecg, the correct classification rate is 96 %, for lbbb it is 98 %, for rbbb it is 100 % and for pb it is 94 %. the average correct classification rate of the proposed disease-prediction system executed on the cloud platform is 97 %, which is calculated as the mean of the correct detection values over all five folds [36].
3.2. iomt bms for internal body sounds
auscultation is a process for examining internal body sounds, such as those from the heart, lungs or other organs, for medical diagnoses. typically, a stethoscope is used to examine these sounds [46], which can help to detect abnormalities that occur in the human body and provide information about various diseases [46]. however, the auscultation process has some important limitations: it requires the doctor to have good hearing acuity and expertise in order to accurately detect abnormal sounds, improper placement of the stethoscope on the body results in the improper acquisition of sound, the patient should be in a relaxed position for correct sound acquisition and the presence of background noise can affect the process. in this regard, ai-based auscultation systems have proved to be very helpful for professionals/doctors in determining abnormal sounds [47], [48].
moreover, iot-based auscultation systems allow doctors to remotely monitor their patients and record the sounds, and they offer features for sharing information with other professionals to obtain an immediate second opinion [49]. in this paper, some recently developed iomt devices related to the measurement of internal body sounds are discussed. for example, ekuore pro [37] is an iot-based wireless stethoscope that acquires and measures internal body sounds through devices placed on the body. it can be connected to a mobile app using wi-fi to show phonograms in real time and keep track of the patient's medical history, which can be easily shared with professionals and doctors. however, the manufacturer has not provided any information on the accuracy of the system. another iomt device for the measurement of internal body sounds currently available on the market, stethee, is presented in [38]. it is an ai-based wireless stethoscope that uses aida technology (an ai- and machine-learning-based solution for analysing data [39]) to analyse the heartbeat and lung sounds acquired by placing the device on the body. it uses the stethee app, which displays clinical information such as heart rate, average systole, average diastole and respiratory rate in only 20 seconds, and it also allows the live streaming of sound data so that it can be visualised in real time for the easy evaluation of vital signs. although the manufacturer claims that the device has a number of powerful features, information about the measurement methods for the calculation of parameters and their accuracy is not provided. similarly, the iomt-based hd steth system [40] has been introduced by the hd medical group for cardiac auscultation.
hd steth is a medical stethoscope approved by the food and drug administration (fda), and it is composed of integrated ecg electrodes, four microprocessors, an ai-enabled detection system for detecting cardiovascular disease and a display screen to visualise phonocardiographs and electrocardiographs. it provides the real-time visualisation of cardiac waveforms via the bluetooth low energy (ble) mobile app, and patients can easily share data with specialists via a cloud platform for a remote diagnosis. this device has been patented with noise cancellation and smart amplification for high-fidelity auscultation. as with the previously mentioned devices, the manufacturer has not provided any information on measurement accuracy. another iot-based smart stethoscope, stethome [41], is a ce-certified intelligent medical device able to measure and classify abnormal sounds in the respiratory system, and it can remotely analyse other internal sounds in the body. stethome is able to connect to a smartphone via bluetooth, and the patient can place the device on the points indicated by the mobile app for the recording to start automatically. the mobile app then stores the medical history in the cloud, notifying patients if there are any abnormal sounds, and sends the examination results to a doctor, who will make effective decisions accordingly. it uses ai algorithms to verify the examination process, which makes, in the manufacturer's opinion, the stethome about 29 % more accurate than a specialist in the detection and classification of abnormal sounds. however, the accuracy evaluation process is not provided by the manufacturer, so there is no information on how the 29 % figure has been obtained. in [42], an iomt-based wireless digital stethoscope with mobile integration for sound auscultation is presented.
the system first acquires the sound signals from the human body by using a traditional stethoscope chest piece that has an integrated microcontroller unit and a bluetooth communication device. the acquired data are then processed and finally transmitted to a mobile device for recording, listening and the visual display of sound waveforms. this system can be used for monitoring patients in remote locations, especially in quarantine units, and can also be utilised for remotely training healthcare staff through the broadcasting of the recorded signals. however, the authors have not provided any information on the measurement accuracy of the system. in [43], an iomt-based novel cardiac auscultation monitoring system based on wireless sensing for healthcare is presented. in this system, the cardiac-sound auscultation sensing unit consists of two main components. the first is the hky-06b heart-sound sensor, manufactured by huake electronic, which converts the weak heart vibration signal into electrical signals. it also has integrated micro-sound components made from polymer materials, and it is capable of detecting all kinds of heart and acoustic sounds from the body's surface. the second component is the data acquisition module, consisting of cc2540 system-on-chip technology with an external antenna [44], an 8051-based microcontroller [45] and other auxiliary components. this module's main functions are analogue-to-digital conversion and bluetooth transmission. it uses the ble protocol to offer power efficiency and a moderate data transmission rate. the proposed system is used to monitor cardiovascular health, and the acquired information is sent to caregivers as well as medical practitioners using the iot network and an android mobile app. in particular, pre-processing, segmentation and clustering techniques are performed to gather any significant health information.
the system also features a hilbert–huang transform to reduce interference signals and help extract features of the first heart sound, s1, and the second heart sound, s2. in healthy people, s1 and s2 are produced by the closure of the atrioventricular valves and semilunar valves, respectively. the detection rate of the proposed system for s1 and s2 is 88.4 % and 82.7 %, respectively, and the overall detection rate of s1 and s2 for irregular heart sounds is 86.66 %, as reported in the article. in [46], an iomt-based smartphone auscultation system is presented. it is a low-cost stethoscope connected to a mobile phone that can record lung sounds and detect abnormal sounds from recorded data. the system uses a support vector machine to identify the sound of wheezes and crackles by extracting features from the spectrogram of each sound signal. the system is trained using recorded data consisting of lung sounds from 155 patients suffering from wheezes or crackles. the system is validated by evaluating the performance of the detection algorithms, taking into account the area under the curve (auc). the auc is calculated by plotting the receiver-operating-characteristic (roc) curve of the true positive rate against the false positive rate, as shown in figure 3. for the crackle-detector algorithm, the auc value is 0.87, and for the wheeze algorithm, the auc value is 0.71. in [49], the iot-based smartphone monitoring of a second-heart-sound split is presented. the heart sounds are recorded using a customised external microphone consisting of an acoustic stethoscope and a 3.5-mm mini-plug condenser microphone with an adapter, which connects wirelessly to a mobile app that records the heartbeat. the system detects s1 and s2 by converting the recorded heartbeat signal into the frequency domain using the fast fourier transform. s2 is then fed to a discrete wavelet transform (dwt) and a continuous wavelet transform to extract the aortic and pulmonic components.
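the auc values quoted above for the crackle and wheeze detectors come from integrating the roc curve; a generic threshold-sweep implementation (a sketch of the standard technique, not the code used in [46]) looks like:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve: sweep a decision threshold over the
    classifier scores and integrate TPR vs. FPR with the trapezoidal rule.
    labels are 1 (positive) or 0 (negative); assumes both classes occur."""
    pos = sum(labels)
    neg = len(labels) - pos
    # walk the samples ranked by descending score, accumulating TP/FP counts
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    tp = fp = 0
    auc = 0.0
    prev_fpr = prev_tpr = 0.0
    for _, label in ranked:
        if label:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / neg, tp / pos
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2  # trapezoid area
        prev_fpr, prev_tpr = fpr, tpr
    return auc
```

an auc of 1.0 corresponds to a detector that ranks every abnormal recording above every normal one, while 0.5 corresponds to chance-level ranking.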
the system in [49] can be very useful for remotely monitoring s2. however, these authors have also not provided any information on the measurement accuracy of the system.
3.3. iomt bms for blood pressure
high blood pressure (bp) is a serious issue that affects older people as well as young adults; it is important for patients to control their bp with repeated check-ups, otherwise serious conditions such as heart failure or strokes can occur. therefore, patients who suffer from hypertension need a bp check-up several times a day. iomt-based bp measurement systems can help to make this task easier for patients. an automatic bp measurement system using the oscillometric technique is presented in [50]. this system is capable of monitoring both systolic and diastolic pressure, which are used to define arterial bp. the values are continuously updated through wi-fi on a database that can be accessed remotely, where these data are compared with already existing data to improve the accuracy of the results. the authors have stated the accuracy of the system to be 7 mmhg. this accuracy has been calculated by means of standards or protocols (defined for bp measuring devices) that are based on the general consensus of several organisations, such as the us association for the advancement of medical instrumentation (aami), the british hypertension society and the european society of hypertension (esh), which are active working groups on bp monitoring, as well as the international organization for standardization (iso) [51]. however, these protocols and the accuracy of the oscillometric bp instrument continue to be the subject of discussion in the scientific field. this is due, for example, to the bp oscillometric device's tendency to provide inaccurate measurements for certain patient groups and to be prone to noise and artifacts.
figure 3. receiver operating characteristic (roc) curves for crackles and wheezes [46].
moreover, the difficulty in reproducing the adopted calibration methods [52]-[54] allows monitors to pass validation tests even when there are clinically significant differences in the estimated bp values for some individuals [52]. in [55], the qardio arm system is used to develop a smart bp measurement system, in which the acquired oscillometric data are transferred to a smartphone app for analysis and visualisation. the accuracy of the system has been evaluated by comparing its results with those of the omron m3 device, as the latter has been clinically validated according to the existing esh international protocol. however, the same concerns about the accuracy and calibration of oscillometric bp measurement expressed for the previous device exist for the qardio arm system. in [56], the omron heartguide is introduced. it is an fda-approved iomt smartwatch for bp measurement. this device can measure bp by using an inflatable cuff within the smartwatch bracelet. the smartwatch sends the data to the data integration layer via the internet, which then sends it on to the application service/presentation layer, where it is available for the doctor to access in real time. the measurement accuracy of the device is about 3 mmhg, but the validation of this device has also been carried out under the protocols for bp devices, with the same limitations reported above. similarly, the iomt-based instant blood pressure (ibp) auralife app is presented in [57] for bp measurement using a mobile phone and without the use of any external hardware. ibp auralife extracts the bp values from the photoplethysmogram (ppg) signal acquired with a flash led light and a mobile camera. the accuracy of the system is evaluated by using the aami/iso 81060-2:2013 protocol to compare ibp with reference upper-arm cuffless bp monitors and oscillometric blood-pressure cuff devices. a result analysis shows that the device's systolic bp mean accuracy is 2.7 mmhg and its diastolic bp mean accuracy is 2.6 mmhg.
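for context, validation under the aami/iso protocols mentioned above amounts to a statistical comparison of paired device and reference readings; under criterion 1 of iso 81060-2, the mean error must lie within ±5 mmhg with a standard deviation of at most 8 mmhg. a sketch of that check (thresholds as just stated; the readings are hypothetical):

```python
import math

# ISO 81060-2 criterion-1 thresholds (mean error and standard deviation, mmHg)
MAX_MEAN_ERROR_MMHG = 5.0
MAX_STD_DEV_MMHG = 8.0

def passes_criterion_1(device_mmhg, reference_mmhg):
    """Check paired device-vs-reference BP readings against the
    mean-error / standard-deviation thresholds above."""
    errors = [d - r for d, r in zip(device_mmhg, reference_mmhg)]
    n = len(errors)
    mean_err = sum(errors) / n
    # sample standard deviation of the paired errors
    std_err = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / (n - 1))
    return abs(mean_err) <= MAX_MEAN_ERROR_MMHG and std_err <= MAX_STD_DEV_MMHG
```

note that a device can achieve a small mean error while still failing on the standard deviation, which is exactly the "clinically significant differences in some individuals" problem discussed above.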
however, the system only delivers results at this level of accuracy for individuals whose bp lies in a specific range; therefore, it is not suitable for use by individuals whose bp falls outside the systolic range of 83–178 mmhg or the diastolic range of 58–107 mmhg. the manufacturer states that it is not recommended for medical use and is not a substitute for cuff-based or other bp monitors because it provides only an estimate of bp. another device, the asus vivowatch bp, is reported in [58]. this device has an ecg sensor on the back that receives an ecg signal from the wrist and an optical sensor on the front for the measurement of ppg signals from the index finger. these data are then automatically processed, and the results are displayed to the user. however, the manufacturer has not provided any information on the accuracy of the device.
3.4. iomt bms for brain diseases
many people are affected by brain diseases, such as brain tumours, dementia, headaches, brain strokes, chronic pains in the head, tourette's syndrome, alzheimer's, parkinson's and epilepsy. the development of iomt-based bms in the field of brain-related diseases is a promising solution for the monitoring of patients and the timely detection of such diseases. typical devices used in iomt-based bms for brain-related diseases are electroencephalogram (eeg) electrodes, smartwatches, galvanic skin response sensors and cameras. these devices are used to monitor brain-disease patients. for example, by using eeg electrodes, it is possible to measure eeg signals to monitor brain activities; with galvanic skin response sensors, the changes in sweat glands can be measured to monitor stress; and by using cameras, it is possible to monitor the daily physical activities of patients (e.g. those with neurodegenerative diseases). in this context, some iomt brain-related bms are used to monitor and measure vital signs (for example, stress levels and eeg signals) and can generate an alarm in the case of a crisis.
in [59], an iomt smart sensor for stress-level detection is presented. this is a novel stress-detection system called istress. it monitors stress levels by measuring parameters such as the rate of body motion, body temperature and levels of sweat during physical activity, using temperature and humidity sensors and a three-axis accelerometer. the collected data are then processed using a neural network based on a mamdani-type fuzzy logic controller. the proposed system is very efficient in terms of power consumption and allows the real-time remote monitoring of stress levels by transmitting the collected data to the cloud, thus helping to improve the detection of the patient's health status. the system classifies stress into three levels: low, normal and medium. the outputs of the sensors are fed to a fuzzy logic controller designed in matlab to detect stress levels. the authors report that this system has a stress-detection rate of 97 %. in [60], a high-definition camera is used to analyse the motion of patients with neurodegenerative diseases (nd). the system is based on remote video monitoring that measures the quantity and quality of two clinically relevant motor symptoms (impairment in step length and arm-swing angle). the system has been evaluated by the mean absolute error (mae), which gives an indication of how close the measurements are to the ground truth. for this evaluation test, a video of a healthy individual walking is recorded in two scenarios: (i) just walking and (ii) sitting on a chair, followed by standing up and walking. the camera is set up to capture the lateral view for the correct detection of the required nd parameters. a total of 23 valid step lengths and 10 arm-swing angles are recorded in both cases.
the ground-truth measurements are marked/annotated using the kinovea software package, which provides a set of tools to capture, slow down, study, compare, annotate and measure technical performance in a video [61]. the authors state that the system is able to measure nd parameters with a tolerance ranging from 2 % to 5 %. in [62], an iot-based system is reported that predicts parkinson's brain disorder. the system uses wearable iot-based deep brain stimulation (dbs) to collect patient brain activities and assess the condition of cells to predict brain functionality changes. dbs is a smart device that collects brain data by placing electrodes on individuals to conduct continuous monitoring. by means of a heuristic tubu-optimised sequence modular neural network (htsmnn), it is possible to predict the changes present in the human brain and its functions. to validate the proposed system, the authors have used the dataset in [63], which contains parkinson's disease-related information. the performance of the system is analysed using the mae, mse, precision, recall, classification accuracy (ca) and the auc, which are defined as

$\mathrm{MAE} = \dfrac{1}{N}\sum_{i=1}^{N} |y_i - \tilde{y}_i|$ , (10)

$\mathrm{MSE} = \dfrac{1}{N}\sum_{i=1}^{N} (y_i - \tilde{y}_i)^2$ , (11)

$\mathrm{CA} = \dfrac{TP + TN}{TP + TN + FP + FN}$ , (12)

where N is the number of parkinson's features, y is the actual output and ỹ the predicted output; tp is the true positive, tn the true negative, fn the false negative and fp the false positive, while precision and recall have already been defined in equations (3) and (1). the above system parameters are calculated by first calculating the system deviation to identify the errors present in the parkinson's disease recognition process. the deviation is computed by considering the difference between the actual and predicted output.
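a minimal sketch of equations (10)-(12), using the standard definitions with hypothetical values:

```python
def prediction_errors(y_true, y_pred):
    """Equations (10) and (11): mean absolute error and mean squared error
    between actual and predicted outputs."""
    n = len(y_true)
    mae = sum(abs(a - p) for a, p in zip(y_true, y_pred)) / n   # eq. (10)
    mse = sum((a - p) ** 2 for a, p in zip(y_true, y_pred)) / n  # eq. (11)
    return mae, mse

def classification_accuracy(tp, tn, fp, fn):
    """Equation (12): fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```

the mae weights all deviations equally, while the mse penalises large deviations more heavily, which is why both are reported together for the htsmnn.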
the htsmnn system deviation value is compared with those of other methods, such as particle-swarm-optimised neural networks, particle-swarm-optimised radial basis neural networks, genetic algorithm-based extreme machine-learning networks and tubu-optimised deep neural networks. after computing the above parameters, the authors report that the system ensures an mae equal to 0.284, an mse equal to 0.273 and a ca equal to 98.07 %. in [64], an iomt-based bms using a deep learning approach, called stress-lysis, is presented; it is used to measure stress levels. the deep learning system is developed and tested with three different datasets containing activities of daily living (adl) and physical activities. the adl data are collected with an accelerometer and humidity and temperature sensors worn on the wrist. a physical-activity monitoring dataset is developed by collecting 18 different activities from nine volunteers wearing three inertial measurement sensors and a heart-rate monitor. the system learns the stress-level parameters obtained by the wrist band, such as skin temperature, heart rate and sweat during physical activity. the authors have conducted the validation of the proposed solution by analysing the collected dataset using python and tensorflow. the dataset consists of 2,000 samples, of which 1,334 samples are used for training purposes and 667 for testing. the results are displayed in the form of a loss function, which demonstrates that the correct classification rate is in the range of 98.3 % to 99.7 %. in [9], a seizure-detection iomt system is demonstrated using the dwt, hjorth parameters and a k-nn classifier. this system is based on an iomt device called neuro-thing, which is capable of accurately detecting seizure-related diseases. in this method, eeg electrodes are used to acquire eeg signals, which contain information on the physiological state of the brain, in order to understand and monitor brain function.
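the neuro-thing pipeline extracts hjorth parameters (activity, mobility and complexity) from the acquired eeg; a sketch using their textbook time-domain definitions (not the authors' simulink implementation):

```python
def hjorth_parameters(x):
    """Hjorth activity, mobility and complexity of a 1-D signal --
    standard time-domain EEG features (textbook definitions)."""
    def variance(s):
        m = sum(s) / len(s)
        return sum((v - m) ** 2 for v in s) / len(s)

    dx = [b - a for a, b in zip(x, x[1:])]     # first difference (derivative)
    ddx = [b - a for a, b in zip(dx, dx[1:])]  # second difference
    activity = variance(x)                                    # signal power
    mobility = (variance(dx) / activity) ** 0.5               # mean frequency proxy
    complexity = (variance(ddx) / variance(dx)) ** 0.5 / mobility  # bandwidth proxy
    return activity, mobility, complexity
```

applied per dwt sub-band, these three numbers form a compact feature vector that a k-nn classifier can separate into normal, interictal and ictal classes.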
these eeg signals are decomposed into sub-bands using the dwt, and hjorth parameters are then extracted from the decomposed signals and classified using the k-nn method. the device is capable of sending the information to the iot cloud, where it can be accessed by doctors/physicians so that they can make effective decisions about the disease. in order to validate the proposed method, the authors perform system-level simulations implemented in a simulink environment, where the dwt and the k-nn classifier are developed as user-defined functions. the iomt implementation is done using thingspeak, an open data platform that enables iot applications to gather and analyse data in the cloud. in addition, the system is validated by experimental results in which open-source eeg data [11] are utilised to validate the classification capability of the k-nn model by calculating the sensitivity and specificity, as defined in equations (1) and (2). the reported results show 100 % classification accuracy for normal vs. ictal eeg and 97.9 % for normal and interictal vs. ictal eeg.
3.5. iomt bms for blood sugar disease
diabetes, or blood sugar disease, occurs when the human body is not able to properly process blood sugar [65]. generally, blood sugar is measured by determining the concentration of glucose in the blood. most devices are based on electrochemical technology, which uses electrochemical strips to perform measurements. there are some limitations to obtaining accurate measurements due to variance in strip manufacturing and the use of old or out-of-date strips. other limitations arise from environmental factors, such as temperature or altitude (in hilly areas), or from patient factors, such as improper handwashing [66]. patients suffering from this disease usually require their blood glucose levels to be checked regularly and need to manage their diet to keep the effects of this disease under control.
recent research focuses on using iomt to improve the sharing of measurement data with physicians and then to give prompt feedback to patients. an iomt system with a novel framework to measure and monitor glucose levels is presented in [65]. the system is used for remotely powered implantable glucose monitoring, in which the signal, retrieved from the interaction of radio-frequency signals with biological tissue, is first characterised and then monitored. a low-power bluetooth protocol is used for the transmission of measurement data to the user's mobile. however, the authors do not discuss the accuracy of the proposed system. in [67], an iomt-based system for glucose monitoring is presented. the article presents a non-invasive blood glucose measurement system based on optical detection and an optimised regression model. a system for light absorbance at a wavelength of 940 nm with a prediction model is designed, and the technique is validated through measurements taken from human fingertips. the evaluation of the method is performed by comparing the achieved results with reference blood glucose concentrations obtained using an sd-check one-touch glucometer. the results are evaluated by calculating the mean absolute difference, which is found to be about 5.82 mg/dl, while the mean absolute relative difference (mard) is 5.20 %, the average error (avge) is 5.14 % and the rmse is 7.50 mg/dl. test samples from 43 healthy people and diabetic patients are collected for a clarke error grid analysis, which is used to quantify the clinical accuracy of blood glucose devices with respect to reference values [68]. the overall results show that better measurements have been achieved using the proposed approach than using the non-invasive measurement methods presented in [69]-[71]. in [72], iglu 2.0 is presented. this is a new iomt-based wearable device that is used for measuring blood glucose levels.
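the agreement metrics used in these glucose studies (mean absolute difference, mard and rmse) can be sketched as follows, using standard definitions and hypothetical readings:

```python
import math

def glucose_agreement(ref, pred):
    """Agreement metrics for glucose estimates against a reference:
    mean absolute difference (MAD, mg/dl), mean absolute relative
    difference (MARD, %) and RMSE (mg/dl).
    ref/pred: reference and estimated glucose concentrations in mg/dl."""
    n = len(ref)
    mad = sum(abs(r - p) for r, p in zip(ref, pred)) / n
    mard = 100 * sum(abs(r - p) / r for r, p in zip(ref, pred)) / n
    rmse = math.sqrt(sum((r - p) ** 2 for r, p in zip(ref, pred)) / n)
    return mad, mard, rmse
```

the mard normalises each error by the reference concentration, so it weights an error at a hypoglycaemic reading more heavily than the same absolute error at a high reading, which is why it is the headline figure for glucose monitors.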
the device uses infrared spectroscopy and iomt paradigms for the remote access of data by doctors/users. an analysis of the optimised regression model is performed, and the system is validated on healthy, pre-diabetic and diabetic patients. robust regression models of serum glucose levels are deployed as the measurement mechanism for this proposed solution. in particular, a total of 50 different samples of capillary glucose and 37 samples of serum glucose are taken from pre-diabetic, diabetic and healthy people for the testing and analysis of the proposed iglu 2.0 device. the obtained results are then compared with the reference values for serum glucose obtained from a laboratory. in particular, the reference values for capillary glucose are measured using an invasive sd-check glucometer, which is the gold standard for validation purposes. in terms of capillary blood glucose, the avge is found to be about 6.07 % and the mard is 6.08 %, whereas for serum glucose, the avge is 4.88 % and the mard is 4.86 %. in [73], a reliable iot-based embedded healthcare system that uses the alaris 8100 infusion pump, a keil lpc-1768 board and an iot cloud to monitor diabetic patients is proposed. the infusion pump delivers medical liquid (insulin) to the patient on a timely basis, and the keil lpc-1768 board is responsible for delivering the control commands and daily patient readings and for providing a secure connection layer. the system is capable of storing patient records in the cloud, and a secure hash algorithm and secure socket shell are employed to achieve the reliability components of the proposed scheme. the article reports that the system has a 99.3 % probability of continuing to operate normally. the authors also claim that the proposed system design is reliable, secure and authentic in relation to security and privacy. however, the metrological aspects of the proposed system are not discussed.
4.
metrological iomt-bms challenges
there are several metrological issues related to iot-based bms that should be addressed when considering their large-scale implementation in the healthcare sector. as reported in the previous sections, several devices have been proposed by researchers in this field, but the majority of researchers do not focus on the device's metrological characterisation, and they provide questionable validation, calibration and/or accuracy parameters. the treatment and monitoring of diseases are based on the measurements provided by the devices used in iomt bms. if iomt bms are not investigated from a metrological perspective, there can be no certainty about their capacity for providing reliable, accurate, precise and repeatable measurements of vital signs, with the serious risk of delivering incorrect information, leading to the misdiagnosis or incorrect treatment of diseases [73]. therefore, it is important to consider all aspects relating to the measurement accuracy of the devices (measurement nodes) used in the system. in addition, it is essential that the measurement devices used in iomt bms are properly calibrated [75] using suitable reproducible procedures and references. there should be appropriate guidelines for the common user on the calibration process of the iomt device, or properly accredited laboratories that provide these services, either onsite or remotely. in practice, calibrations are usually made by switching the device on and off or by zeroing or resetting the device, which is not recommended [73]. due to the presence of various measurement parameters (i.e. pressure, flow, temperature, sound pressure), it is difficult to calibrate these devices.
moreover, one of the major challenges is that there is no general consensus among different laboratories (from different disciplines, such as pressure, flow, temperature, sound pressure) on how to regulate calibration traceability on a single platform for the different kinds of measurement system [76]. 5. conclusion iomt bms play an important role in the diagnosis of diseases, such as abnormal blood pressure, heart attacks, brain tumours, alzheimer's, parkinson's and epilepsy, as well as in healthcare monitoring, the monitoring of disease progression and biomedical research. the rapid growth of and increasing demand for iomt bms that suit the modern lifestyle make it essential for these systems to be accurate, fast, user friendly and comfortable for the wearer and to provide stability and accuracy even in harsh environments. based on these common requirements, scientists are trying to improve these bms and develop new solutions. the presented overview aims to stimulate research in this field and offer some highlights of iot-based bms for specific diseases. the paper has also highlighted some important challenges related to metrology in iomt that need to be addressed. addressing such issues will lay the groundwork for the development of new multidisciplinary approaches to the design of improved iomt systems and, thus, guarantee the continuous monitoring of human health, delivering accurate and reliable measurements. references [1] e. balestrieri, f. lamonaca, i. tudosa, f. picariello, d. luca carnì, c. scuro, f. bonavolontà, v. spagnuolo, g. grimaldi, a. colaprico, an overview on internet of medical things in blood pressure monitoring, proc. of the ieee int. symp. on medical measurements and applications (memea), istanbul, turkey, 26-28 june 2019, pp. 1-6. doi: 10.1109/memea.2019.8802164 [2] d. luca carnì, v. spagnuolo, g. grimaldi, f. bonavolontà, a. liccardo, r. s. lo moriello, a. colaprico, a new measurement system to boost the iomt for the blood pressure monitoring, proc.
of the ieee int. symp. on measurements & networking (m&n), catania, italy, 8-10 july 2019, pp. 1-6. doi: 10.1109/iwmn.2019.8805016 [3] d. l. carnì, d. grimaldi, p. f. sciammarella, f. lamonaca, v. spagnuolo, setting-up of ppg scaling factors for spo2% evaluation by smartphone, proc. of the ieee int. symp. on medical measurements and applications, benevento, italy, 15-18 may 2016, pp. 430-435. doi: 10.1109/memea.2016.7533775 [4] d. luca carnì, d. grimaldi, f. lamonaca, l. nigro, from distributed measurement systems to cyber-physical systems: a design approach, int. j. of computing 16 (2017) pp. 66-73. [5] f. lamonaca, c. scuro, d. grimaldi, r. s. olivito, p. f. sciammarella, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8 (2019) pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640 [6] e. balestrieri, p. daponte, l. d. vito, f. picariello, s. rapuano, i. tudosa, a wi-fi internet-of-things prototype for ecg monitoring by exploiting a novel compressed sensing method, acta imeko 9 (2020) pp. 38-45. doi: 10.21014/acta_imeko.v9i2.787 [7] i. tudosa, p. daponte, l. de vito, s. rapuano, f. picariello, f. lamonaca, a survey of measurement applications based on iot, proc. of the workshop on metrology for industry 4.0 and iot, metroind 4.0 and iot, brescia, italy, 16-18 april 2018, pp. 157-162. doi: 10.1109/metroi4.2018.8428335 [8] e. balestrieri, l. d. vito, f. lamonaca, f. picariello, research challenges in measurements for internet of things systems, acta imeko 7 (2018) pp. 82-94. doi: 10.21014/acta_imeko.v7i4.675 [9] i. ahmed, f. lamonaca, recent developments in iomt based biomedical measurement systems: a review, imeko tc4 2020, palermo, italy, 14-16 september 2020, pp. 23-28.
[10] m. a. sayeed, s. p. mohanty, e. kougianos, h. zaveri, a fast and accurate approach for real-time seizure detection in the iomt, proc. of the ieee int. smart cities conf., kansas city, usa, 16-19 september 2018, pp. 1-5. doi: 10.1109/isc2.2018.8656713 [11] r. g. andrzejak, k. lehnertz, f. mormann, c. rieke, p. david, c. e. elger, indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state, phys. rev. e 64 (2001) p. 061907. doi: 10.1103/physreve.64.061907 [12] h. c. koydemir, a. ozcan, wearable and implantable sensors for biomedical applications, annual rev. of analytical chemistry 11 (2018) pp. 127-146. doi: 10.1146/annurev-anchem-061417-125956 [13] p. saccomandi, e. schena, c. m. oddo, l. zollo, s. silvestri, e. guglielmelli, microfabricated tactile sensors for biomedical applications: a review, biosensors 4 (2014) pp. 422-448. doi: 10.3390/bios4040422 [14] y. xu, x. hu, s. kundu, a. nag, n. afsarimanesh, s. sapra, s. c. mukhopadhyay, t. han, silicon-based sensors for biomedical applications: a review, sensors 19 (2019) p. 2908. doi: 10.3390/s19132908 [15] j. yunas, b. mulyanti, i. hamidah, m. mohd said, r. e. pawinanto, w. a. f. wan ali, a. subandi, a. a. hamzah, r. latif, b. yeop majlis, polymer-based mems electromagnetic actuator for biomedical application: a review, polymers 12 (2020) p. 1184. doi: 10.3390/polym12051184 [16] i. ahmed, u. zabit, a.
salman, self-mixing interferometric signal enhancement using generative adversarial network for laser metric sensing applications, ieee access 7 (2019) pp. 174641-174650. doi: 10.1109/access.2019.2957272 [17] i. ahmed, u. zabit, fast estimation of feedback parameters for a self-mixing interferometric displacement sensor, proc. of c-code, islamabad, pakistan, 8-9 march 2017, pp. 407-411. doi: 10.1109/c-code.2017.7918966 [18] d. l. carnì, d. grimaldi, a. nastro, v. spagnuolo, f. lamonaca, blood oxygenation measurement by smartphone, ieee instrumentation & measurement magazine 20 (2017) pp. 43-49. doi: 10.1109/mim.2017.7951692 [19] y. kurylyak, k. barbe, f. lamonaca, d. grimaldi, w. van moer, photoplethysmogram-based blood pressure evaluation using kalman filtering and neural networks, proc. of the ieee int. symp. on medical measurements and applications (memea 2013), gatineau, qc, canada, 4-5 may 2013, pp. 170-174. doi: 10.1109/memea.2013.6549729 [20] f. lamonaca, k. barbe, y. kurylyak, d. grimaldi, w. v. moer, a. furfaro, v. spagnuolo, application of the artificial neural network for blood pressure evaluation with smartphones, proc. of the ieee int. conf. on intelligent data acquisition and advanced computing systems, berlin, germany, 12-14 september 2013, pp. 408-412. doi: 10.1109/idaacs.2013.6662717 [21] f. lamonaca, d. l. carnì, d. grimaldi, a. nastro, m. riccio, v. spagnolo, blood oxygen saturation measurement by smartphone camera, proc. of the ieee int. symp. on medical measurements and applications (memea 2015), pisa, italy, 7-9 may 2015, pp. 359-363. doi: 10.1109/memea.2015.7145228 [22] g. polimeni, a. scarpino, k. barbé, f. lamonaca, d. grimaldi, evaluation of the number of ppg harmonics to assess smartphone effectiveness, proc. of the ieee int. symp. on medical measurements and applications, lisbon, portugal, 11-12 june 2014, pp. 433-438. doi: 10.1109/memea.2014.6860101 [23] f. lamonaca, y. kurylyak, d. grimaldi, v.
spagnuolo, reliable pulse rate evaluation by smartphone, proc. of the ieee int. workshop on medical measurements and applications (memea 2012), budapest, hungary, 18-19 may 2012, pp. 234-237. doi: 10.1109/memea.2012.6226672 [24] heres arantes junqueira, types of wearable technology. online [accessed 21 may 2021] https://www.researchgate.net/figure/different-types-of-wearable-technology_fig5_322261039 [25] r. nayak, l. wang, r. padhye, electronic textiles for military personnel, in: electronic textiles: smart fabrics and wearable technology. t. dias (editor), woodhead publishing, 2015, pp. 239-256. [26] s. vishnu, s. r. j. ramson, r. jegan, internet of medical things (iomt) - an overview, proc. of icdcs, coimbatore, india, 5-6 march 2020, pp. 101-104. doi: 10.1109/icdcs48716.2020.243558 [27] e. balestrieri, l. de vito, f. picariello, i. tudosa, a novel method for compressed sensing-based sampling of ecg signals in medical-iot era, proc. of the ieee int. symp. on medical measurements and applications (memea), istanbul, turkey, 26-28 june 2019, pp. 1-6. doi: 10.1109/memea.2019.8802184 [28] f. ahmed, an internet of things (iot) application for predicting the quantity of future heart attack patients, int. j. comput. appl. 164 (2017) pp. 36-40. doi: 10.5120/ijca2017913773 [29] z. al-makhadmeh, a. tolba, utilizing iot wearable medical device for heart disease prediction using higher order boltzmann model: a classification approach, measurement 147 (2019) pp. 1-9. doi: 10.1016/j.measurement.2019.07.043 [30] g. bucci, f. ciancetta, e. fiorucci, a. fioravanti, a. prudenzi, an internet-of-things system based on powerline technology for pulse oximetry measurements, acta imeko 9 (2020) pp. 114-120. doi: 10.21014/acta_imeko.v9i4.724 [31] e. balestrieri, p. daponte, l. d. vito, f. picariello, s. rapuano, oscillometric blood pressure waveform analysis: challenges and developments, proc. of the ieee int. symp.
on medical measurements and applications (memea), istanbul, turkey, 26-28 june 2019, pp. 1-6. doi: 10.1109/memea.2019.8802175 [32] g. joyia, a. farooq, s. rehman, r. m. liaqat, internet of medical things (iomt): applications, benefits and future challenges in healthcare domain, j. commun. 12 (2017) pp. 240-247. doi: 10.12720/jcm.12.4.240-247 [33] a. f. otoom, e. e. abdallah, y. kilani, a. kefaye, m. ashour, effective diagnosis and monitoring of heart disease, int. j. of software engineering and its applications 9 (2015) pp. 143-156. doi: 10.14257/ijseia.2015.9.1.12 [34] c. wang, y. qin, h. jin, i. kim, j. g. d. vergara, c. dong, y. jaing, q. zhou, j. li, z. he, z. zou, l. r. zheng, x. wu, y. wang, a low power cardiovascular healthcare system with cross-layer optimization from sensing patch to cloud platform, ieee trans. on biomedical circuits and systems 13 (2019) pp. 314-329. doi: 10.1109/tbcas.2019.2892334 [35] a. goldberger, l. amaral, l. glass, j. hausdorff, p. c. ivanov, r. mark, h. e. stanley, physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals, circulation 101 (2000) pp. e215-e220. doi: 10.13026/c2108f [36] jason brownlee, a gentle introduction to k-fold cross-validation, 23 may 2018. online [accessed 28 june 2021] https://machinelearningmastery.com/k-fold-cross-validation/ [37] ekuore, ekuore pro electronic stethoscope. online [accessed 28 june 2021] https://www.ekuore.com/wireless-stethoscope/ [38] medicine, stethee pro. online [accessed 28 june 2021] https://www.stethee.com/ [39] tclab, artificial intelligence for document automation. online [accessed 28 june 2021] https://www.tclab.it/en/aida/ [40] hd medical group, intelligent stethoscope with integrated ecg.
online [accessed 28 june 2021] https://hdmedicalgroup.com/wp-content/uploads/2020/11/hd-steth-data-sheet.pdf [41] stethome. online [accessed 28 june 2021] https://stethome.com/ [42] d. k. degbedzui, m. tetteh, e. e. kaufmann, g. a. mills, bluetooth-based wireless digital stethoscope with mobile integration, biomedical engineering: applications, basis and communications 30 (2018) p. 1850010. doi: 10.4015/s1016237218500102 [43] h. ren, h. jin, c. chen, h. ghayvat, w.
chen, a novel cardiac auscultation monitoring system based on wireless sensing for healthcare, ieee j. of translational engineering in health and medicine 6 (2018) pp. 1-12. doi: 10.1109/jtehm.2018.2847329 [44] texas instruments, 2.4-ghz bluetooth low energy system-on-chip. online [accessed 28 june 2021] https://www.ti.com/lit/ds/symlink/cc2540.pdf [45] farnell, atmel 8051 microcontroller family product selection guide. online [accessed 28 june 2021] http://www.farnell.com/datasheets/46220.pdf [46] d. chamberlain, r. kodgule, y. thorat, v. das, v. miglani, d. ganelin, a. dalal, t. sahasrabudhe, a. lanjewar, r. fletcher, smart phone-based auscultation platform, european respiratory j. 48 (2016) suppl. 60. doi: 10.1183/13993003.congress-2016.oa2000 [47] d. g. mcnamara, value and limitations of auscultation in the management of congenital heart disease, pediatr. clin. north america 37 (1990) pp. 93-113. doi: 10.1016/s0031-3955(16)36834-1 [48] g. xavier, c. a. melo-silva, c. e. v. g. d. santos, v. m. amado, accuracy of chest auscultation in detecting abnormal respiratory mechanics in the immediate postoperative period after cardiac surgery, j. bras. pneumol. 45 (2019) pp. 1-8. doi: 10.1590/1806-3713/e20180032 [49] s. r. thiyagaraja, j. vempati, r. dantu, t. sarma, s. dantu, smart phone monitoring of second heart sound split, proc. of the 36th annual int. conf. of the ieee eng. in medicine and biology society, chicago, usa, 26-30 august 2014, pp. 2181-2184. doi: 10.1109/embc.2014.6944050 [50] a. varghese, m. raghvan, n. s. hegde, n. t. prathiba, m. ananda, iot based automatic blood pressure system, int. j. of science & research (2015) pp. 1-3. online [accessed 28 june 2021] https://www.ijsr.net/conf/rise2017/ijsr1.pdf [51] g. s. stergiou, b. alpert, s. mieke, r. asmar, n. atkins, s. eckert, g. frick, b. friedman, t. graßl, t. ichikawa, j. p. ioannidis, p. lacy, r. mcmanus, a. murray, m. myers, p. palatini, g. parati, d. quinn, j. sarkis, a. shennan, t. usuda, j. wang, c.
o. wu, e. o'brien, a universal standard for the validation of blood pressure measuring devices, hypertension 71 (2018) pp. 368-374. doi: 10.1161/hypertensionaha.117.10237 [52] e. balestrieri, s. rapuano, instruments and methods for calibration of oscillometric blood pressure measurement devices, ieee trans. on instrumentation and measurement 59 (2010) pp. 2391-2404. doi: 10.1109/tim.2010.2050978 [53] e. balestrieri, p. daponte, s. rapuano, automated noninvasive measurement of blood pressure: standardization of calibration procedures, proc. of the ieee int. workshop on medical measurements and applications, ottawa, canada, 9-10 may 2008, pp. 124-128. doi: 10.1109/memea.2008.4543012 [54] e. balestrieri, p. daponte, s. rapuano, towards accurate nibp simulators: manufacturers' and researchers' contributions, proc. of the ieee int. symp. on medical measurements and applications (memea), gatineau, canada, 4-5 may 2013, pp. 91-96. doi: 10.1109/memea.2013.6549713 [55] qardio. online [accessed 28 june 2021] https://www.getqardio.com/ [56] manualslib, omron heartguide bp8000-m instruction manual, page 67. online [accessed 28 june 2021] https://www.manualslib.com/manual/1501475/omron-heartguide-bp8000-m.html?page=67#manual [57] instant blood pressure app. online [accessed 28 june 2021] https://www.instantbloodpressure.com/ [58] asus. online [accessed 28 june 2021] https://www.asus.com [59] l. rachakonda, p. sundaravadivel, s. p. mohanty, e. kougianos, m. ganapathiraju, a smart sensor in the iomt for stress level detection, proc. of the ieee int. symp. on smart electronic systems (ises), hyderabad, india, 17-19 december 2018, pp. 141-145. doi: 10.1109/ises.2018.00039 [60] b. abramiuc, s. zinger, p. h. n. de with, n. de vries-farrouh, m. m. van gilst, b. bloem, s. overeem, home video monitoring system for neurodegenerative diseases based on commercial hd cameras, proc. of the 5th icce, berlin, germany, 6-9 september 2015, pp. 489-492.
doi: 10.1109/icce-berlin.2015.7391318 [61] kinovea software. online [accessed 25 june 2021] https://www.kinovea.org/ [62] a. ali al zubi, a. alarifi, m. al-maitah, deep brain simulation wearable iot sensor device based parkinson brain disorder detection using heuristic tubu optimized sequence modular neural network, measurement 161 (2020) p. 107887. doi: 10.1016/j.measurement.2020.107887 [63] g. m. mashrur e. elahi, s. kalra, l. zinman, a. genge, l. korngut, y.-h. yang, texture classification of mr images of the brain in als using m-cohog: a multi-center study, comput. med. imaging graph. (2020) p. 101659. doi: 10.1016/j.compmedimag.2019.101659 [64] l. rachakonda, s. p. mohanty, e. kougianos, p. sundaravadivel, stress-lysis: a dnn-integrated edge device for stress level detection in the iomt, ieee trans. on consumer electronics 65 (2019) pp. 474-483. doi: 10.1109/tce.2019.2940472 [65] m. ali, l. albasha, h. al-nashash, a bluetooth low energy implantable glucose monitoring system, proc. of the 8th european radar conf., manchester, uk, 12-14 october 2011, pp. 377-380. [66] b. h. ginsberg, factors affecting blood glucose monitoring: sources of errors in measurement, j. of diabetes science and technology 3 (2009) pp. 903-913. doi: 10.1177/193229680900300438 [67] p. jain, s. pancholi, a. m. joshi, an iomt based non-invasive precise blood glucose measurement system, proc. of ises (formerly inis), rourkela, india, 16-18 december 2019, pp. 111-116. doi: 10.1109/ises47678.2019.00034 [68] w. l. clarke, d. cox, l. a. gonder-frederick, w. carter, s. l. pohl, evaluating clinical accuracy of systems for self-monitoring of blood glucose, diabetes care 10 (1987) pp. 622-628. doi: 10.2337/diacare.10.5.622 [69] k. song, u. ha, s. park, j. bae, h. yoo, an impedance and multiwavelength near-infrared spectroscopy ic for non-invasive blood glucose estimation, ieee j. of solid-state circuits 50 (2015) pp. 1025-1037. doi: 10.1109/jssc.2014.2384037 [70] n. demitri, a. m.
zoubir, measuring blood glucose concentrations in photometric glucometers requiring very small sample volumes, ieee trans. on biomedical engineering 64 (2017) pp. 28-39. doi: 10.1109/tbme.2016.2530021 [71] g. acciaroli, m. vettoretti, a. facchinetti, g. sparacino, c. cobelli, reduction of blood glucose measurements to calibrate subcutaneous glucose sensors: a bayesian multiday framework, ieee trans. on biomedical engineering 65 (2018) pp. 587-595. doi: 10.1109/tbme.2017.2706974 [72] a. m. joshi, p. jain, s. p. mohanty, n. agrawal, a new wearable for accurate non-invasive continuous serum glucose measurement
in iomt frameworks, ieee trans. on consumer electronics 66 (2020) pp. 327-335. doi: 10.1109/tce.2020.3011966 [73] z. a. al-odat, s. k. srinivasan, e. m. al-qtiemat, s. shuja, a reliable iot-based embedded health care system for diabetic patients, int. j. on advances in internet technology (2019) pp. 50-60. online [accessed 28 june 2021] http://www.iariajournals.org/internet_technology/inttech_v12_n12_2019_paged.pdf [74] b. karaböce, challenges for medical metrology, ieee instrumentation & measurement magazine 23 (2020) pp. 48-55. doi: 10.1109/mim.2020.9126071 [75] e. balestrieri, s. rapuano, instruments and methods for calibration of oscillometric blood pressure measurement devices, ieee trans. on instrumentation and measurement 59 (2010) pp. 2391-2404. doi: 10.1109/tim.2010.2050978 [76] m. do céu ferreira, the role of metrology in the field of medical devices, int. j. metrol. qual. eng. 2 (2011) pp. 135-140.
doi: 10.1051/ijmqe/2011101

a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors

acta imeko, issn: 2221-870x, december 2022, volume 11, number 4, pp. 1-10

giorgia ghiara1, christophe maniquet2, maria maddalena carnasciali1, paolo piccardo1

1 department of chemistry and industrial chemistry (dcci), university of genoa, via dodecaneso 31, 16146 genova, italy
2 inrap limoges, 18 allée des gravelles, 87280 limoges, france

section: research paper

keywords: corrosion morphology; sn bronze; microstructure; tentacle like corrosion; mic

citation: giorgia ghiara, christophe maniquet, maria maddalena carnasciali, paolo piccardo, a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors, acta imeko, vol. 11, no. 4, article 11, december 2022, identifier: imeko-acta-11 (2022)-04-11

section editor: tatjana tomic, industrija nafte zagreb, croatia

received april 14, 2022; in final form july 14, 2022; published december 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: giorgia ghiara, e-mail: giorgia.ghiara@gmail.com

1.
introduction archaeological objects constitute our main source of information for studying the corrosion behaviour of copper-based alloys over long timespans. ancient metals buried in soil or left in waterlogged environments for centuries give the most reliable data on how to model and thus predict the corrosion behaviour of modern alloys [1]-[4]. when studying the overall corrosion process, it is clear that the environment itself must be considered. much research has already examined the variety of environments and corrosion features occurring on sn-bronzes [5]-[10], pointing out the importance of water (the electrolyte in the corrosion process) and the nature and concentration of dissolved salts in the solution. the morphology, composition, and physical and chemical features of the corroded layer derive from the interaction with these corrosive agents [11]-[13]. the rate and morphology of the attack are thus affected by i) the anodic or cathodic processes that promote the growth of salts or the formation of complexing ions; ii) the evolution of the moving equilibria in the anodic or cathodic direction; iii) the conductivity of the solution [14], [15]. corrosion in soil can be more challenging since, in addition to water, climatic parameters such as rainfall, wind, sunlight, temperature and microbiological activity influence the composition and the properties of the system and allow different types of mechanisms to occur, one after the other or even simultaneously [16]. nevertheless, the environment is not the only parameter that scientists must consider when dealing with corrosion processes of sn-bronzes. the composition and the microstructural features of the matrix become prominent when alloys are exposed to the same environmental conditions.
abstract: a categorization of corrosion morphologies of archaeological sn bronzes was carried out on iron age objects. the objects come from a celtic deposit located in central france (tintignac, corrèze) and are dated between the 2nd and 3rd cent. bc. since the samples of corroded metal come from a single find spot, parameters connected to the features of the alloy and known to influence corrosion morphologies could be thoroughly considered. global processes were highlighted, and corrosion mechanisms were characterised with a multi-analytical protocol (sem-eds, micro-raman spectroscopy, image analyses) according to the detected morphology. elaboration of the results was carried out with a multicomponent approach. the results show the presence of 5 different morphologies correlated to the alloy characteristics of the objects. alloy composition, microstructure, degree of deformation and grain size were found to influence the corrosion products formed and the morphology of the attack. in particular, the 'tentacle like corrosion', associated with a microbial attack, was the most susceptible to the effect of metallurgical features: its occurrence is connected to a more massive presence of fe and pb in the alloy, a homogeneous deformation and a larger grain size.

focusing on the composition, corrosion properties such as nobility, passivity or cathodic and anodic processes are related to the influence of the alloying elements and their quantity inside the matrix [14]-[16]. from a historical point of view, typical alloying elements are as, zn and sn, the latter providing the best corrosion resistance [17]-[20]. furthermore, the quantity of the alloying element inside the solid solution can be beneficial or detrimental to the corrosion resistance of the material.
research was undertaken in the past on the performance of single-phase cu-sn alloys and on how large additions of sn to pure cu promote the growth of a passive film on the surface [17]-[19]. likewise, it is well known that a high zn content in brasses promotes a dezincification mechanism [20]. however, corrosion can also be triggered by metallurgical features such as heterogeneities, impurities, secondary phases, defects or grain orientation [14]. the presence of impurities or defects can cause higher corrosion rates by creating preferential areas of corrosion [16], while heterogeneities inside the solid solution or intermetallic phases can lead to a selective type of corrosion. the presence of detrimental secondary phases was also described in recent studies [21]. generally, the action of each parameter is reflected in a specific corrosion morphology that is categorized accordingly [7], [10], [22]-[26]. various corrosion morphologies are presented in the literature and are generally correlated to the work of robbiola et al. [7]. in their work, the authors defined two types of corrosion features for archaeological tin-bronzes in soil, according to the preservation of the original surface, which allows the original shape of the artefact to be defined. these models were obtained from a statistical study of sn-bronzes with a sn composition ranging from 4 to 23 wt. % and coming from the same excavation site. both corrosion morphologies derive from an electrochemical process of oxidation of the metal (anode) and reduction of oxygen (cathode). after the build-up of a corrosion layer, the mechanism further develops through the mobility of cationic and anionic species, which interact with each other within the metal-soil system [7]. however, the influence of metallographic features received less attention, and typical metallurgical parameters such as alloy composition, microstructure, and fabrication and/or finishing techniques were not taken into account.
this study aims to fill that gap for archaeological materials by considering the metallurgical parameters that play an important role in the corrosion mechanism of artifacts buried in soil. the archaeological objects come from a recently excavated deposit (tintignac, corrèze) located in the central part of france, from which it was possible to obtain the information necessary to properly interpret the corrosion behaviour without external contaminations. 2. materials and methods 2.1. archaeological context a celtic deposit was discovered around tintignac (naves, department of corrèze) in the southwest of central france and dated between the 2nd and 3rd cent. bc (figure 1). different bronze and iron objects related to warfare were deposited in a pit in the north-western area of a ritual or cultic place (fanum). most of the objects were intentionally destroyed before deposition. the square pit measured around 1 m² with a depth of around 30 cm, and around 60 metallic objects were discovered: defensive armour (shields and helmets), war trumpets (karnykes) and a wide variety of other artifacts. to minimize the number of variables in the study, only binary bronzes with no secondary phases are taken into consideration, and only composition and microstructural features are analysed. 2.2. methodology small fragments suitable for metallurgical investigations were sampled. from each fragment/object, one to three samples were taken. after a preliminary visual observation to verify the surface characteristics and preservation, samples were mounted in cold resin and subsequently polished with diamond suspensions of decreasing grain size down to 1 μm, in agreement with the astm e3-01 standard procedure. the protocol was designed to characterise the objects through metallurgical investigations of the metallic matrix and spectroscopic analyses of the corrosion layers. objects were studied and categorized according to their morphology, composition and microstructural features. 2.3.
analytical techniques a preliminary characterisation was performed prior to metallographic etching by light optical microscopy (lom; mef4 m; leica microsystems, buffalo grove, il, usa) using bright-field (bf) and dark-field (df) contrast methods. the latter is particularly suitable for detecting corrosion products and allows for an exhaustive detection of the mineralized areas [27]. the microstructural features of the metallic matrix and the morphology of the corrosion process were documented by lom and scanning electron microscopy (sem; zeiss evo40; carl zeiss, oberkochen, germany). chemical analyses of the metallic substrate and the corroded layers were performed with energy-dispersive x-ray spectroscopy (edxs; cambridge inca 300 with pentafet edxs detector (oxford instruments, oxfordshire, u.k.), sensitive to light elements, z > 5) connected to the sem. the eds was previously calibrated on a cobalt standard.

figure 1. different layers of the celtic deposit discovered around tintignac (naves, department of corrèze, france) and dated between the 2nd and 3rd cent. bc. (a) upper level; (b) middle level; (c) lower level.

this procedure allowed reliable values to be obtained for all elements with atomic number z ≥ 11, while for lighter elements like oxygen the analysis was considered semiquantitative. amounts below 0.2 wt. % were considered semiquantitative measurements and were evaluated only when the identification peaks were clearly visible in the acquisition spectrum. the reported compositions of the alloys correspond to the average of at least five measurements on non-corroded areas. all results on the alloys were normalized and presented in average weight percent. afterwards, chemical etching was performed on the samples with a solution of fecl3 (5 g) diluted in hcl (50 ml) and ethanol (200 ml) to reveal their microstructural features.
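the averaging and normalization step described above (at least five eds point measurements per sample, with the mean composition normalized to 100 wt. %) can be sketched as follows; the point analyses below are made-up values for a binary cu-sn alloy, for illustration only:

```python
# Average several EDS point measurements per sample, then normalize
# the mean composition so the reported values sum to 100 wt. %.

def mean_normalized_composition(measurements):
    """measurements: list of {element: wt. %} dicts; returns normalized mean."""
    elements = measurements[0].keys()
    mean = {el: sum(m[el] for m in measurements) / len(measurements)
            for el in elements}
    total = sum(mean.values())
    return {el: 100.0 * v / total for el, v in mean.items()}

# Five hypothetical point analyses of a binary Cu-Sn alloy:
points = [{"Cu": 87.1, "Sn": 12.2}, {"Cu": 86.8, "Sn": 12.6},
          {"Cu": 87.5, "Sn": 12.0}, {"Cu": 86.9, "Sn": 12.4},
          {"Cu": 87.2, "Sn": 12.3}]
print(mean_normalized_composition(points))
```

averaging before normalizing smooths point-to-point scatter from local inhomogeneities, which is why the protocol prescribes several measurements on non-corroded areas rather than a single spot.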
to estimate the grain size, the grain boundary intersection count procedure was carried out according to the astm e-112 standard. the degree of deformation of the alloy was calculated following the equations from [28], [29]. the shape factor of the inclusions (sf) can be computed from their length/width ratio and correlated with the stress absorbed by the material. the deformation applied, resulting in a reduction of thickness (d %), is described as:

d% = (thick_0 − thick_end) / thick_0 · 100 % , (1)

where thick_0 and thick_end are the initial and final thickness, respectively. the equation was then further developed by correlating the sf of the inclusions to the deformation applied:

d% = (sf − ∛sf) / sf · 100 % , (2)

thick_0 = thick_end · sf^(2/3) . (3)

to determine the nature of the corrosion products, micro-raman spectroscopy (μrs; renishaw raman system 2000, renishaw, inc., hoffman estates, il, usa) was also employed. the instrument was equipped with a peltier-cooled charge-coupled device (ccd) detector and excitation was provided by a 632.8 nm he-ne laser with a spectral resolution of 1 cm⁻¹. to estimate the thickness of the corroded layers and the percentage of sound metal in the sample, image analysis software (fiji-imagej, version 1.49b) was used on 100× lom micrographs [30]. a statistical procedure based on principal component analysis (pca) was also performed to highlight correlations between the variables and possible clustering of the samples. the original data matrix was decomposed into two smaller matrices: the loadings, which can be interpreted as the weights of each original variable in the pc computation, and the scores, which contain the original data in the newly rotated coordinate system [31]. a biplot was also created with the positions of both the loadings (variables) and the scores (samples) in the new coordinate system.

3. results and discussion

3.1.
metallurgical characterisation

all objects are made of sn-bronze, with sn as the major alloying element and as, ni, co, fe and pb as minor elements. figure 2 shows the frequency distribution of the objects according to the element considered (sn, fe, as, pb, ni, co). sn ranges from 6 wt.% to 16 wt.%, with the highest occurrence (more than 80 % of the samples) found in the interval 10 wt.% to 14 wt.%, confirming the high technological skills of the celtic culture [32]. the concentrations of the minor elements as, ni, co, fe and pb are below 1 wt.% (often even below 0.2 wt.%), with variations in the shape of the distribution according to the element. fe, pb, co and ni show a very high frequency in the range below 0.2 wt.%, which is close to the limit of detection (lod) of the eds. this outcome should be considered semiquantitative, and more sensitive analytical techniques are needed to confirm the distribution curves proposed in this study. as, on the other hand, showed experimental values following a gaussian-like curve, with a distribution centred at 0.3 wt.%. a microstructural investigation was performed through lom and sem, and figure 3 displays the features observed. the samples are characterised by a homogeneous microstructure with the presence of an α phase (figure 3a). polygonal grains, typical of recrystallization annealing after deformation, are observed, with twinning and slip lines consistent with a last step of cold deformation (figure 3b).

figure 2. frequency distribution of major and minor alloying elements detected on the whole number of samples: (a) sn; (b) as; (c) fe; (d) pb; (e) ni; (f) co.

figure 3. microstructures observed on the objects. lom micrographs of: (a) polygonal grains typical of recrystallization annealing; (b) twinning and slip lines; sem-bse micrographs highlighting the composition of inclusions: (c) pb; (d) cu2-xfexs2.
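the frequency distributions above can be reproduced with a simple binning step and a gaussian-like summary of the as data; a minimal sketch, in which the concentration values and the bin width are synthetic placeholders, not the measured dataset:

```python
# sketch: frequency distribution of an alloying element (cf. figure 2)
# and a gaussian-like summary as used for the as data. all values below
# are illustrative placeholders, not measured concentrations.
from collections import Counter
from statistics import mean, stdev

def frequency_distribution(values, bin_width):
    """bin concentrations (wt.%) into intervals of the given width."""
    bins = Counter(int(v / bin_width) for v in values)
    return {round(k * bin_width, 3): n for k, n in sorted(bins.items())}

as_wt_pct = [0.25, 0.31, 0.28, 0.35, 0.30, 0.27, 0.33, 0.29]  # placeholders
hist = frequency_distribution(as_wt_pct, bin_width=0.05)
mu, sigma = mean(as_wt_pct), stdev(as_wt_pct)  # centre and spread of the curve
```

with real eds data, `mu` would play the role of the 0.3 wt.% centre reported for as, while the histogram reproduces the panels of figure 2.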
some residual heterogeneities of the solid solution appeared in some samples, suggesting that the annealing temperature was not high enough (above 2/3 of the melting temperature of the alloy) to remove the residues of heterogeneity left after casting. inclusions are made of fe and cu sulfides (cu2-xfexs2) and pure pb islands, given the immiscibility of lead with the cu-sn solid solution (figure 3c and d). following equations (1) to (3), the d % was calculated. a deformation between 15 % and 90 % was statistically associated with the microstructure, and a frequency plot (not shown) evidences a distribution of the d % skewed towards higher values (peak at 80 %). this outcome must however be considered as a function of the grain size and of the percentage of sn inside the solid solution. the grain size varies between 6 µm and 50 µm for m1 (not shown), according to the percentage of sn in the alloy.

3.2. corrosion morphology

the corrosion features observed on the range of samples are typologically divided into 5 groups, labelled with the letters a, b, c, d and e (see figure 4, table 1). they are classified according to i) the nature and features of the layers (e.g., composition, heterogeneity, thickness) and ii) the features of penetration (e.g., degree, path), and are summarised in table 1. each corrosion morphology is connected to a specific corrosion mechanism. corrosion is a slow process that follows a logarithmic trend [5] (not considering localized types of corrosion), and the presence of sn as the major alloying element helps create a mixed sn and cu passive oxide, whose degree of protectiveness is correlated with the amount of sn in the alloy [17]-[19]. the classical archaeological corrosion (type a or ii, table 1), which has already been described by many authors [7], [33]-[36], is also found in this study.
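the deformation estimate from equations (1) to (3) in section 2.3 can be sketched in a few lines; a minimal sketch in which the function names are illustrative, and sf is taken as the length/width elongation of an inclusion (sf ≥ 1) so that d % is non-negative:

```python
# sketch of equations (1)-(3): estimating the degree of deformation
# (d %) from the shape factor (sf) of the inclusions. function names
# are illustrative; sf >= 1 is the elongation of an inclusion.

def deformation_from_thickness(thick0: float, thick_end: float) -> float:
    """equation (1): d% = (thick0 - thick_end) / thick0 * 100."""
    return (thick0 - thick_end) / thick0 * 100.0

def deformation_from_sf(sf: float) -> float:
    """equation (2): d% = (sf - sf**(1/3)) / sf * 100."""
    return (sf - sf ** (1.0 / 3.0)) / sf * 100.0

def initial_thickness(thick_end: float, sf: float) -> float:
    """equation (3): thick0 = thick_end * sf**(2/3)."""
    return thick_end * sf ** (2.0 / 3.0)
```

note that the three equations are mutually consistent: substituting equation (3) into equation (1) reproduces equation (2), so an undeformed inclusion (sf = 1) gives d % = 0.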
from the micro-raman analysis (figure 5a), corrosion products such as cuprite (cu2o), cassiterite (sno2) and a mixed form of sn and cu oxides (black spectrum), malachite (cu2co3(oh)2, red spectrum), and azurite (cu3(co3)2(oh)2) were detected. impurities such as si and al are detected by sem analyses and are responsible for the small shifts of the peaks of some of the compounds [33]. the penetration is intergranular and follows the short-circuits of diffusion [15], where the grain boundaries are the anode and the matrix is the cathode. in those areas the dissolution of sn is promoted, owing to its lower standard electrode potential [37], and sn precipitates in the form of hydroxides and oxides. the formed layer is porous and permeable, so oxygen can diffuse inward, allowing for the oxidation of cu. a mixed layer of cu and sn oxides is therefore formed (figure 4a; figure 5, black spectrum; figure 6a, inner layer of corrosion products, or il). basic cu carbonates are formed in the outer layer (ol) due to the presence in the soil of co2 or of carbonate ions in quantities above 200 ppm [38]. a decuprification phenomenon is also observed in the eds analyses, as well as diffusion processes of minor elements through the corrosion layers [39]-[41]. figure 6a displays the line-scan profile performed on a representative sample, evidencing the interaction of the alloy/oxide/environment system. calculations of the sn:cu atomic ratio along the profile indicate that the values increase in the il up to a maximum of 2.7, which implies that roughly every 3 atoms of sn in the oxide are balanced by 1 atom of cu. as, pb and fe enrich the cu-sn mixed oxide up to 3-4 times their relative value in the alloy (e.g., from 0.3 wt.% to 1 wt.% for as). this suggests that the diffusion coefficients of those elements are

table 1. description of the corrosion morphologies identified in the study.
type a: the so-called classical archaeological bronze corrosion mechanism in the presence of water; intergranular corrosion; red, yellowish-orange, green and blue are the predominant colours. penetration: short-circuits of diffusion (e.g., grain boundaries, slip bands, mechanical twins); interdiffusion of ions from both the metallic substrate (cu, pb, as, ni, fe) and the soil (si, al, fe). corrosion products: cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.

type b: archaeological corrosion mechanism in which intergranular corrosion is visible in the inner part of the corrosion layers or just on one side of the sample; compact and thick layers of different colours are visible on the other side or on the entire surface; predominant colours, besides those of the classical corrosion morphology: brownish green, yellow, brown, grey. penetration: corrosion penetrates inward as a compact and thick layer, or inside cracks or preferential areas of the sample, sometimes with a waterfall effect. corrosion products: uncommon corrosion features identified as sno2·2h2o and sn(oh)4, as well as cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.

type c: globular-shaped corrosion products inside compact layers of different thickness; predominant colours, besides those of the classical corrosion morphology: brownish green, yellow, brown, white, grey. penetration: corrosion penetrates inside the matrix, similar to a macro-pitting mechanism, over entire areas of the sample. corrosion products: corrosion features in globular shapes identified as sno2·2h2o and sn(oh)4; cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.

type d: 1-1.5 μm wide tunnel-shaped structures; the tunnels are never completely straight but bend, divide and cross each other, resembling roots or tentacles. penetration: does not follow any microstructural feature such as grain or twinning boundaries or slip bands due to cold deformation; penetrates directly into the crystal without any regard for the short-circuits of diffusion. corrosion products: mixed forms of tin and copper oxides, with a predominance of tin-based oxides; sno2·2h2o and sn(oh)4 are also detected.

type e: mixtures of the previously described corrosion morphologies, in which many parameters influenced the corrosion mechanism over relatively long timespans. penetration: as types a, b, c, d. corrosion products: as types a, b, c, d.

high enough to migrate inside the corroded layers. fe shows high concentrations inside the ol (up to more than 10 times its content in the alloy), probably related to its diffusion from the soil. type b corrosion has also been detected elsewhere [7] (figure 4b, table 1). a compact layer is found at the interface with the metallic matrix. figure 5b summarises some of the corrosion products found for this peculiar corrosion morphology: an il of cu2o and sn(oh)4 (green spectrum), followed by sno2·2h2o. malachite (cu2co3(oh)2) or azurite (cu3(co3)2(oh)2) forms in the ol. the sn oxides, which are thermodynamically stable and have a low solubility limit as doping elements [42], [43], slow down the process. a more severe decuprification phenomenon is detected, with sn:cu values ranging from 0.2 at the metal/oxide interface to 2 at the edge. however, no enrichment of minor elements was detected in the corrosion layers. a variation of the type b corrosion, type c, shows preferential areas of penetration. the corrosion layer is present along cracks, sometimes with a “waterfall” effect (figure 4c, table 1). the corrosion products are the same as those previously mentioned for type b.
the sn:cu value increases from the interface metal/oxide to the edge, as displayed by the line-scans (figure 6b), up to 3, which could possibly be connected to a further evolution of the burial context (possibly a lack of oxygen) [15]. pitting can potentially explain the phenomenon, being a localized mechanism in which cathodes and anodes are different areas of the sample [14]. however, it is very difficult to detect what triggered the process: local ruptures of the protective sn-rich passivation film due to foreign bodies, macro-heterogeneities of the film, or areas of accumulation of aggressive ions such as cl- [15]. the type d or “tentacle-like” corrosion [44] does not follow the above-described mechanism but penetrates directly into the crystal without any regard to the short-circuits of diffusion (figure 4d, table 1). the morphology shows a tunnel-like structure entering the crystal with apparently irregular directions. this type of corrosion and its features have led to the hypothesis of a microbial influence on the mechanism, and the attribution to microbiologically influenced corrosion (mic) was proven and discussed in more detail elsewhere [45]-[47]. a decuprification process did not occur, as displayed by the line-scan in figure 6c, since the kα value of cu does not decrease linearly but fluctuates along the corrosion layer.

figure 4. corrosion morphologies associated with different corrosion mechanisms and defined as: (a) type a; (b) type b; (c) type c; (d) type d; (e) type e.

figure 5. raman spectra collected with a 632.8 nm laser on (a) type a corrosion morphology; (b) type b corrosion morphology; (c) type d corrosion morphology. parameters of the analysis: acquisition time: 10 s; number of accumulations: from 1 to 4; power: 25 % transmittance. vibrational band attribution: 5a: mixture of cu2o and sno2; malachite; 5b: sn(oh)4; sno2·2h2o with shifts due to the soil.
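the sn:cu atomic ratios quoted along the line-scan profiles follow from the eds weight fractions and the molar masses of the two elements; a minimal sketch, in which the molar masses are standard values and the example concentrations are illustrative, not measured data:

```python
# converting eds weight percentages into the sn:cu atomic ratio used
# along the line-scan profiles. molar masses are standard values; the
# example concentrations below are illustrative, not measured data.
M_SN = 118.71  # g/mol, tin
M_CU = 63.55   # g/mol, copper

def sn_cu_atomic_ratio(wt_sn: float, wt_cu: float) -> float:
    """atomic sn:cu ratio from the weight fractions of the two elements."""
    return (wt_sn / M_SN) / (wt_cu / M_CU)

# a strongly decuprified oxide, e.g. 83.5 wt.% sn against 16.5 wt.% cu,
# yields a ratio close to the maximum of about 2.7 reported for the il
ratio_il = sn_cu_atomic_ratio(83.5, 16.5)
```

equal weight fractions of sn and cu give an atomic ratio well below 1 (63.55/118.71 ≈ 0.54), which is why the ratios of 2-3 measured in the inner layers indicate a marked decuprification.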
also, higher amounts of pb are found along the profile towards the ol. accordingly, the sn:cu ratio fluctuates from the metallic interface to the edge, with a minimum of 0.3 at the metal/oxide interface and a peak of 2 in the il. this suggests that bacteria promote the formation of sn products in the il and of a mixed form of cu and sn oxides in the ol, influencing the overall corrosion mechanism [47]. when all types of corrosion morphologies are observed on the same artefact, with amplitudes that may vary from one area to another, type e is defined (figure 4e, table 1). this type of corrosion was found only on a limited set of samples and is related to changes in the environmental context that could lead to the formation of a mixture of the corrosion morphologies. however, the information concerning this corrosion morphology is still under investigation, considering the lack of information on the evolution of the environmental conditions.

3.3. metallurgical parameters on the corrosion process

the correlations between the alloy's microstructural features and the occurrence of typical corrosion morphologies are given by the pca biplot in figure 7. as displayed in the graph, correlations between the variables are visible. the distribution of the type a corrosion with respect to the variable sn is rather stochastic.

figure 6. line-scan profiles of different sites of interest based on the kα energies of cu, sn, fe, as and pb for the different morphologies: (a) type a; (b) type c; (c) type d. il = inner layer of the oxidation patina; ol = outer layer of the oxidation patina.

figure 7. biplot of the variables influencing the corrosion morphologies considered, according to the position of the samples in the newly rotated space. roman numerals indicate the quadrants.
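the scores/loadings decomposition behind a biplot such as the one in figure 7 can be sketched in pure python for the two-variable case; a minimal sketch under stated assumptions: the columns are synthetic placeholders (not the archaeological dataset), and for two standardised variables the loadings lie at ±45°, so they can be written in closed form:

```python
# sketch of the pca decomposition behind a biplot: the standardised
# data matrix is split into scores (samples in the rotated space) and
# loadings (weights of the original variables). two variables and a
# handful of synthetic placeholder samples keep the example minimal.
import math

def standardise(column):
    """centre a variable and scale it to unit (population) variance."""
    n = len(column)
    mu = sum(column) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in column) / n)
    return [(x - mu) / sd for x in column]

x1 = standardise([10.2, 11.5, 12.8, 13.1, 14.0, 9.8])   # placeholder variable 1
x2 = standardise([12.0, 18.0, 25.0, 30.0, 38.0, 10.0])  # placeholder variable 2

# for two standardised variables the correlation matrix is [[1, r], [r, 1]],
# whose eigenvectors (the loadings) lie at +/- 45 degrees.
r = sum(a * b for a, b in zip(x1, x2)) / len(x1)  # correlation coefficient
inv_sqrt2 = 1 / math.sqrt(2)
loadings = [(inv_sqrt2, inv_sqrt2), (inv_sqrt2, -inv_sqrt2)]  # pc1, pc2

# scores: the samples expressed in the rotated coordinate system
scores = [((a + b) * inv_sqrt2, (a - b) * inv_sqrt2) for a, b in zip(x1, x2)]

# variance fraction carried by pc1 (eigenvalue 1 + r out of a total of 2)
explained_pc1 = (1 + r) / 2
```

a biplot then overlays the first two score coordinates (samples) and the loading vectors (variables) on the same axes; with more than two variables the same decomposition is obtained from an svd of the standardised matrix.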
this outcome agrees with other archaeological studies, where this type of corrosion was associated with objects in which the sn content varied enormously, from 5 to 20 wt.% [5]-[8], [10], [22], as in our case (figure 8). the type b and type c corrosion morphologies show a shift in the biplot towards the iii quadrant, in the sn direction. this is consistent with the frequency distribution of sn for such morphologies (figure 8), which exhibits a narrow range of composition (from 11 to 15 wt.% sn). this could be related to the passivating properties of the element, which could shift the corrosion potential towards higher values. thus, the composition of the alloy is a rate-controlling factor for the electrochemical corrosion mechanism, as previously stated [17]-[19]. sn and cu are not variables that influence the type d morphology, since the samples move along a line that connects cu and sn (obviously negatively correlated, since an increase of one depletes the alloy of the other), as confirmed by the data (figure 7 and figure 8). the minor elements (as, co, fe, ni and pb) do not influence the occurrence of types a, b and c. a trend for corrosion morphology d is discernible from the pca analysis along a line from the ii to the iv quadrant. this morphology is affected by the presence of the minor elements fe and pb, meaning that the higher the concentration of these elements in the alloy (up to 1.2 wt.% of pb and 1 wt.% of fe), the higher the probability of finding 'tentacle-like' corrosion (mic) features. it is known that pb and fe in sufficient quantity (above 1 %, in our case) modify the corrosion mechanism towards a selective type, culminating in their dissolution [15]. however, it is still not clear how their presence can affect such a morphology, since it is influenced by bacterial colonization. we can only infer that their occurrence promotes the microbial attack.
we can hypothesize that: i) their dissolution is less harmful to the survival of the bacteria compared to that of sn and cu [48], [49]; ii) the bacteria can exploit these elements for their metabolism, using them as an alternative to carbon as an energy source [50], [51]. the degree of deformation and the grain size affect all types of microstructures to a different extent (figure 9 and figure 10). in particular, the degree of deformation seems to positively influence the occurrence of corrosion types a, b and c, while it is detrimental for type d. samples of type d are negatively correlated with d % (as they are positioned in the ii and iii quadrants), indicating that the higher the degree of deformation, the lower the occurrence of type d. figure 9 describes the degree of deformation according to the morphology observed. type d has a frequency distribution centred around 35 %, while for the other morphologies it shifts towards higher values (> 65 %). however, the d % must be considered as a function of the grain size, and all results must be evaluated accordingly. this parameter affects the morphologies differently, as suggested by figure 11, coherently with the outcomes obtained from the d %. a bivariate plot of grain size vs number of samples is displayed in figure 10. it is evident that the smaller the grain size, the higher the occurrence of corrosion types a, b and c. the higher the number of polygonal grains, the higher the possibility of finding an intergranular penetration, since defects and impurities are localized in the areas with higher gibbs free energy [15]. on the contrary, type d shows an opposite trend. figure 10d shows a normal distribution of grain size for type d with a peak at around 35-40 µm, which is twice the average grain size found for all the samples. this outcome is very interesting but still under study since, to our knowledge, no information is available on the relationship between the grain size, the d %, and the occurrence of microbial corrosion on cu-based alloys. on stainless steels, it seems that the smaller the grain size, the more effective the penetration of the corrosion [52], [53], which is in contradiction with this study. it is indeed difficult to compare the mechanism on a different material with the one found for cu alloys, since the microstructural features are different (e.g., the presence of grain boundary precipitates). it is only possible to suppose that, on a given surface, the number of impurities (localized at the grain boundaries) is lower for larger grains, and this could become a key parameter for the survival of the colonies and the occurrence of a bacteria-mediated corrosion mechanism. in this perspective, concomitant analyses on both archaeological bronzes with mic morphology and experimental alloys are needed to better understand the effect of the grain size.

figure 8. frequency distribution of the sn wt.% according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.

figure 9. frequency distribution of the degree of deformation according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.

figure 10. frequency distribution of the grain size according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.

4. conclusion

the main aim of this work was to isolate typical metallurgical parameters that influence the occurrence of specific corrosion morphologies. a categorization of the objects according to specific types of composition, microstructure and corrosion was proposed. the nature of the alloy and the microstructural features act as rate-controlling factors for the electrochemical corrosion mechanism.
results show that metallurgical factors do not play a major role in the corrosion morphology of type a, where neither the composition nor the thermomechanical treatments influence its occurrence. on the other hand, the high correlation of types b and c with the alloy composition suggests a higher degree of protection due to the tendency of sn to naturally passivate. the most interesting results come from the type d corrosion morphology (tentacle-like or mic morphology). it is affected by: i) the presence of specific minor elements in the alloy (i.e., fe and pb in quantities up to 1 %); ii) the presence of a microstructure bearing a large grain size (40 µm, twice that of the other morphologies) and a low degree of deformation (peak at 35 %). the possible causes of this trend are still unclear, and more research will be carried out in the future to clarify this point.

acknowledgement

the authors would like to thank the scientific team in charge of the 'project tintignac', in the person of the mayor of the city of naves-en-corrèze, for allowing us to sample the objects; m. drieux-daguerre of the laboratory materia viva, toulouse (fr); and b. armbruster from cnrs-umr 5608, toulouse (fr).

references

[1] m. kibblewhite, g. tóth, t. hermann, predicting the preservation of cultural artefacts and buried materials in soil, sci. tot. environm., 529 (2015), pp. 249-263. doi: 10.1016/j.scitotenv.2015.04.036
[2] j. f. d. stott, g. john, a. a. abdullahi, corrosion in soils, reference module in materials science and materials engineering, elsevier, 2018. doi: 10.1016/b978-0-12-803581-8.10524-7
[3] m. c. bernard, s. joiret, understanding corrosion of ancient metals for the conservation of cultural heritage, electrochim. acta, 54(22) (2009), pp. 5199-5205. doi: 10.1016/j.electacta.2009.01.036
[4] f. king, a natural analogue for the long-term corrosion of copper nuclear waste containers—reanalysis of a study of a bronze cannon, appl. geochem., 10(4) (1995), pp. 477-487.
doi: 10.1016/0883-2927(95)00019-g
[5] l. robbiola, r. portier, a global approach to the authentication of ancient bronzes based on the characterization of the alloy–patina–environment system, j cult. herit., 7(1) (2006), pp. 1-12. doi: 10.1016/j.culher.2005.11.001
[6] r. f. tylecote, the effects of soil conditions on the long-term corrosion of buried tin-bronzes and copper, j. archaeol. sci., 6 (1979), pp. 345-368. doi: 10.1016/0305-4403(79)90018-9
[7] l. robbiola, j.-m. blengino, c. fiaud, morphology and mechanisms of formation of natural patinas on archaeological cu-sn alloys, corros. sci., 40 (1998), pp. 2083-2111. doi: 10.1016/s0010-938x(98)00096-1
[8] l. he, j. liang, x. zhao, b. jiang, corrosion behavior and morphological features of archeological bronze coins from ancient china, microchem. j, 99(2) (2011), pp. 203-212. doi: 10.1016/j.microc.2011.05.009
[9] j. redondo-marugán, j. piquero-cilla, m. t. doménech-carbó, b. ramírez-barat, w. al sekhaneh, s. capelo, a. doménech-carbó, characterizing archaeological bronze corrosion products intersecting electrochemical impedance measurements with voltammetry of immobilized particles, electrochim. acta, 246 (2017), pp. 269-279. doi: 10.1016/j.electacta.2017.05.190
[10] g. m. ingo, c. riccucci, c. giuliani, a. faustoferri, i. pierigè, g. fierro, m. pascucci, m. albini, g. di carlo, surface studies of patinas and metallurgical features of uncommon high-tin bronze artefacts from the italic necropolises of ancient abruzzo (central italy), appl. surf. sci., 470 (2019), pp. 74-83. doi: 10.1016/j.apsusc.2018.11.115
[11] n. souissi, e. sidot, l. bousselmi, e. triki, l. robbiola, corrosion behaviour of cu–10sn bronze in aerated nacl aqueous media – electrochemical investigation, corros. sci., 49 (2007), pp. 3333-3347. doi: 10.1016/j.corsci.2007.01.013
[12] f. ammeloot, c. fiaud, e. m. m.
sutter, characterization of the oxide layers on a cu-13sn alloy in a nacl aqueous solution without and with 0.1 m benzotriazole. electrochemical and photoelectrochemical contributions, electrochim. acta, 44(15) (1999), pp. 2549-2558. doi: 10.1016/s0013-4686(98)00391-0
[13] e. sidot, n. souissi, l. bousselmi, e. triki, l. robbiola, study of the corrosion behaviour of cu–10sn bronze in aerated na2so4 aqueous solution, corros. sci., 48(8) (2006), pp. 2241-2257. doi: 10.1016/j.corsci.2005.08.020
[14] r. francis, the corrosion of copper and its alloys: a practical guide for engineers, nace international, houston, 2010.
[15] l. l. shreir, r. a. jarman, g. t. burstein (eds), corrosion, butterworth-heinemann, oxford, 1963.
[16] r. w. revie, h. h. uhlig (eds), corrosion and corrosion control. an introduction to corrosion science and engineering (4th edition), john wiley and sons, hoboken (nj), 2008, isbn: 978-0-471-73279-2

figure 11. sem elemental mapping of type d corrosion morphology with distribution of the chosen elements along the scanning area.

[17] m. j. hutchison, j. r. scully, patina enrichment with sno2 and its effect on soluble cu cation release and passivity of high-purity cu-sn bronze in artificial perspiration, electrochim. acta, 283 (2018), pp. 806-817. doi: 10.1016/j.electacta.2018.06.125
[18] d.
j. horton, h. ha, l. l. foster, h. j. bindig, j. r. scully, tarnishing and cu ion release in selected copper-base alloys: implications towards antimicrobial functionality, electrochim. acta, 169 (2015), pp. 351-366. doi: 10.1016/j.electacta.2015.04.001
[19] m. j. hutchison, p. zhou, k. ogle, j. r. scully, enhanced electrochemical cu release from commercial cu-sn alloys: fate of the alloying elements in artificial perspiration, electrochim. acta, 241 (2017), pp. 73-88. doi: 10.1016/j.electacta.2017.04.092
[20] e. sarver, y. zhang, m. edwards, review of brass dezincification corrosion in potable water systems, corr. rev., 28(3-4) (2010), pp. 155-196. doi: 10.1515/corrrev.2010.28.3-4.155
[21] l. c. tsao, c. w. chen, corrosion characterization of cu–sn intermetallics in 3.5 wt.% nacl solution, corros. sci., 63 (2012), pp. 393-398. doi: 10.1016/j.corsci.2013.11.010
[22] g. m. ingo, c. riccucci, g. guida, m. pascucci, c. giuliani, e. messina, g. fierro, g. di carlo, micro-chemical investigation of corrosion products naturally grown on archaeological cu-based artefacts retrieved from the mediterranean sea, appl. surf. sci., 470 (2019), pp. 695-706. doi: 10.1016/j.apsusc.2018.11.144
[23] h. wei, w. kockelmann, e. godfrey, d. a. scott, the metallography and corrosion of an ancient chinese bimetallic bronze sword, j cult. herit., 37 (2019), pp. 259-265. doi: 10.1016/j.culher.2018.10.004
[24] o. oudbashi, s. m. emami, a note on the corrosion morphology of some middle elamite copper alloy artefacts from haft tappeh, south-west iran, stud. conserv., 55(1) (2010), pp. 20-25. doi: 10.1179/sic.2010.55.1.20
[25] d. a. scott, an examination of the patina and corrosion morphology of some roman bronzes, j am. inst. conserv., 33(1) (1994), pp. 1-23. doi: 10.1179/019713694806066419
[26] i. g. sandu, o. mircea, v. vasilache, i. sandu, influence of archaeological environment factors in alteration processes of copper alloy artifacts, microscopy res. techn., 75(12) (2012), pp. 1646-1652.
doi: 10.1002/jemt.22110
[27] m. r. pinasco, m. g. ienco, p. piccardo, g. pellati, e. stagno, metallographic approach to the investigation of metallic archaeological objects, annali di chimica: j anal., environm. cult. herit. chem., 97(7) (2007), pp. 553-574. doi: 10.1002/adic.200790037
[28] p. piccardo, m. pernot, studio analitico strutturale di alcuni vasi celtici in bronzo [analytical and structural study of some celtic bronze vessels], la metallurgia italiana, 11 (1997), pp. 43-52.
[29] m. mödlinger, p. piccardo, manufacture of eastern european decorative tin-bronze discs from twelfth century bc, archaeol. anthropol. sci., 5 (2013), pp. 299-309. doi: 10.1007/s12520-012-0111-6
[30] j. schindelin, i. arganda-carreras, e. frise, v. kaynig, m. longair, t. pietzsch, s. preibisch, c. rueden, s. saalfeld, b. schmid, j. y. tinevez, d. j. white, v. hartenstein, k. eliceiri, p. tomancak, a. cardona, fiji: an open-source platform for biological-image analysis, nat. methods, 9(7) (2012), pp. 676-682. doi: 10.1038/nmeth.2019
[31] y. mori, m. kuroda, n. makino, nonlinear principal component analysis and its applications, springer, singapore, 2016. doi: 10.1007/978-981-10-0159-8
[32] c. maniquet, b. armbruster, m. pernot, t. lejars, m. drieux-daguerre, l. espinasse, p. mora, aquitania, 27 (2011), pp. 63-150.
[33] p. piccardo, b. mille, l. robbiola, tin and copper oxides in corroded archaeological bronzes, in: p. dillmann, g. beranger, p. piccardo, h. matthiesen (eds), corrosion of metallic heritage artefacts—investigation, conservation and prediction of long-term behaviour, woodhead, cambridge, 2007, pp. 239-262. doi: 10.1533/9781845693015.239
[34] k. tronner, a. g. nord, g. c. borg, corrosion of archaeological bronze artefacts in acidic soil, water, air, soil pollut., 85 (1995), pp. 2725-2730. doi: 10.1007/bf01186246
[35] g. m. ingo, t. de caro, c. riccucci, e. angelini, s. grassini, s.
balbi et al., large scale investigation of chemical composition, structure and corrosion mechanism of bronze archeological artefacts from mediterranean basin, appl. phys. a, 83 (2006), pp. 513-520. doi: 10.1007/s00339-006-3550-z
[36] a. doménech-carbó, m. t. doménech-carbó, i. martínez-lázaro, electrochemical identification of bronze corrosion products in archaeological artefacts. a case study, microchim. acta, 162 (2008), pp. 351-359. doi: 10.1007/s00604-007-0839-3
[37] p. atkins, l. jones, chemical principles: the quest for insight (3rd ed.), w. h. freeman and company, new york, 2005, isbn 9781319154196
[38] d. a. scott, copper and bronze in art: corrosion, colorants, and conservation, j. paul getty museum, 2002.
[39] x. deng, q. zhang, e. zhou, c. ji, j. huang, m. shao, m. ding, x. xu, morphology transformation of cu2o sub-microstructures by sn doping for enhanced photocatalytic properties, j alloys comp., 649 (2015), pp. 1124-1129. doi: 10.1016/j.jallcom.2015.07.124
[40] y. du, n. zhang, c. wang, photo-catalytic degradation of trifluralin by sno2-doped cu2o crystals, catalysis commun., 11 (2010), pp. 670-674. doi: 10.1016/j.catcom.2010.01.021
[41] n. budhiraja, sapna, v. kumar, m. tomar, v. gupta, s. k. singh, investigation on physical properties of sn-modified cubic cu2o nanostructures, j supercond. nov. magn., 32 (2019), pp. 1671-1679. doi: 10.1007/s10948-018-4858-6
[42] c. b. fitzgerald, m. venkatesan, a. p. douvalis, s. huber, j. m. d. coey, sno2 doped with mn, fe or co: room temperature dilute magnetic semiconductors, j appl. phys., 95 (2004), pp. 7390-7392. doi: 10.1063/1.1676026
[43] z. junying, y. qu, w. qianghong, room temperature ferromagnetism of ni-doped sno2 system, modern appl. sci., 4(11) (2010). doi: 10.5539/mas.v4n11p124
[44] p. piccardo, m. mödlinger, g. ghiara, s. campodonico, v.
bongiorno, investigation on a “tentacle-like” corrosion feature on bronze age tin-bronze objects, appl phys a, 113 (4) (2013), pp. 1039-1047. doi: 10.1007/s00339-013-7732-1 . [45] g. ghiara, c. grande, s. ferrando, p. piccardo, the influence of pseudomonas fluorescens on corrosion products of archaeological tin-bronze analogues, jom, 70(1) (2018), pp. 8185. doi: 10.1007/s11837-017-2674-2 [46] g. ghiara, l. repetto, p. piccardo, the effect of pseudomonas fluorescens on the corrosion morphology of archaeological tin bronze analogues, jom, 71(2) (2019), pp. 779-783. doi: 10.1007/s11837-018-3138-z [47] g. ghiara, r. spotorno, s.p. trasatti, p. cristiani, effect of pseudomonas fluorescens on the electrochemical behaviour of a single-phase cu-sn modern bronze, corros. sci., 139 (2018), pp. 227-234. doi: 10.1016/j.corsci.2018.05.009 . [48] b. little, p. wagner, f. mansfeld, an overview of microbiologically influenced corrosion, electrochim. acta, 37(12) (1992), 2, pp. 185-2194. doi: 10.1016/0013-4686(92)85110-7 https://doi.org/10.1016/j.electacta.2018.06.125 https://doi.org/10.1016/j.electacta.2015.04.001 https://doi.org/10.1016/j.electacta.2017.04.092 https://doi.org/10.1515/corrrev.2010.28.3-4.155 https://doi.org/10.1016/j.corsci.2013.11.010 https://doi.org/10.1016/j.apsusc.2018.11.144 https://doi.org/10.1016/j.culher.2018.10.004 https://doi.org/10.1179/sic.2010.55.1.20 https://doi.org/10.1179/019713694806066419 https://doi.org/10.1002/jemt.22110 https://doi.org/10.1002/adic.200790037 https://doi.org/10.1007/s12520-012-0111-6 https://doi.org/10.1038/nmeth.2019 https://doi.org/10.1007/978-981-10-0159-8 https://doi.org/10.1533/9781845693015.239 https://doi.org/10.1007/bf01186246 https://doi.org/10.1007/s00339-006-3550-z https://doi.org/10.1007/s00604-007-0839-3 https://doi.org/10.1016/j.jallcom.2015.07.124 https://doi.org/10.1016/j.catcom.2010.01.021 https://doi.org/10.1007/s10948-018-4858-6 https://doi.org/10.1063/1.1676026 https://doi.org/10.5539/mas.v4n11p124 
https://doi.org/10.1007/s00339-013-7732-1 https://doi.org/10.1007/s11837-017-2674-2 https://doi.org/10.1007/s11837-018-3138-z https://doi.org/10.1016/j.corsci.2018.05.009 https://doi.org/10.1016/0013-4686(92)85110-7 acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 10 [49] b. j. little, j. s. lee, microbiologically influenced corrosion, john wiley and sons, hoboken, 2007. [50] d. enning, j. garrelfs, corrosion of iron by sulfate-reducing bacteria: new views of an old problem, appl. environm. microbiol., 80 (4), (2014), pp. 1226-1236. doi: 10.1128/aem.02848-13 . [51] s. m. tiquia-arashiro, lead absorption mechanisms in bacteria as strategies for lead bioremediation, appl. microbiol. biotechnol., 102(13) (2018), pp. 5437-5444. doi: 10.1007/s00253-018-8969-6 [52] j. r. ibars, d. a. moreno, c. ranninger, mic of stainless steels: a technical review on the influence of microstructure, intern. biodeter. biodegrad., 29 (1992), pp. 343-355. doi: 10.1016/0964-8305(92)90051-o [53] x. shi, k. yang, m. yan, w. yan, y. shan, study on microbiologically influenced corrosion resistance of stainless steels with weld seams, front. mater., (2020). 
Probability theory as a logic for modelling the measurement process

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-5
ACTA IMEKO | www.imeko.org | June 2023 | Volume 12 | Number 2 | 5

Giovanni Battista Rossi, Francesco Crenna, Marta Berardengo
Measurement and Biomechanics Lab – DIME – Università degli Studi di Genova, Via Opera Pia 15 A, 16145 Genova, Italy

Section: Research paper
Keywords: measurement theory; epistemology; philosophy of probability; measurement modelling; probabilistic models
Citation: Giovanni Battista Rossi, Francesco Crenna, Marta Berardengo, Probability theory as a logic for modelling the measurement process, Acta IMEKO, vol. 12, no. 2, article 13, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-13
Section editor: Eric Benoit, Université Savoie Mont Blanc, France
Received July 2, 2022; in final form March 16, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Giovanni Battista Rossi, e-mail: g.b.rossi@unige.it

Abstract: The problem of the nature of probability has been drawn to the attention of the measurement community in the comparison between the frequentist and the Bayesian views, in the expression and the evaluation of measurement uncertainty. In this regard, it is here suggested that probability can be interpreted as a logic for developing models of measurement capable of accounting for uncertainty. This contributes to regarding measurement theory as an autonomous discipline, rather than a mere application field for statistics. Following a previous work in this line of research, where only measurement representations, through the various kinds of scales, were considered, here the modelling of the measurement process is discussed and the validity of the approach is confirmed, which suggests that the vision of probability as a logic could be adopted for the entire measurement theory. With this approach, a deterministic model can be turned into a probabilistic one by simply shifting from a deterministic to a probabilistic semantics.

1. Introduction

The problem of the nature of probability has been a topic in measurement science for over twenty years, as a part of the debate on the expression and evaluation of measurement uncertainty, significantly raised by the publication of the Guide to the Expression of Uncertainty in Measurement (GUM) [1] and by its long-lasting and still ongoing revision process [2]. In this debate, the opposition between the Bayesian and the frequentist schools of thought in statistics soon emerged, which involves the consideration of the nature of probability. In this regard, some authors pursue an explicit adoption of a Bayesian paradigm for the overall context of uncertainty evaluation [3], [4]; others instead suggest maintaining a more open attitude [5], [6], even when expressing a preference for the Bayesian view [7], or including the frequentist approach [8], when appropriate.

Here the focus is put on measurement modelling, and probability is regarded as a logical and mathematical tool for developing such models in a way that accounts for uncertainty. Alternative choices could be made, for example those based on evidence theory [9], [10], but here only probability is discussed and investigated. Measurement modelling has recently been the subject of investigation, not only in respect of practical issues [11], but also of theoretical and foundational aspects [12], [13]. Yet the "nature" of probability in such modelling seems not to have been discussed explicitly, which is instead the goal of this communication. Basically, it is here suggested that probability can be regarded as an appropriate logic for developing models of measurement when uncertainty must be accounted for. Therefore, in Section 2 deterministic measurement modelling will be addressed first. Then, in Section 3, the logical approach to probability proposed here will be presented. Its application to probabilistic measurement modelling will be addressed in Section 4, and conclusions will be drawn in Section 5.

2. Deterministic measurement modelling

2.1. Generic modelling issues

It is here suggested that probability can be understood as a logic for developing measurement models. The notion of model thus needs reviewing. To establish some terminology, let us consider a system as a set of entities with relations among them [14]. A model can thus be understood as an abstract system, capable of describing a class of real systems. For example, if we consider the height of the inhabitants of a generic town, the model, 𝑀, can be expressed by a function, ℎ: 𝑈 → 𝑋, that associates with each inhabitant his/her height, on a proper height scale. For maximum simplicity, in the following illustrative examples height will be considered as a purely ordinal property, and 𝑋 the set of the numbers expressing height on an ordinal scale. Therefore, the model can be synthetically expressed by the triple:

𝑀 = (𝑈, 𝑋, ℎ). (1)

Let us now introduce the distinction between deterministic and probabilistic models. A typical statement related to model 𝑀 is:

ℎ(𝑢) = 𝑥, (2)

with 𝑢 ∈ 𝑈 and 𝑥 ∈ 𝑋. Yet the truth of this statement is undefined until a specific town, 𝑇, is considered.
With reference to 𝑇, instead, if 𝐴 denotes the set of its inhabitants, 𝑋𝐴 the set of their height values and ℎ𝐴 the corresponding height function, the model is now specialised to 𝑇, that is:

𝑀(𝑇) = (𝐴, 𝑋𝐴, ℎ𝐴). (3)

Suppose for example that in 𝑇 there are just 3 inhabitants, 𝐴 = {𝑎, 𝑏, 𝑐}, that 𝑋 = {1,2}, and ℎ𝐴 = {(𝑎, 2), (𝑏, 1), (𝑐, 1)}; then

(𝐴, 𝑋𝐴, ℎ𝐴) = ({𝑎, 𝑏, 𝑐}, {1,2}, {(𝑎, 2), (𝑏, 1), (𝑐, 1)}). (4)

The structure in equation (4) provides a semantics, that is, a criterion of truth for the deterministic model 𝑀, since it allows us to ascertain the truth of any statement involved in the model. For example, ℎ(𝑎) = 2 is true, whilst ℎ(𝑏) = 2 is false. The general truth criterion is thus, for town 𝑇, 𝑢 ∈ 𝐴, 𝑥 ∈ 𝑋𝐴:

ℎ(𝑢) = 𝑥 ↔ (𝑢, 𝑥) ∈ ℎ𝐴. (5)

Let us call 𝑇 an instance of the model 𝑀: then a model is deterministic if, for any instance of the model, all the statements concerning the model are either true or false. Conversely, we will call probabilistic a model where, for at least one of its instances, there is at least one statement concerning the model whose truth state cannot be ascertained, but to which only a probability can be assigned. The transition from a deterministic to a probabilistic description will be discussed in Section 3.

2.2. Modelling the measurand

Measurement modelling concerns both the measurand and the measurement process. The modelling of the measurand aims at ensuring that the property of interest can be measured, and it is thus closely related to the measurability issue [15]. At a foundational level, this implies assuming that the quantity under consideration can be measured on an appropriate measurement scale, i.e., that it possesses the required empirical properties. For example, (empirical) order is required for an ordinal scale, whilst order and difference are needed in the case of an interval scale.
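For concreteness, the model instance (4) and the truth criterion (5) of Section 2.1 can be sketched in a few lines of Python (an illustrative sketch, not part of the original paper; the names are ours):

```python
# Deterministic model instance M(T) = (A, X_A, h_A) of equation (4):
# the height function h_A is stored as a set of (inhabitant, value) pairs.
A = {"a", "b", "c"}                     # inhabitants of town T
X_A = {1, 2}                            # ordinal height values
h_A = {("a", 2), ("b", 1), ("c", 1)}    # the height function

def is_true(u, x):
    """Truth criterion (5): the statement h(u) = x holds iff (u, x) is in h_A."""
    return (u, x) in h_A

print(is_true("a", 2))   # True
print(is_true("b", 2))   # False
```

In a deterministic model every such statement is decidably true or false; the probabilistic semantics of Section 3 replaces this yes/no criterion with a probability distribution over possible realisations of the structure.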
At a more operational level, modelling the measurand may account for the interactions it has with the environment and with the measuring system, to ensure that they do not hinder measurement, to compensate for them, if possible, and to account for them in the uncertainty budget. (Footnote: "quantity" here stands for "measurable property".) Here only the first aspect, that is, the possession of proper empirical properties, is briefly discussed. This is typically the scope of the so-called representational theory, and it can be summarised by one or more representation theorems. For example, taking again the case of the height of persons, still considering it as an ordinal property, an operation of empirical comparison needs to be considered, which allows us to determine, for any pair of persons, 𝑢 and 𝑣, whether 𝑢 is taller than 𝑣, 𝑢 ≻ℎ 𝑣, or 𝑣 is taller than 𝑢, 𝑣 ≻ℎ 𝑢, or they are equally tall, 𝑢 ∼ℎ 𝑣. One such operation, provided that it is transitive, ensures that the function "height of persons", introduced in the previous section, exists. The corresponding model is now:

𝑀′ = (𝑈, 𝑋, ≽, 𝑚), (6)

where ℎ, height, has been replaced by the more general symbol 𝑚, "measure", and the subscript ℎ has been dropped accordingly. Then, the corresponding representation theorem reads:

𝑢 ≽ 𝑣 ↔ 𝑚(𝑢) ≥ 𝑚(𝑣). (7)

Yet, although the existence of the function ℎ is mathematically ensured by the properties of the empirical relation ≽ℎ, its actual experimental determination requires a measurement process, which is to be modelled now.

2.3. Modelling the measurement process

For modelling the measurement process, the approach proposed in reference [16] is followed here and only very briefly recalled. It is suggested that measurement can be parsed into two phases, called observation and restitution. In the observation phase the "object" carrying the property to be measured interacts with the measurement system, in such a way that an observable output, called the instrument indication, is produced, based on which a measurement value can be assigned to the measurand. (Footnote: here the term "object" has to be understood as "the carrier of the property to be measured", irrespective of it being a concrete object, like a workpiece, or an event, such as a sound or a shock, or even a person, in the case of psychometrics [13].) The successive phase, where the result is produced, based on the instrument indication and accounting for calibration results (calibration curve), is here called restitution. This approach is essentially in agreement with others recently proposed in the literature [17]-[20].

For example, in the case of persons' height, the measuring device may consist of a platform, on which the subject to be measured must stand erect, and of an ultrasonic sensor, placed at a fixed height over the head of the subject. The instrument generates a signal whose intensity, 𝑦, is proportional to the distance of the sensor from the top of the head of the subject, which constitutes the instrument indication. Let us call 𝜑 the function that describes this phase: thus, if 𝑎 is the object to be measured,

𝑦 = 𝜑(𝑎). (8)

Calibration requires the pre-constitution of a reference (measurement) scale, 𝑅 = {(𝑠1, 𝑥1), (𝑠2, 𝑥2), … , (𝑠𝑛, 𝑥𝑛)}, which includes a set of standards, 𝑆 = {𝑠1, 𝑠2, … , 𝑠𝑛}, and their corresponding measurement values, 𝑋 = {𝑥1, 𝑥2, … , 𝑥𝑛}. Calibration can be done by inputting such standards to the measuring system and recording their corresponding indications, thus forming the function 𝜑𝑠 = {(𝑠1, 𝑦1), (𝑠2, 𝑦2), … , (𝑠𝑛, 𝑦𝑛)}, which is a subset of 𝜑, defined above. Based on this information, it is possible to obtain a calibration function, 𝑓 = {(𝑥1, 𝑦1), (𝑥2, 𝑦2), … , (𝑥𝑛, 𝑦𝑛)}, that establishes a correspondence between the value of each standard and the corresponding output of the measuring device. Calibration allows us to perform measurement since, once the instrument indication has been obtained, it is possible to assign to the measurand, in the restitution phase, the value of the standard that would have produced the same output, that is

𝑥̂ = 𝑓⁻¹(𝑦) ≜ 𝑔(𝑦). (9)

Lastly, we obtain a description of the overall measurement process by combining observation and restitution:

𝑥̂ = 𝛾(𝑎) = 𝑔(𝜑(𝑎)) = 𝑓⁻¹(𝜑(𝑎)). (10)

This equation constitutes a basic deterministic model of the measurement process. In [20], a more detailed model was presented, where the generation of the instrument indication was more deeply investigated. Yet the structure of that model is compatible with the one just recalled, which will be used in the following, for the sake of simplicity. Let us now show how this model can be turned into a probabilistic one by just shifting from a deterministic to a probabilistic semantics, which is the main goal of this communication. Prior to doing so, the present approach to considering probability theory as a logic must be presented, with a special focus on the notion of a probabilistic function, with the associated operations of inversion and composition, which are necessary for treating equation (10).

3. Probability as a logic for models formulated through first-order languages

3.1. Probabilistic semantics

Let us consider models formulated in a first-order language, 𝑳, that is, a language whose elementary propositions concern properties of, or relations among, individuals, whilst more complex ones can be formed by combining the elementary ones through logical operators, such as conjunction, ∧, disjunction, ∨, or negation, ¬ [21]. Such a language is rich enough for our purposes, as will appear in the following. Once a statement is made, it is of interest to assess its truth or falsity. This is the object of semantics, and the basis of a deterministic semantics, for statements of our interest, has already been presented in Section 2.1.
The purpose of the proposed theory is to replace a deterministic semantics with a probabilistic one [22]. As we have seen, a deterministic model, 𝑀𝑑, may be expressed by a structure, 𝐻 = (𝐶, 𝑅), where 𝐶 = 𝐴1 × 𝐴2 × … × 𝐴𝑝 is a Cartesian product of sets and 𝑅 = (𝑅1, 𝑅2, … , 𝑅𝑞), where each 𝑅𝑖 is an 𝑚𝑖-ary relation on 𝐶, expressed in the language 𝑳. The truth of a generic statement, 𝜙, concerning 𝑀𝑑, can be assessed in the following way:
• if 𝜙 is an elementary proposition, it is true if, for some 𝑅𝑖 ∈ 𝑅, 𝜙 ∈ 𝑅𝑖;
• if instead it is a combination of elementary propositions through logical operators, it is true if it satisfies the truth condition of the operators combined with the truth state of the elementary propositions involved.

A probabilistic model, 𝑀𝑝, instead is constituted by a finite collection of structures, 𝐸 = {𝐻1, 𝐻2, … , 𝐻𝑀}, all associated to the same collection of sets, 𝐶, and a probability distribution 𝑃(𝐻𝑖) over 𝐸, such that

𝑃(𝐸) = ∑_{𝑖=1}^{𝑀} 𝑃(𝐻𝑖) = 1. (11)

Such structures constitute a set of possible realisations of the same basic underlying structure and are sometimes suggestively called "possible worlds". Then, the probability of any statement 𝜙 associated to 𝑀𝑝 is

𝑃(𝜙) = 𝑃{𝐻 ∈ 𝐸|𝜙} = ∑_{𝐻𝑖∈𝐸|𝜙} 𝑃(𝐻𝑖), (12)

where 𝐻 ∈ 𝐸|𝜙 denotes a structure where 𝜙 is true, {𝐻 ∈ 𝐸|𝜙} is the subset of 𝐸 that includes all the structures in which 𝜙 is true, and the sought probability is the sum of the probabilities of such structures. To apply this approach to measurement, its application to probabilistic m-ary relations and to probabilistic functions must be investigated, with a special focus on probabilistic inversion.

3.2. Probabilistic relations

If 𝑅(𝑢1, 𝑢2, … , 𝑢𝑚) is an m-ary relation and 𝐸 is a finite collection of structures, 𝐻𝑖 = (𝐶, 𝑅𝑖), where the truth of 𝑅 can be ascertained, we obtain:

𝑃(𝑅(𝑎1, 𝑎2, … , 𝑎𝑚)) = 𝑃{𝐻 ∈ 𝐸|𝑅(𝑎1, 𝑎2, … , 𝑎𝑚)} = ∑_{𝐻𝑖∈𝐸|𝑅(𝑎1,𝑎2,…,𝑎𝑚)} 𝑃(𝐻𝑖).
(13)

Probabilistic relations were treated in detail in reference [22] and thus are not pursued further here.

3.3. Probabilistic functions

Considering a function 𝑓: 𝐴 → 𝐵, the associated structure is 𝐻 = (𝐴 × 𝐵, 𝑓), and the generic statement 𝑣 = 𝑓(𝑢) denotes a binary relation on 𝐴 × 𝐵 such that ∀𝑢 ∈ 𝐴, ∃𝑣 ∈ 𝐵 (𝑣 = 𝑓(𝑢)), and ∀𝑢 ∈ 𝐴 ∀𝑣, 𝑧 ∈ 𝐵 (𝑣 = 𝑓(𝑢) ∧ 𝑧 = 𝑓(𝑢) → 𝑣 = 𝑧). Let us then consider a finite collection, 𝐸, of such structures and an associated probability distribution on 𝐸. Then the probability that the above statement holds true for a pair (𝑎, 𝑏), 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵, can be calculated by:

𝑃(𝑓(𝑎) = 𝑏) = 𝑃{𝐻 ∈ 𝐸|𝑓(𝑎) = 𝑏} = ∑_{𝐻𝑖|𝑓(𝑎)=𝑏} 𝑃(𝐻𝑖). (14)

3.4. Probabilistic inversion

Consider now the probabilistic inverse of the function 𝑓 in the previous subsection, i.e., 𝑔: 𝐵 → 𝐴. Let us first consider the possibility of calculating directly the probability associated to each value of 𝑔 from the knowledge of the corresponding direct function 𝑓, through the very definition of inverse function, by establishing the following rule:

𝑃(𝑔(𝑏) = 𝑎) ∝ 𝑃{𝐻 ∈ 𝐸|𝑓(𝑎) = 𝑏} = ∑_{𝐻𝑖|𝑓(𝑎)=𝑏} 𝑃(𝐻𝑖). (15)

After imposing the closure condition ∑_{𝑢∈𝐴} 𝑃(𝑔(𝑏) = 𝑢) = 1, we obtain the rule:

𝑃(𝑔(𝑏) = 𝑎) = ∑_{𝐻𝑖|𝑓(𝑎)=𝑏} 𝑃(𝐻𝑖) / ∑_{𝑢∈𝐴} 𝑃(𝑓(𝑢) = 𝑏). (16)

Let us briefly discuss the relationship between probabilistic inversion, as presented here, and the Bayes-Laplace rule. To do that, let now 𝑢 and 𝑣 be two variables that denote generic elements of 𝐴 and 𝐵, respectively, and let 𝑎 and 𝑏 be two specific elements of 𝐴 and 𝐵, respectively. Then we can form the atomic statements 𝜙 = (𝑢 = 𝑎) and 𝜓 = (𝑣 = 𝑏), which mean, for example, that, in some circumstance, the element 𝑎 ∈ 𝐴 occurred and the element 𝑏 ∈ 𝐵 occurred. Then function 𝑓 induces a conditional probability measure on 𝐴 × 𝐵, defined by:

𝑃(𝜓|𝜙) = 𝑃((𝑣 = 𝑏)|(𝑢 = 𝑎)) = 𝑃(𝑏 = 𝑓(𝑎)).
(17)

Then the (inverse) conditional probability 𝑃(𝜙|𝜓) equals the probability of the inverse function, and can be calculated through the Bayes-Laplace rule, with a uniform prior, that is

𝑃(𝜙|𝜓) = 𝑃((𝑢 = 𝑎)|(𝑣 = 𝑏)) = 𝑃((𝑣 = 𝑏)|(𝑢 = 𝑎)) / 𝑃((𝑣 = 𝑏)) = 𝑃(𝑎 = 𝑔(𝑏)). (18)

Therefore, in this context, the Bayes-Laplace rule can be interpreted as a procedure for calculating the inverse of a probabilistic function. Consequently, its use in measurement can be presented just as a step in measurement modelling, as will be shown in the next section, without making any commitment to Bayesian statistics, with its philosophical and epistemological implications [23].

3.5. Composition of probabilistic functions

Lastly, let 𝑓, 𝑔 and ℎ be three probabilistic functions, 𝑓: 𝐴 → 𝐵, 𝑔: 𝐵 → 𝐶 and ℎ: 𝐴 → 𝐶, where, for 𝑢 ∈ 𝐴, ℎ(𝑢) = 𝑔(𝑓(𝑢)). Then the probability of statements concerning ℎ can be assessed through the rule:

𝑃(𝑤 = ℎ(𝑢)) = ∑_{𝑣∈𝐵} 𝑃(𝑤 = 𝑔(𝑣)) 𝑃(𝑣 = 𝑓(𝑢)), (19)

where 𝑤 ∈ 𝐶. Let us now apply the above rules to the probabilistic modelling of measurement processes.

4. Probability as a logic for measurement modelling

4.1. Modelling the measurand

In Section 2.2 a deterministic model was developed, based on equations (6) and (7). Such a model implies that the empirical relations appearing in it are uncertainty-free. If, instead, the intrinsic uncertainty of the measurand, which basically corresponds to the "definitional uncertainty" in the VIM, needs to be considered, such a model must be turned into a probabilistic one. This can be done by applying equation (13) to equation (7), which ultimately yields

𝑃(𝑢 ≽ 𝑣) = 𝑃(ℎ(𝑢) ≥ ℎ(𝑣)), (20)

as proved in reference [22]; similar results can be obtained for all the scales of practical interest.

4.2. Modelling the measurement process

The overall modelling of the measurement process has been outlined in Section 2.3, where it was suggested that the overall measurement process can be described by the measurement function 𝛾: 𝐴 → 𝑋, characterised by equation (10).
Therefore, a proper structure for the measurement process is

𝑀″ = (𝐴 × 𝑌 × 𝑋, 𝜑, 𝑓, 𝛾). (21)

Yet this description does not include the modelling of the measurand and does not allow one to account for the associated intrinsic or definitional uncertainty, as previously discussed. This is acceptable in practice when such uncertainty is considered negligible (see reference [22] for additional details on this representational side of the question). Yet in the general case, models 𝑀′ and 𝑀″ must be merged, yielding (for a purely ordinal quantity) the structure:

𝑁 = (𝐴 × 𝑌 × 𝑋, ≽, 𝑚, 𝜑, 𝑓, 𝛾). (22)

As anticipated, this overall model can be interpreted either as deterministic or probabilistic, after interpreting the relations, variables and/or functions involved accordingly. Recalling the previously presented equations, we obtain, for a generic probabilistic statement concerning the measurement function 𝛾, in model 𝑀″:

𝑃(𝑥̂ = 𝛾(𝑎)) = 𝑃(𝑥̂ = 𝑓⁻¹(𝜑(𝑎))) = ∑_{𝑦} [𝑃(𝑦 = 𝑓(𝑥̂)) / ∑_{𝑤} 𝑃(𝑦 = 𝑓(𝑤))] 𝑃(𝑦 = 𝜑(𝑎)). (23)

On the other hand, if we want to account for intrinsic uncertainty as well, we should refer to model 𝑁 and consider 𝑚 as a probabilistic function too. Note, in this regard, that the function 𝜑: 𝐴 → 𝑌 only depends (at least ideally) on the way in which the object 𝑎 realises and manifests the quantity, 𝑥, of interest. Let us call it 𝑥𝑎 = 𝑚(𝑎). Therefore,

𝑦 = 𝜑(𝑎) = 𝑓(𝑚(𝑎)). (24)

4.3. A very simple numerical illustrative example

Let us finally illustrate the entire procedure by a very simple numerical example, concerning the (purely ordinal) height of three subjects, call them John (𝑎), Paul (𝑏) and Evelyn (𝑐). Suppose John is definitely taller than the other two, so that 𝑃(𝑎 ≻ 𝑏) = 𝑃(𝑎 ≻ 𝑐) = 1.0. Let instead Paul be almost as tall as Evelyn, with 𝑃(𝑏 ∼ 𝑐) = 0.6, 𝑃(𝑏 ≻ 𝑐) = 0.1 and 𝑃(𝑐 ≻ 𝑏) = 0.3.
Then it is easy to check that a proper function 𝑚: {𝑎, 𝑏, 𝑐} → {1,2} will have:

𝑃(𝑚(𝑎) = 1) = 0.0; 𝑃(𝑚(𝑎) = 2) = 1.0;
𝑃(𝑚(𝑏) = 1) = 0.9; 𝑃(𝑚(𝑏) = 2) = 0.1;
𝑃(𝑚(𝑐) = 1) = 0.7; 𝑃(𝑚(𝑐) = 2) = 0.3.

Let us now consider the calibration function, 𝑓: 𝑋 → 𝑋, let 𝑋 = {1,2}, and let the probability of 𝑓 be such that:

𝑃(𝑓(1) = 1) = 0.8; 𝑃(𝑓(1) = 2) = 0.2;
𝑃(𝑓(2) = 1) = 0.1; 𝑃(𝑓(2) = 2) = 0.9.

Then the probability of the inverse function 𝑔 is such that:

𝑃(𝑔(1) = 1) = 8/9; 𝑃(𝑔(1) = 2) = 1/9;
𝑃(𝑔(2) = 1) = 2/11; 𝑃(𝑔(2) = 2) = 9/11.

The observation function 𝜑 is obtained by composing 𝑓 and 𝑚, according to equation (19), which yields:

𝑃(𝜑(𝑎) = 1) = 0.10; 𝑃(𝜑(𝑎) = 2) = 0.90;
𝑃(𝜑(𝑏) = 1) = 0.73; 𝑃(𝜑(𝑏) = 2) = 0.27;
𝑃(𝜑(𝑐) = 1) = 0.59; 𝑃(𝜑(𝑐) = 2) = 0.41.

Lastly, the measurement function 𝛾 results from the composition of 𝑔 with 𝜑, yielding:

𝑃(𝛾(𝑎) = 1) = 0.251; 𝑃(𝛾(𝑎) = 2) = 0.749;
𝑃(𝛾(𝑏) = 1) = 0.698; 𝑃(𝛾(𝑏) = 2) = 0.302;
𝑃(𝛾(𝑐) = 1) = 0.599; 𝑃(𝛾(𝑐) = 2) = 0.401.

5. Conclusion

The problem of the interpretation of probability in measurement has been considered, and it has been suggested to regard probability theory as a logic for developing probabilistic models. A remarkable feature of this approach is that, after modelling measurement through the relations holding among the transformations involved, the model can be treated as either deterministic or probabilistic, depending upon the chosen semantics. Alternative approaches can be considered, such as fuzzy logic or possibility theory [9], [10]. All these approaches have their merits and limitations, and the choice may be made depending upon the assumptions made in the development of the model. The logicist approach developed here may overcome some reservations about probability theory, related to the limits of the frequentist and the subjectivist approaches, and may thus contribute to a wider use of the probabilistic approach.
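The numerical example of Section 4.3 can be checked mechanically. The following Python sketch (not part of the original paper; the helper names are ours) stores a probabilistic function as a mapping from each input to its output distribution, implements the inversion rule (16) and the composition rule (19), and reproduces the tabulated values:

```python
def invert(f):
    """Probabilistic inverse, equation (16): for each output b, renormalise
    the direct probabilities P(f(a) = b) over the whole input domain."""
    outputs = {b for dist in f.values() for b in dist}
    return {b: {a: f[a].get(b, 0.0) / sum(f[u].get(b, 0.0) for u in f)
                for a in f}
            for b in outputs}

def compose(outer, inner):
    """Composition, equation (19): P(w = h(u)) = sum_v P(w = outer(v)) P(v = inner(u))."""
    h = {}
    for u, dist in inner.items():
        h[u] = {}
        for v, p_v in dist.items():
            for w, p_w in outer[v].items():
                h[u][w] = h[u].get(w, 0.0) + p_w * p_v
    return h

# The measurand model m and the calibration function f of Section 4.3:
m = {"a": {1: 0.0, 2: 1.0},   # John
     "b": {1: 0.9, 2: 0.1},   # Paul
     "c": {1: 0.7, 2: 0.3}}   # Evelyn
f = {1: {1: 0.8, 2: 0.2},
     2: {1: 0.1, 2: 0.9}}

g = invert(f)          # g(1): {1: 8/9, 2: 1/9}; g(2): {1: 2/11, 2: 9/11}
phi = compose(f, m)    # observation, e.g. phi(b) = {1: 0.73, 2: 0.27}
gamma = compose(g, phi)  # measurement, e.g. gamma(b) ~ {1: 0.698, 2: 0.302}

print(round(gamma["b"][1], 3), round(gamma["c"][1], 3))   # 0.698 0.599
```

Storing distributions as dictionaries keeps the possible-world bookkeeping implicit: only the induced probabilities of equations (14), (16) and (19) are needed to run the whole observation-restitution chain.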
References

[1] BIPM, IEC, IFCC, ISO, IUPAC, IUPAP, OIML, Guide to the Expression of Uncertainty in Measurement, ISO, Geneva, Switzerland, 1993, corrected and reprinted 1995, ISBN 92-67-10188.
[2] W. Bich, M. Cox, C. Michotte, Towards a new GUM – an update, Metrologia, 53, 2016, S149-S159. doi: 10.1088/0026-1394/53/5/s149
[3] W. Bich, From errors to probability density functions. Evolution of the concept of measurement uncertainty, IEEE Trans. Instr. Meas., 61, 2012, pp. 2153-2159. doi: 10.1109/tim.2012.2193696
[4] I. Lira, The GUM revision: the Bayesian view toward the expression of measurement uncertainty, European Journal of Physics, 37, 2016, 025803. doi: 10.1088/0143-0807/37/2/025803
[5] D. R. White, In pursuit of a fit-for-purpose uncertainty guide, Metrologia, 53, 2016, S107-S124. doi: 10.1088/0026-1394/53/4/s107
[6] A. Possolo, A. L. Pintar, Plurality of Type A evaluations of uncertainty, Metrologia, 54, 2017, pp. 617-632. doi: 10.1088/1681-7575/aa7e4a
[7] A. Possolo, C. Elster, Evaluating the uncertainty of input quantities in measurement models, Metrologia, 51, 2014, pp. 339-353. doi: 10.1088/0026-1394/51/3/339
[8] W. F. Guthie, H.-K. Liu, A. L. Rukhin, B. Toman, C.-M. Jack Wang, N.-F. Zhang, Three statistical paradigms for the assessment and interpretation of measurement uncertainty, in F. Pavese, A. B. Forbes (eds.), Data Modelling for Metrology and Testing in Measurement Science, Boston, Birkhäuser, 2009.
[9] E. Benoit, Expression of uncertainty in fuzzy scales based measurements, Measurement, 46, 2013, pp. 3778-3782. doi: 10.1016/j.measurement.2013.04.006
[10] S. Salicone, M. Prioli, Measuring Uncertainty within the Theory of Evidence, Switzerland, Springer, 2018.
[11] JCGM GUM-6, Developing and Using Measurement Models, 2020.
[12] L. Pendrill, Quality Assured Measurement, Switzerland, Springer, 2019.
[13] L. Mari, M. Wilson, A. Maul, Measurement Across the Sciences, Switzerland, Springer, 2021.
[14] A. Backlund, The definition of system, Kybernetes, 29, 2000, pp. 444-451. doi: 10.1108/03684920010322055
[15] G. B. Rossi, Measurability, Measurement, 40, 2007, pp. 545-562. doi: 10.1016/j.measurement.2007.02.003
[16] G. B. Rossi, Toward an interdisciplinary probabilistic theory of measurement, IEEE Trans. Instrumentation and Measurement, 61, 2012, pp. 2097-2106. doi: 10.1109/tim.2012.2197071
[17] K. D. Sommer, B. R. L. Siebert, Systematic approach to the modelling of measurement for uncertainty evaluation, Metrologia, 43, 2006, S200-S210. doi: 10.1088/1742-6596/13/1/052
[18] A. Giordani, L. Mari, A structural model of direct measurement, Measurement, 145, 2019, pp. 535-550. doi: 10.1016/j.measurement.2019.05.060
[19] R. Z. Morawski, An application oriented mathematical meta model of measurement, Measurement, 46, 2013, pp. 3753-3765. doi: 10.1016/j.measurement.2013.04.004
[20] G. B. Rossi, F. Crenna, A formal theory of the measurement system, Measurement, 116, 2018, pp. 644-651. doi: 10.1016/j.measurement.2017.10.062
[21] G. Rigamonti, Corso di logica, Torino, Boringhieri, 2005. [In Italian]
[22] G. B. Rossi, F. Crenna, A first-order probabilistic logic with application to measurement representations, Measurement, 79, 2016, pp. 251-259. doi: 10.1016/j.measurement.2015.04.024
[23] G. B. Rossi, F. Crenna, Beyond the opposition between the Bayesian and the frequentistic views in measurement, Measurement, 151, 2020, 107157.
doi: 10.1016/j.measurement.2019.107157

Introduction to the Acta IMEKO issue devoted to selected papers presented in the 14th Joint International IMEKO TC1 + TC7 + TC13 Symposium

ACTA IMEKO, August 2013, Volume 2, Number 1, 5-6, www.imeko.org
ACTA IMEKO | www.imeko.org | August 2013 | Volume 2 | Number 1 | 5

Gerhard Linß
Ilmenau University of Technology, Department of Quality Assurance and Industrial Image Processing, Faculty of Mechanical Engineering, Gustav-Kirchhoff-Platz 2, 98693 Ilmenau, Germany

Keywords: IMEKO TC1 + TC7 + TC13 Symposium, intelligent quality measurements, Jena, Germany
Citation: Gerhard Linß, "Introduction to the Acta IMEKO issue devoted to selected papers presented in the 14th Joint International IMEKO TC1 + TC7 + TC13 Symposium", Acta IMEKO, vol. 2, no. 1, article 4, August 2013, identifier: IMEKO-ACTA-02(2013)-01-04
Editors: Paolo Carbone, University of Perugia, Italy; Gerhard Linß, Ilmenau University of Technology, Germany
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Gerhard Linß, e-mail: gerhard.linss@tu-ilmenau.de

1.
Introduction

The 14th Joint International IMEKO TC1 + TC7 + TC13 Symposium, which took place in Jena, Germany, had the title "Intelligent Quality Measurements – Theory, Education and Training". It was intended to reflect innovative solutions for intelligent quality measurements in both theory and application. International researchers from 12 countries presented their exciting work in fundamentals of measurement science, mathematical models in measurement, new education and training methods, and applications for intelligent quality measurements, for measurements in medicine and measurements in biology.

The symposium aimed to bring researchers and developers from various fields together to share their new thoughts, findings and applications. The response from the academic community was great, with more than 70 submissions received. The authors have contributed towards new knowledge and understanding, and have provided research results and applications that will be of important value to researchers, students and industry alike. The involved competence network SpectroNet Green Vision connected specialists for visual quality control with digital image processing and spectral imaging in research and industry, nutrition and health, transportation, environment, security and administration (www.spectronet.de). Additionally, the 56th International Scientific Colloquium, held at the Ilmenau University of Technology from 12th to 16th September 2011, has an unbroken tradition of more than 50 years and is the "flagship" event of the university, having an excellent reputation. In 2011 the International Scientific Colloquium was again organised by the Faculty of Mechanical Engineering. The title of the conference was "Innovation in Mechanical Engineering – Shaping the Future" (www.iwk.tu-ilmenau.de).
we are grateful to all the contributors who presented their valuable work to the research community in jena 2011. the journal papers presented in this acta imeko issue were chosen by the imeko tc1 + tc7 + tc13 board from the papers presented at the 14th imeko tc1 + tc7 + tc13 symposium. after the imeko symposium in jena, more than 17 authors were recommended to provide an updated and extended version of their jena papers for publication in a special issue of the journal measurement or an issue of acta imeko in february 2012. 12 authors accepted this invitation, and papers began arriving in spring 2012. the received extended papers then underwent a normal reviewing process. 14 reviewers were involved in the reviewing process and helped to optimise the final manuscripts with constructive comments and recommendations.

abstract: this editorial article is a brief introduction to the acta imeko issue devoted to selected papers presented in the 14th joint international imeko tc1 + tc7 + tc13 symposium "intelligent quality measurements – theory, education and training". this symposium took place in jena, germany from august 31st to september 2nd 2011 in conjunction with the 56th iwk of the ilmenau university of technology and the 11th spectronet collaboration forum.

as a result, we had seven extended and positively ranked papers, which show significant updates that take into account progress since the symposium submission and the discussions at the symposium in jena. four of the positively ranked papers discuss the role of mathematical models in measurement, and these papers are published in the measurement journal (2013).
the other three positively ranked papers discuss new education and training methods and applications for intelligent quality measurements, and on this thematic basis these papers are published in this issue of acta imeko.

2. about tc1, tc7 and tc13

tc1 is concerned with all matters of education and training of professional scientists and engineers for measurement and instrumentation, including curricula, syllabuses and methods of teaching, as well as the nature and scope of measurement and instrumentation as an academic discipline. tc1 of imeko was established in 1967 (www.imeko.org/tc1). tc7, the committee established in 1973 under the name measurement theory and redesignated in 1993 as measurement science, is concerned with the development of measurement science (www.imeko.org/tc7). tc13 is concerned with the measurement of whole body, organ and cellular function, medical imaging and medical information systems (www.imeko.org/tc13).

3. the journal papers

the three selected journal papers discuss several applications for intelligent quality measurements in highly topical industrial fields, new education and training methods, and the combination of image processing with classical quality assurance methods. the focus of paper [1] lies on how to combine image processing with classical quality assurance methods. two industrial applications were used to describe the problem and to demonstrate the importance of this combination. very often the technical realisation of sensor systems and data processing is completely separated from the quality inspection tasks. special trainings as well as dedicated parts of the lectures were therefore developed and structured to close this known gap. paper [2] discusses two analysis activities in the construction material industry that could be solved by intelligent image processing algorithms, saving time and costs.
one of the tasks was the optical identification of recycled aggregates of construction and demolition waste (cdw) as the basis of an innovative sorting method in the field of cdw processing, and another task was the optical analysis of samples from mineral aggregates. paper [3] discusses new problems of inspection planning arising from the improvement in measurement technology. the paper describes essential demands, ideas and conceptual approaches to multistructured quality inspections. the background is the fact that the development and control of increasingly complex and extensive technical systems leads to increasingly demanding measurement-technology requirements.

4. conclusions

we are grateful to all the contributors who provided their extended papers for this issue of acta imeko and the issue of measurement. it was a great pleasure to act as guest editor for this issue of acta imeko. in particular i must thank the authors and the reviewers for their contributions, evaluations and recommendations, and especially paul regtien for his support and help with the copyediting and publishing process.

references

[1] m. rosenberger, m. schellhorn, g. linß, "new education strategy in quality measurement technique with image processing technologies – chances, applications and realisation", acta imeko, vol. 2 (2013), no. 1, pp. 56-60.
[2] k. anding, d. garten, e. linß, "application of intelligent image processing in the construction material industry", acta imeko, vol. 2 (2013), no. 1, pp. 61-73.
[3] k. weissensee, "new demands on inspection planning and quality testing for micro- and nanostructured components", acta imeko, vol. 2 (2013), no. 1, pp. 74-78.
acta imeko, december 2013, volume 2, number 2, 78 – 85, www.imeko.org

capacitive facial activity measurement

ville rantanen1, pekka kumpulainen1, hanna venesvirta2, jarmo verho1, oleg špakov2, jani lylykangas2, akos vetek3, veikko surakka2, jukka lekkala1

1 sensor technology and biomeasurements, department of automation science and engineering, tampere university of technology, korkeakoulunkatu 3, fi-33720 tampere, finland
2 research group for emotions, sociality, and computing, tampere unit for computer-human interaction, school of information sciences, university of tampere, kanslerinrinne 1, fi-33014 tampere, finland
3 media technologies laboratory, nokia research center, otaniementie 19, fi-02150 espoo, finland

section: research paper

keywords: capacitive measurement, distance measurement, facial activity measurement, facial movement detection, hierarchical clustering, principal component analysis

citation: ville rantanen, pekka kumpulainen, hanna venesvirta, jarmo verho, oleg špakov, jani lylykangas, akos vetek, veikko surakka, jukka lekkala, capacitive facial activity measurement, acta imeko, vol. 2, no. 2, article 14, december 2013, identifier: imeko-acta-02 (2013)-02-14

editor: paolo carbone, university of perugia

received may 31st, 2013; in final form november 20th, 2013; published december 2013

copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: this work was funded by nokia research center, the finnish funding agency for technology and innovation, and the finnish cultural foundation.

corresponding author: ville rantanen, e-mail: ville.rantanen@tut.fi

1. introduction

measuring facial movements has many possible applications.
human-computer and human-technology interaction (hci and hti) can use information of voluntary facial movements for the interaction [1]-[7]. other applications, for example in behavioural science and medicine, can also benefit from the automated analysis of human facial movements and expressions [8]-[15]. in the context of hci, the use of facial movements has been studied for a decade already. the first implementations were based on measuring electromyographic (emg) signals that reflect the electrical activity of the muscles [1]. the measurement system by barreto et al. [1] utilised only bioelectric signals for pointing and selecting objects, but later on the emg measurement was adopted as a method to indicate selections in hci when gaze is used for pointing [2], [3], [4], [6]. recently, a capacitive detection method has been introduced as an alternative to the facial emg measurement [5]. it provides a contactless alternative that measures facial movement instead of the electrical activity of the muscles that the emg measures. studies about pointing and selecting with the capacitive method in combination with head-mounted, video-based gaze tracking have also been published [7], [16], [17], [18]. facial action coding system (facs) is a vision-based method that characterises facial actions based on the activity of different facial muscles [19], [20]. each facial expression has certain activated muscles that can have different levels of contraction. facs and the detection of active muscles have been used as a basis for automatically analysing facial expressions, for example, for the use of behavioural science and medicine [9], [10], [11], [13], [14], [15]. these studies describe automated implementations of facs by using vision-based methods in the analysis.

abstract: a wide range of applications can benefit from the measurement of facial activity. the current study presents a method that can be used to detect and classify the movements of different parts of the face and the expressions the movements form. the method is based on capacitive measurement of facial movements. it uses principal component analysis on the measured data to identify active areas of the face in offline analysis, and hierarchical clustering as a basis for classifying the movements offline and in real-time. experiments involving a set of voluntary facial movements were carried out with 10 participants. the results show that the principal component analysis of the measured data could be applied with almost perfect performance to offline mapping of the vertical location of the facial activity of movements such as raising and lowering eyebrows, opening mouth, raising mouth corners, and lowering mouth corners. the presented classification method also performed very well in classifying the same movements both with the offline and the real-time implementations.

however, facial emg can also register facial actions and provide information that is highly similar to that provided by facs [11]. emg has also been shown to be suitable for measuring emotional reactions from the face [8], long before emg was first applied in the hci context to detect voluntary facial movements. the presented method applies capacitive measurement principles to measure the activity of the face. it has several advantages over the other methods that can be used for the task. compared to emg measurements, the presented method allows the measurement of more channels simultaneously. it is contactless and does not require the attachment of electrodes to the face. attached electrodes significantly limit the maximum number of measurable channels and they may also affect the facial movements that are being targeted with the measurement [11], [21].
when compared to vision-based detection of facial activity, the capacitive method allows easier integration of the measurement into mobile, head-worn equipment, is unaffected by environmental lighting conditions, and can be carried out with computationally less intensive signal processing. for the current study, a wireless, wearable prototype device was constructed, and analysis of data from controlled experiments was done to identify the location of facial activity and to classify it during different voluntary movements. voluntary facial movements have previously been detected by identifying transient peaks from the signals [5]. the presented method provides a more robust way to analyse the facial activity based on multichannel measurements.

2. methods

2.1. capacitive facial movement detection

the method for measuring facial activity is based on the capacitive detection of facial movements that was introduced in [5]. it applies the same measurement principle as capacitive push buttons and touchpads, and a single measurement channel requires only a single electrode that produces an electric field. the produced field can be used to detect conducting objects in its proximity by measuring the capacitance, because the capacitive coupling between the electrode and the object changes as the object moves. in principle, the distance between the target and the electrode is measured.

2.2. prototype device

the wearable measurement prototype device is shown in figure 1. the device was constructed as a headset that should fit most adults. the earmuffs of the headset house the necessary electronics, and the extensions seen in front of the face include the electrodes for the capacitive measurement channels. the device contains 22 electrodes in total, 11 on each side of the face. the top extensions have 4 electrodes each, the middle ones have 3 each, and the lowest ones have 4. the electrodes are printed circuit board pieces with a size of 12 × 20 mm.
they are connected to the measurement electronics with thin coaxial cables that shield the signals. the capacitive measurements are carried out with a programmable controller for capacitance touch sensors (ad7147 by analog devices). the sampling frequency was dictated by technical limitations and was set to the maximum possible, 29 hz. the device is battery-powered, and a bluetooth module (rn-41 by roving networks) provides the connectivity to the computer. the device has the possibility for additional measurements such as inertial measurements via a 3d gyroscope and a 3d accelerometer. the operation of the device is controlled by atmel's atmega168p microcontroller.

2.3. experiments

ten participants (five male and five female, ages 22-33, mean age 27) were briefly trained to perform a set of voluntary facial movements. the participants were chosen to be inexperienced in carrying out the movements to avoid the overly expressive, more easily measured movements that experienced participants might perform. the movements were: lowering the eyebrows, raising the eyebrows, closing the eyes, opening the mouth, raising the mouth corners, lowering the mouth corners, and relaxation of the face. the relaxation was included to help the participant relax during the experiments while doing the other movements. the movements were instructed to be performed according to the guidelines of facs [20]. the participants were instructed to activate only the needed muscles during each of the movements. after a brief practice period and verification that the participant made the movements correctly, the device was worn by the participant as shown in figure 1: the top extensions targeted the eyebrows, the middle ones the cheek areas, and the bottom ones the jaw and mouth area. the distance of each of the measurement electrodes from the facial tissue was adjusted to be as close as possible without the electrodes touching the face during the movements.
this way the distance was approximately 1 cm for all electrodes. in the experiments, synthesized speech was used to give instructions to the participants to perform each individual movement. after putting on the device, two repetitions of each of the movements were carried out in a controlled practice session to familiarise the participants with the experimental procedure. the actual procedure consisted of ten repetitions of each movement carried out in randomised order. participants were given 10 seconds to complete each repetition. a mirror was used throughout the experiments to provide visual feedback of the facial movements to the participants.

figure 1. the wearable measurement device. the numbers represent the extension pieces (top, middle and bottom, on the left and right sides) that house the measurement electrodes. the actual electrode locations are on the pieces facing the face.

2.4. data processing

2.4.1. signal processing principle

figure 2 shows a diagram of the pre-processing that was applied to the signals prior to further data processing. first the capacitive signals were converted to signals proportional to the physical distance between the facial tissue and the measurement electrode. the conversion normalises the sensitivity of the measurement to the distance. the capacitance measurement was modelled with the equation of a parallel plate capacitor:

c = ε a / d , (1)

where ε is the permittivity of the substance between the plates, a the plate area, and d the distance between the plates. one plate is formed by the measurement electrode and the other by the targeted facial tissue. while the surface profile of the facial tissue is often not a plate, each unique profile can be considered to have an equivalent plate such that equation (1) can be applied.
since the relationship between the capacitance and the distance is inversely proportional, the sensitivity of the capacitance to the distance depends on the distance itself. the absolute distance is not of interest, and a measure proportional to the distance can be calculated as

d_p = 1 / c = 1 / (c_s − c_b) , (2)

where c_s is the measured capacitance value and c_b is the base level of the capacitance channel. each channel has a unique base level that is affected by the length of the electrode cable and the surroundings of the electrode determined by its position on the extension. for the conversion, the base levels of all the capacitance channels were measured when the measurement electrodes were directed away from conducting objects. smoothing and baseline removal were applied to the distance signals computed with equation (2). these two steps were different when locating the facial activity and when classifying it. the differences are explained below in the corresponding sections. after processing the signals, only the first 4.5 seconds of the signals during each repetition of the movements were considered when calculating the results. the remaining 5.5 seconds of each 10-second repetition were excluded from further analysis because all the participants had already finished the instructed movements by then, and they sometimes carried out other movements to relax during that remaining time.

2.4.2. locating facial activity

the smoothing applied to the distance signals when locating the facial activity was done with a moving median filter with a length of 35 samples (approximately 1.2 seconds). this was done to remove noise. further, the baselines of the signals were removed by subtracting the signal means during each repetition of the instructed movements. the baseline removal normalises the signal sequences so that they represent the relative changes in the physical distance.
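the conversion and pre-processing steps described above can be sketched as follows. this is an illustrative python reconstruction, not the authors' code: it implements equation (2), the 35-sample moving median used when locating activity, and the per-repetition mean subtraction; the function names are ours.

```python
import numpy as np


def capacitance_to_distance(c_raw, c_base):
    """Convert raw capacitance samples to a measure proportional to the
    electrode-tissue distance: d_p = 1 / (c_s - c_b)  (equation 2)."""
    return 1.0 / (np.asarray(c_raw, dtype=float) - c_base)


def moving_median(x, window=35):
    """Moving median smoothing (35 samples, ~1.2 s at 29 Hz), edge-padded."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])


def remove_baseline(x):
    """Subtract the sequence mean so the signal represents relative
    changes in the physical distance during one repetition."""
    return x - np.mean(x)
```

in use, each channel of a repetition would be converted with the channel's pre-measured base level, smoothed, and mean-centred before the localisation analysis.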
principal component analysis (pca) was carried out on the processed signal sequences to find out the locations of the facial activity during the instructed facial movements. pca is a linear method which transforms a data matrix with multiple variables, measurement channels in this case, to a set of uncorrelated components [22]. these principal components are linear combinations of the original variables and they describe the major variations in the data. pca decomposes the data matrix x, which has m measurements and n channels, as the sum of the outer products of vectors t_i and p_i and a residual matrix e:

x = t_1 p_1^T + t_2 p_2^T + … + t_k p_k^T + e , (3)

where k is the number of principal components used. if all possible components are used, the residual reduces to zero. the vectors t_i are called scores, and the p_i are eigenvectors of the covariance matrix of x and are called loadings. the principal components in equation (3) are ordered according to the corresponding eigenvalues. to localise facial activity, the first principal component and its loadings were considered. the first principal component describes the major data variations, and, thus, the location of the most significant facial activity can be identified by analysing the loadings of the corresponding measurement channels. for the analysis, the loadings were normalised by dividing their absolute values by the sum of the absolute values of all channels. as a result, the sum of the normalised values is equal to 1. to present the results, the vertical location of each repetition of the movements was mapped to the part of the face that introduced two of the three most significant relative loadings of the first principal component (m-out-of-n detector).
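the localisation procedure can be sketched as follows; an illustrative python reconstruction (not the authors' implementation, and the function names are ours) of the first-component loadings, their normalisation, and the two-out-of-three mapping rule described above.

```python
import numpy as np


def first_pc_loadings(X):
    """Loadings p_1 of the first principal component of the data matrix X
    (m measurements x n channels), obtained via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centred data are the eigenvectors
    # (loadings) of the covariance matrix, ordered by eigenvalue
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]


def normalised_loadings(p1):
    """Normalise the absolute loadings so that they sum to 1."""
    a = np.abs(p1)
    return a / a.sum()


def map_vertical_location(rel_loadings, channel_region):
    """m-out-of-n mapping: return the facial region that contains at least
    two of the three largest relative loadings, or None if no region does.
    `channel_region` maps channel index -> region name ('top', ...)."""
    top3 = np.argsort(rel_loadings)[-3:]
    names = [channel_region[i] for i in top3]
    for name in set(names):
        if names.count(name) >= 2:
            return name
    return None
```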
for calculating the percentages of successful mappings, the correct source of activity was considered to be the top extension channels for the lowering eyebrows, raising eyebrows and closing eyes movements, the bottom extension channels for the opening mouth and lowering mouth corners movements, and the middle or bottom extension channels for the raising mouth corners movement. median loadings of the 10 repetitions of each movement were calculated for each participant and channel separately to verify the decisions about the correct sources of activity.

2.4.3. classifying facial activity

smoothing causes a delay. therefore, distance signals without smoothing were used when classifying the facial activity. baseline removal was carried out on the distance signals directly.

figure 2. a block diagram of the signal processing: raw signal → conversion to distance → smoothing filter → baseline removal → processed signal.

figure 3 presents the algorithm used for solving the baseline for its removal. the baseline calculation was based on a median filter. the median can perform well in this task because the signals were expected to have longer baseline sequences than the ones resulting from facial activity. the median filter applies a logic that selects only part of the samples as baseline points for the median calculation. the selection is based on a constant false alarm rate (cfar) processor that calculates an adaptive threshold based on the noise characteristics of the processed signal [23], [24]. the distance signal was first pre-processed with a filter that implements a differentiator, a single-pole low-pass filter with a time constant of 20 ms, and a full-wave rectifier. this makes the input suitable for the cfar processor. the current sample is used as the test sample for the processor.
the implemented version of the processor uses samples before the test sample, referred to as reference samples, for calculating the threshold. the processor also leaves out the samples closest to the test sample as guard samples to reduce the information overlap between the test and reference samples. samples closer than 1 second to the test sample were considered guard samples, and the preceding 14 seconds were considered as the reference samples. the mean of the reference samples was then calculated and multiplied by a sensitivity parameter to obtain the adaptive threshold. the sensitivity parameter was chosen to be 0.5 in this case. the threshold and the pre-processed test sample were then fed to a comparator to determine whether the test sample stayed below the threshold. the respective samples of the input signal were included in the median calculation by the selective median filter, which had a length of 15 seconds. finally, the baseline is calculated from the median-filtered signal with a 2-second moving average filter to smooth step-wise transitions in the baseline level.

a method to classify facial movements based on the processed multichannel data was implemented. the classification method was based on hierarchical clustering. it used ward's linkage, which forms clusters by minimising the increase of the total within-cluster variance about the cluster centre [25], [26]. a fixed number of 14 clusters was chosen for the clustering based on the different events that the data represent (6 movements and the baseline). this selection allows 2 clusters for each event on average, which allows some deviation of the data between repetitions of the same movement and some elongation of the data points during a movement, because ward's method is known not to be good at handling elongated clusters and outliers [26]. the work-flow of the classification is presented in figure 4.
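the cfar-gated baseline estimation described above can be sketched as follows. this is an illustrative python reconstruction, not the authors' implementation; in particular, the discretisation of the 20 ms single-pole low-pass filter, the start-up handling before enough reference samples exist, and the zero-padded edges of the final moving average are our assumptions.

```python
import numpy as np

FS = 29  # Hz, the sampling rate reported in the paper


def estimate_baseline(x, sens=0.5, guard=1 * FS, ref=14 * FS,
                      med_len=15 * FS, avg_len=2 * FS):
    """Sketch of the selective-median baseline estimator with a CFAR gate."""
    x = np.asarray(x, dtype=float)
    # pre-processing: differentiator, 20 ms single-pole low-pass, rectifier
    dx = np.diff(x, prepend=x[0])
    alpha = 1.0 / (1.0 + 0.020 * FS)       # assumed pole discretisation
    lp = np.zeros_like(dx)
    for i in range(1, len(dx)):
        lp[i] = lp[i - 1] + alpha * (dx[i] - lp[i - 1])
    pre = np.abs(lp)

    selected = []                           # samples accepted as baseline points
    base = np.zeros_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - guard - ref), max(0, i - guard)
        # adaptive threshold: sensitivity times the reference-window mean
        thr = sens * pre[lo:hi].mean() if hi > lo else np.inf
        if pre[i] <= thr:                   # quiet sample: include in median
            selected.append(x[i])
        window = selected[-med_len:] if selected else [x[i]]
        base[i] = np.median(window)

    # 2-second moving average smooths step-wise baseline transitions
    kernel = np.ones(avg_len) / avg_len
    return np.convolve(base, kernel, mode="same")
```

subtracting the returned baseline from the distance signal yields the input used by the classification step.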
the clustering first takes multichannel data with signal baselines removed and the labels of the data (information about the instructed movements) as an input. the data are first clustered and then cross tabulated against their 6 possible labels. based on the tabulation, the clusters are identified so that first the clusters that represent the baseline data are identified. a cluster is identified as a baseline cluster if it contains data points with at least 5 different labels from the 6 possible (m-out-of-n detector). other clusters are identified based on the label that has the largest number of samples in the cluster. in the offline classification, the data are then classified to represent the movement its cluster was identified with. a real-time classification can further be made based on previously identified clusters. first, the cluster centre points are calculated. then each new data sample is classified to represent the movement that the cluster nearest to it was identified with. thus, the real-time classification only requires the calculation of euclidean distances to the cluster centres for each new data sample. all the collected data were included in the offline classification. the real-time implementation of the classification was evaluated so that a randomly chosen repetition of each movement was included in the identification of the clusters, and the remaining 9 repetitions were used as test data to evaluate the performance of the method. to present the results of the classification, the percentages of the data points that were classified as baseline were calculated. a high percentage would indicate problems in separating the movements from the baseline. from the data points that were not classified as the baseline, the percentages of correctly classified ones were calculated. data points were considered to be correctly classified if they were classified as the movement that the participant was at that time instructed to perform. figure 3. 
a block diagram of the baseline calculation for the baseline removal when classifying facial activity: input → pre-processor filter → comparator (fed by the cfar processor) → selective median filter → smoothing filter → baseline.

figure 4. a block diagram of the classification of the facial movements based on the data: the labelled multichannel data are clustered hierarchically and cross-tabulated, the baseline and other clusters are identified, and for the real-time path the cluster centre points are calculated and each new sample is assigned to the nearest cluster, yielding offline and real-time classified data.

3. results

figures 5-8 show examples of the signals the measurement channels registered during the experiments and how the conversion from capacitance signals to ones that are proportional to the distance normalises the signals.

3.1. locating facial activity

examples of the detected facial activity presented as the loadings of the first principal component are shown in figures 9 and 10. the performance of locating the different movements based on principal component analysis is presented in table 1. out of the 6 included voluntary movements, opening mouth and raising mouth corners are located correctly in all the repetitions with all participants. lowering eyebrows and raising eyebrows are located correctly in almost all the repetitions with all participants; only a single repetition of each is incorrectly located. lowering mouth corners is correctly located except for 4 repetitions with a single participant. closing eyes has a limited success rate in the mapping with 7 out of 10 participants. the locations of the three measurement channels that registered the most significant activity during the experiments figure 5. raw capacitance signals from the 10 repetitions of the raising eyebrows movement with one participant. the different sides of the face are represented on the left and on the right.
the top, middle, and bottom graphs represent the measurements from the corresponding extensions. the colours represent the different channels as shown in figure 1: red, green, blue, and grey starting from the centre of the face. signal baselines are aligned for the illustration.

figure 6. signals after the conversion to distance signals from the 10 repetitions of the raising eyebrows movement with one participant. signal baselines are aligned for the illustration.

figure 7. raw capacitance signals from the 10 repetitions of the opening mouth movement with one participant.

figure 8. signals after the conversion to distance signals from the 10 repetitions of the opening mouth movement with one participant.

figure 9. the facial activity as represented by the loadings of the first principal component during the raising eyebrows movement with one participant. each graph represents the loadings of the 10 repetitions from the measurement channel of the corresponding physical location.

figure 10. the facial activity as represented by the loadings of the first principal component during the opening mouth movement with one participant.

table 1. the percentages of successful mapping of the vertical location of the movements. the last row shows the means and standard deviations for the movements.

participant | lowering eyebrows | raising eyebrows | closing eyes | opening mouth | raising mouth corners | lowering mouth corners
1 | 100 | 90 | 50 | 100 | 100 | 100
2 | 100 | 100 | 60 | 100 | 100 | 100
3 | 100 | 100 | 90 | 100 | 100 | 100
4 | 100 | 100 | 100 | 100 | 100 | 100
5 | 100 | 100 | 30 | 100 | 100 | 100
6 | 100 | 100 | 100 | 100 | 100 | 100
7 | 100 | 100 | 40 | 100 | 100 | 100
8 | 90 | 100 | 80 | 100 | 100 | 100
9 | 100 | 100 | 100 | 100 | 100 | 60
10 | 100 | 100 | 80 | 100 | 100 | 100
mean | 99 ± 3 | 99 ± 3 | 73 ± 26 | 100 ± 0 | 100 ± 0 | 96 ± 13
according to the median loadings verified that the decisions regarding the correct sources of activity were justified when judged by the used order statistic, the median. the three most significant channels included incorrect locations only with one participant during the raising eyebrows movement and with 4 participants during the closing eyes movement.

3.2. classifying facial activity

examples of classified data are shown in figures 11 and 12. table 2 shows the percentages of samples that were classified as baseline. a paired t-test (significance level 0.05) did not reveal statistically significant differences between the percentages of the offline and the real-time classification. in the case of the closing eyes movement, the percentages show that the movement could not be classified as a movement but was classified as the baseline. the percentages for the other movements reflect the durations of the movements, because the participants were not given any instructions about how long to hold them. the results of the offline and real-time classification methods are shown in tables 3 and 4, and there are no statistically significant differences between the methods according to a paired t-test (significance level 0.05).

4. discussion

the facial activity was mostly correctly located, but in a limited number of cases locating gave incorrect results. this can be a result of several factors. firstly, the participants could not always carry out the movements exactly as instructed, but some unintentional activity of other muscles was included. secondly, the measurement and the applied data processing are both slightly sensitive to the movement of the prototype device on the head. this may result in false detection of activity when the device moves instead of the facial tissue.
Thirdly, including only one principal component may limit the performance when locating the activity. The amount of variance explained by the first principal component was not analysed, but it could be used to provide an estimate of the certainty in locating the activity. More principal components could be added to the analysis to reduce the uncertainty. Finally, the mentioned error sources are all affected by the noise in the measurement. The noise depends on the distance of the measurement electrodes from the target. The current implementation normalises the signal levels, but the normalisation also scales the noise, so that measurements with the facial tissue further away from the measurement electrode include more noise than when the tissue is closer. The smoothing could be considered more carefully to find the most suitable method for noise removal in this case. While the discussed factors may all affect the performance, the reason for the limited performance with the closing eyes movement can be considered to be the small movement that it causes to the facial tissue at the measured locations.

It should be noted that the presented method for locating the activity only implements a rough mapping of the simple movements. Since the exact locations of facial movements during a certain muscle activation vary between individuals, determining the precise location of the movements may not even provide additional value without first characterising the individual's facial behaviour. Thus, the classification was introduced to differentiate between the movements, and it could also be applied to more complex expressions. The classification was based on using hierarchical clustering to identify clusters formed from the measured data.

Table 2. The average percentages and standard deviations of data points that were classified as baseline in the offline and real-time implementations of the classification. The number of samples is 1310 and 1179 for the two implementations, respectively.

              Lowering    Raising     Closing    Opening    Raising mouth   Lowering mouth
              eyebrows    eyebrows    eyes       mouth      corners         corners
Offline       48 ± 18     50 ± 19     99 ± 2     34 ± 15    41 ± 17         34 ± 12
Real-time     53 ± 14     58 ± 14     99 ± 1     39 ± 15    49 ± 21         41 ± 12

Table 3. The percentages and standard deviations of correctly classified data points in the offline classification. The dashes mean that all the samples were classified as baseline.

Participant   Lowering    Raising     Closing    Opening    Raising mouth   Lowering mouth
              eyebrows    eyebrows    eyes       mouth      corners         corners
1             98          100         –          79         99              95
2             72          100         0          70         100             98
3             85          100         –          97         36              100
4             100         100         –          91         100             90
5             100         100         –          100        100             100
6             96          100         0          81         95              100
7             93          100         0          88         96              100
8             100         75          –          100        100             100
9             100         100         –          100        90              100
10            100         100         –          100        100             100
Mean          94 ± 9      98 ± 8      0 ± 0      91 ± 11    91 ± 20         98 ± 3

Table 4. The percentages and standard deviations of correctly classified data points in the real-time implementation of the classification.

Participant   Lowering    Raising     Closing    Opening    Raising mouth   Lowering mouth
              eyebrows    eyebrows    eyes       mouth      corners         corners
1             95          100         –          84         94              98
2             100         100         –          100        100             100
3             87          100         –          60         83              100
4             100         100         –          99         100             87
5             100         100         0          100        98              100
6             94          100         0          92         78              100
7             94          100         –          78         98              100
8             96          83          –          90         100             100
9             98          100         0          98         89              100
10            100         100         –          100        100             100
Mean          96 ± 4      98 ± 5      0 ± 0      90 ± 13    94 ± 8          99 ± 4

Figure 11. Classified data points after the baseline removal from the 10 repetitions of the raising eyebrows movement with one participant. The data points that were classified as baseline are black, and the correctly classified data points are shown in colour.

Figure 12. Classified data points after the baseline removal from the 10 repetitions of the opening mouth movement with one participant.
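The clustering-based classification described above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the offline step uses a naive centroid-linkage agglomeration (the paper uses hierarchical clustering; Ward's method would additionally weight merges by cluster size), and the real-time step is a nearest-centroid assignment, matching the description that only distances between points need to be computed once the clusters have been identified offline:

```python
import numpy as np

def agglomerate(points, n_clusters):
    """Offline step: naive agglomerative clustering (centroid linkage).
    Repeatedly merges the two clusters with the closest centroids."""
    points = np.asarray(points, float)
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        cents = [points[c].mean(axis=0) for c in clusters]
        best, pair = None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(cents[i] - cents[j])
                if best is None or d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)       # merge the closest pair
    return [points[c].mean(axis=0) for c in clusters]

def classify(sample, centroids):
    """Real-time step: assign an incoming data point to the nearest cluster
    (one cluster would represent the baseline)."""
    d = [np.linalg.norm(sample - c) for c in centroids]
    return int(np.argmin(d))
```

Because the real-time step is only a handful of distance computations per sample, it imposes no windowing delay, in contrast to computing principal components over a sample window.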
Applying principal component analysis in real time for the task was also considered. However, as a statistical method, it requires numerous samples to compute the principal components reliably, which causes delays dependent on the chosen window length. The processing of the implemented classification, in contrast, does not impose additional delays, since it only requires the calculation of distances between points once the clusters have been identified offline. The percentages of data points that were classified as baseline show that the closing eyes movement is problematic in the classification as well. The data points during that movement can be expected to lie close to the baseline, if they are visible in the data at all. The example graphs of the classified data points (Figures 11 and 12) show the changes from the baseline that are required for the classification to identify a data point as something other than baseline. The graphs also show that the delay for this is acceptable, even though the absolute delay cannot be calculated because no information about the true onset of the movements was extracted in this study. The performances in classifying the data points correctly during the different movements show that the offline and real-time versions both perform very well. This is a good result, as the real-time version used data from only a single repetition of each movement for identifying the clusters, compared to all 10 repetitions in the offline version. Incorrectly performed movements, movement of the device on the head, and noise are also possible error sources in the classification. In addition, the transition phases at the beginnings and ends of the movements, when the data points are close to the baseline, can be expected to be more susceptible to incorrect classification. The number of clusters chosen for the classification obviously affects how many movements, and variations of the movements, can be distinguished from one another.
In this study, the number of clusters was chosen to be relatively small, based on the number of included movements. The identification of the clusters used, as the label for each data point, the movement that the participant had been instructed to perform. Selecting a larger number of clusters would make it possible to identify variations of the movements, but it would also require more information for the labelling. One alternative would be to inspect video recordings visually to provide the labels. This could be done after the clustering, labelling each cluster rather than each data point one by one. This study only considered simple voluntary facial movements. Since complex facial expressions, even the spontaneous ones related to emotions, are formed by combinations of simple movements, they can be expected to be classified in the same way and as easily as the simple movements; they will simply span a different volume in the multidimensional space of the measured data points. However, the movement ranges of the facial tissue during spontaneous expressions are often more limited than in the simple movements of this study, which may introduce challenges in classifying some of the expressions.

5. Conclusions

A new method for mobile, head-worn facial activity measurement and classification was presented. The capacitive method and the prototype constructed for studying it were shown to perform well both in locating different voluntary facial movements to the correct areas on the face and in classifying the movements. Locating the movements with principal component analysis does not require a calibration of the measurement for the user, and the presented classification required only one repetition of each movement for identifying the movements before the classification could be carried out in real time.
The presented facial activity measurement method has clear benefits when compared to the computationally more intensive vision-based methods and to EMG, which requires the attachment of electrodes to the face. Future research on the method should include verifying that the classification works with more complex expressions, i.e. with combinations of activity at different locations on the face. Furthermore, determining the intensity level of the activity of different facial areas could provide additional information. It could be studied how different activation levels can be distinguished from one another with the presented method, and whether even the smallest facial muscle activations can be distinguished.

Acknowledgement

The authors would like to thank Nokia Research Center and the Finnish Funding Agency for Technology and Innovation (Tekes) for funding the research. Ville Rantanen would like to thank the Finnish Cultural Foundation for personal funding of his work, the Nokia Foundation for its support, and IMEKO for the György Striker Junior Paper Award received at the XX IMEKO World Congress.

References
[1] A. B. Barreto, S. D. Scargle, and M. Adjouadi, "A practical EMG-based human-computer interface for users with motor disabilities", Journal of Rehabilitation Research & Development 37 (2000), pp. 53–64.
[2] T. Partala, A. Aula, and V. Surakka, "Combined voluntary gaze direction and facial muscle activity as a new pointing technique", Proc. of IFIP INTERACT'01, Tokyo, Japan, July 2001, pp. 100–107.
[3] V. Surakka, M. Illi, and P. Isokoski, "Gazing and frowning as a new human-computer interaction technique", ACM Transactions on Applied Perception 1 (2004), pp. 40–56.
[4] C. A. Chin, A. Barreto, J. G. Cremades, and M. Adjouadi, "Integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities", Journal of Rehabilitation Research & Development 45 (1) (2009), pp. 161–174.
[5] V. Rantanen, P.-H. Niemenlehto, J. Verho, and J. Lekkala, "Capacitive facial movement detection for human-computer interaction to click by frowning and lifting eyebrows", Medical and Biological Engineering and Computing 48 (2010), pp. 39–47.
[6] J. Navallas, M. Ariz, A. Villanueva, J. San Agustin, R. Cabeza, "Optimizing interoperability between video-oculographic and electromyographic systems", Journal of Rehabilitation Research & Development 48 (3) (2011), pp. 253–266.
[7] O. Tuisku, V. Surakka, T. Vanhala, V. Rantanen, and J. Lekkala, "Wireless Face Interface: using voluntary gaze direction and facial muscle activations for human-computer interaction", Interacting with Computers 24 (2012), pp. 1–9.
[8] U. Dimberg, "Facial electromyography and emotional reactions", Psychophysiology 27 (5) (1990), pp. 481–494.
[9] M. Pantic and L. J. Rothkrantz, "Automatic analysis of facial expressions: the state of the art", IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2000), pp. 1424–1445.
[10] B. Fasel, J. Luettin, "Automatic facial expression analysis: a survey", Pattern Recognition 36 (1) (2003), pp. 259–275.
[11] J. F. Cohn, P. Ekman, "Measuring facial action", in: J. A. Harrigan, R. Rosenthal, K. R. Scherer (eds.), The New Handbook of Methods in Nonverbal Behavior Research, Oxford University Press, Oxford, UK, 2005, ch. 2, pp. 9–64.
[12] E. L. Rosenberg, "Introduction", in: P. Ekman, E. L. Rosenberg (eds.), What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2nd edition, Oxford University Press, New York, NY, USA, 2005, pp. 3–18.
[13] J. F. Cohn, A. J. Zlochower, J. Lien, T. Kanade, "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding", in: P. Ekman, E. L.
Rosenberg (eds.), What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2nd edition, Oxford University Press, New York, NY, USA, 2005, ch. 17, pp. 371–392.
[14] M. Pantic, M. S. Bartlett, "Machine analysis of facial expressions", in: K. Delac, M. Grgic (eds.), Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007, pp. 377–416.
[15] J. Hamm, C. G. Kohler, R. C. Gur, and R. Verma, "Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders", Journal of Neuroscience Methods 200 (2011), pp. 237–256.
[16] V. Rantanen, T. Vanhala, O. Tuisku, P.-H. Niemenlehto, J. Verho, V. Surakka, M. Juhola, and J. Lekkala, "A wearable, wireless gaze tracker with integrated selection command source for human-computer interaction", IEEE Transactions on Information Technology in Biomedicine 15 (2011), pp. 795–801.
[17] O. Tuisku, V. Surakka, Y. Gizatdinova, T. Vanhala, V. Rantanen, J. Verho, and J. Lekkala, "Gazing and frowning to computers can be enjoyable", Proc. of the Third International Conference on Knowledge and Systems Engineering (KSE), Hanoi, Vietnam, Oct. 2011, pp. 211–218.
[18] V. Rantanen, O. Tuisku, J. Verho, T. Vanhala, V. Surakka, and J. Lekkala, "The effect of clicking by smiling on the accuracy of head-mounted gaze tracking", Proc. of the Symposium on Eye-Tracking Research & Applications (ETRA '12), Santa Barbara, CA, USA, March 2012, pp. 345–348.
[19] P. Ekman and W. V. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, CA, USA, 1978.
[20] P. Ekman, W. V. Friesen, and J. C. Hager, Facial Action Coding System: The Manual, A Human Face, Salt Lake City, UT, USA, 2002.
[21] A. J. Fridlund and J. T. Cacioppo, "Guidelines for human electromyographic research", Psychophysiology 23 (5) (1986), pp. 567–589.
[22] J. E.
Jackson, A User's Guide to Principal Components, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1991.
[23] M. I. Skolnik, Introduction to Radar Systems, 3rd edition, McGraw-Hill, New York, NY, USA, 2001.
[24] P.-H. Niemenlehto, "Constant false alarm rate detection of saccadic eye movements in electro-oculography", Computer Methods and Programs in Biomedicine 96 (2) (2009), pp. 158–171.
[25] J. H. Ward, "Hierarchical grouping to optimize an objective function", Journal of the American Statistical Association 58 (301) (1963), pp. 236–244.
[26] E. Rasmussen, "Clustering algorithms", in: W. B. Frakes, R. Baeza-Yates (eds.), Information Retrieval: Data Structures and Algorithms, 1st edition, Prentice Hall, Upper Saddle River, NJ, USA, 1992.

ACTA IMEKO
ISSN: 2221-870X
December 2022, Volume 11, Number 4, 1-7

Position control for the MSL Kibble balance coil using a syringe pump

Rebecca J. Hawke1, Mark T. Clarkson1
1 Measurement Standards Laboratory (MSL), Lower Hutt, New Zealand

Section: Research Paper
Keywords: Kibble balance; pressure balance; position control; volume control
Citation: Rebecca J. Hawke, Mark T. Clarkson, Position control for the MSL Kibble balance coil using a syringe pump, Acta IMEKO, vol. 11, no. 4, article 16, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-16
Section Editor: Andy Knott, National Physical Laboratory, United Kingdom
Received July 11, 2022; in final form October 5, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the New Zealand Government.
Corresponding author: Rebecca Hawke, e-mail: rebecca.hawke@measurement.govt.nz

Abstract
The position of the coil in a Kibble balance must be finely controlled. In weighing mode, the coil remains stationary in a location of constant magnetic field. In calibration mode, the coil is moved in the magnetic field to induce a voltage. The MSL Kibble balance design is based on a twin pressure balance in which the coil is attached to the piston of one of the pressure balances. Here we investigate how the piston (and therefore coil) position may be controlled through careful manipulation of the gas column under the piston. We demonstrate the use of a syringe pump as a programmable volume regulator which can provide fall rate compensation as well as controlled motion of the piston. We show that the damped harmonic oscillator response of the pressure balance must be considered when moving the coil. From this initial investigation, we discuss the implications for use in the MSL Kibble balance.

1. Introduction

Following the revision of the SI, Kibble balances around the world may now be used to realise the unit of mass [1]. In a Kibble balance, the weight of a mass is balanced by the electromagnetic force on a current-carrying coil of wire suspended in a magnetic field. At MSL we are developing a Kibble balance in which the coil is connected to the piston of pressure balance 1 (see Figure 1) in a twin pressure balance arrangement [2], [3]. The piston-cylinder unit of pressure balance 1 provides a repeatable axis for the motion of the coil in the magnetic field, and the twin pressure balance arrangement serves as a high-sensitivity force comparator [4].

Figure 1. Schematic of the MSL Kibble balance design based on a twin pressure balance. The coil connects to the piston of pressure balance 1 such that the coil and magnet are coaxial with the piston-cylinder unit. Pressure balance 2 provides a reference pressure for a differential pressure sensor to determine changes in balance forces.

In a Kibble balance, the position of the coil must be precisely controlled in both weighing and calibration modes. In weighing mode the coil should remain stationary at a set position while the current is adjusted to maintain a force balance. In the MSL Kibble balance, stability in position to within 1 µm would correspond to an uncertainty in realised mass of 3.5 parts in 10^9 [5]. In calibration mode, the coil must be moved such that a measurable voltage is induced, which typically requires velocities between 1.3 mm s-1 and 3 mm s-1 [6]. In the MSL Kibble balance, control of the vertical position of the coil equates to control of the piston position in pressure balance 1. However, in a pressure balance the piston naturally falls as gas leaks through the annular gap between the piston and the cylinder. Fall rate compensation is therefore necessary in both weighing and calibration modes in the absence of mechanical controls such as arresting stops or a direct motor drive.

To assist in control of the coil position in the MSL Kibble balance, we propose careful manipulation of the gas column under the piston in pressure balance 1. The pressure balance maintains a constant pressure, so our options are to adjust the gas volume (e.g. with a mass flow controller) or to shift the gas column in space (e.g. with a volume regulator). To test this second approach to coil position control, we investigate the use of a syringe pump as an automated volume regulator.

The layout of this paper is as follows. In Section 2 we describe the experimental apparatus, and in Section 3 we propose a theoretical model for the system.
Results for fall rate compensation, conventional constant velocity travel, and an oscillatory motion are presented in Section 4. Finally, in Section 5 we discuss the potential for this technique to be used in the MSL Kibble balance.

2. Experimental apparatus

The experimental apparatus is illustrated in Figure 2 and was similar to that used in [8]. The pressure balance was a pneumatic DHI/Fluke 50 kPa/kg piston-cylinder module with an effective area of 196 mm². The medium was zero grade nitrogen gas, and we adjusted the load to give a working pressure near 100 kPa absolute. We used a manual volume controller to set the initial height of the piston. The pressure balance was operated in a vacuum chamber evacuated to a pressure of around 0.1 Pa. Ambient temperature outside the chamber was in the range 21 – 23 °C during measurements. To control the vertical position of the piston, we used a custom 'direct drive' model of the Cetoni Nemesys S syringe pump. This pump has inbuilt position encoding. Cetoni high-precision glass syringes of 1 ml, 5 ml and/or 25 ml capacity were connected to the pressure balance via 1 m of 1/8" flexible PTFE tubing (1.6 mm ID) and a minimal length of ¼" Swagelok stainless steel tubing and fittings. We estimate the smallest volume of the gas column was ~30 ml, including the cavity within the piston. To enable rapid, independent tracking of the syringe volume, we monitored the position of the syringe plunger using an IDS uEye USB camera with a microscope lens attached. We converted the image pixels to volume in ml for each syringe using the encoder values at eight positions spanning the range of travel. To determine the phase shift, or delay, between syringe plunger motion and piston motion we used hardware triggering of the image capture and piston position measurements. However, triggering increased the measurement noise, so the data shown here were captured using the camera's free-run mode.
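The dependence of the resonant frequency on the gas-column volume (used in Section 3) can be estimated by treating the gas column as an adiabatic spring of stiffness k = γPA²/V. This back-of-envelope model, and the ~2 kg total floating mass, are illustrative assumptions and not values from the paper:

```python
import math

# Adiabatic gas-spring estimate (assumed model, not from the paper):
gamma = 1.4      # ratio of specific heats for nitrogen
P = 100e3        # working pressure, Pa (Section 2)
A = 196e-6       # effective area, m^2 (196 mm^2, Section 2)
V = 30e-6        # smallest gas-column volume, m^3 (~30 ml, Section 2)
m = 2.0          # assumed total floating mass, kg (hypothetical)

k = gamma * P * A**2 / V                  # pneumatic spring stiffness, N/m
f0 = math.sqrt(k / m) / (2 * math.pi)     # resonant frequency, Hz
print(round(f0, 2))                       # ~1.5 Hz, same order as observed
```

The estimate lands at the same order of magnitude as the ~0.5 Hz to 1.2 Hz range reported below, and shows the expected trend: a larger connected volume softens the gas spring and lowers the resonant frequency.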
We measured the piston position using both a capacitance sensor and a linear variable differential transducer (LVDT), with hardware triggering at either 20 ms or 50 ms intervals. We determined the calibration curve for the capacitance sensor using a dial gauge and gauge blocks and transferred the calibrated position to the LVDT readings. The results presented use the LVDT readings.

3. Model

A gas pressure balance comprises a loaded piston floating on a volume of gas and behaves as a damped harmonic oscillator [8], [9]. A pressure balance connected to a syringe has a mechanical analogue in the accelerometer (or seismometer). Figure 3 illustrates this model, where the loaded piston 'mass' is attached to a syringe plunger 'platform' by a gas pressure 'spring', with 'dashpot' damping. Moving the platform moves the mass in a linear motion with coupled oscillations described by the equation of motion

-\ddot{z}' = \ddot{z}_r + \frac{c}{m}\dot{z}_r + \omega_0^2 z_r ,   (1)

where z_r is the relative distance between mass m and platform, z' is the displacement of the platform, c is the damping coefficient, and \omega_0 is the resonant frequency. Dots indicate derivatives with respect to time. In a pressure balance, the resonant frequency depends on the volume of the gas column under the piston. In the experimental configuration here, the resonant frequency can be varied from ~0.5 Hz to 1.2 Hz. Both the resonant frequency and the ratio c/m may be obtained from the damped natural oscillations of the system.

Figure 2. Schematic of the experimental apparatus for piston position control using a syringe in a syringe pump as a volume regulator. Floating elements are coloured orange. Not shown are the drive mechanism for starting piston rotation (see [7]), the support for the LVDT, and the vacuum chamber surrounding the pressure balance.

Figure 3. Accelerometer model of a pressure balance when connected to a syringe in a syringe pump.
The loaded piston 'mass' is attached to a syringe plunger 'platform' by a gas pressure 'spring', with 'dashpot' damping.

This system can be driven with a sinusoidal motion of the platform. For a driving displacement z'(t) = A_0 \cos(\omega t) with amplitude A_0 and frequency \omega, the equation of motion has a solution of the form

z_r(t) = A_{el} \cos(\omega t) + A_{inel} \sin(\omega t) .   (2)

Solving for the coefficients gives the elastic coefficient,

A_{el} = \frac{A_0 \omega^2 (\omega_0^2 - \omega^2)}{(\omega_0^2 - \omega^2)^2 + \left(\frac{c}{m}\omega\right)^2} ,   (3)

and the inelastic coefficient,

A_{inel} = \frac{A_0 \frac{c}{m} \omega^3}{(\omega_0^2 - \omega^2)^2 + \left(\frac{c}{m}\omega\right)^2} .   (4)

The behaviour of the driven system can be understood by examining these coefficients at their limits. As the driving frequency \omega tends to 0, the amplitude of z_r tends to 0, and the motion of the mass, z, follows the motion of the platform:

z(t) = z'(t) + z_r(t) = A_0 \cos(\omega t) .   (5)

As the driving frequency tends to infinity, A_{el} tends to -A_0 and A_{inel} tends to 0, such that the mass remains stationary despite the motion of the platform:

z(t) = z'(t) + z_r(t) = A_0 \cos(\omega t) - A_0 \cos(\omega t) = 0 .   (6)

In between these limits, for c/m < \omega_0 the amplitude of z_r reaches a maximum as the driving frequency approaches the resonant frequency. At resonance, A_{el} tends to 0 and the motion of the mass is approximately 90° behind the driving oscillation z':

z(t) = z'(t) + z_r(t) = A_0 \cos(\omega_0 t) + A_0 Q \sin(\omega_0 t) ,   (7)

where Q = \frac{m}{c}\omega_0 is the quality factor of the damped natural oscillations of the system.

4. Results

4.1 Fall rate compensation

When used in a Kibble balance in weighing mode, over the course of several 'mass-on, mass-off' measurements the natural fall rate of the piston of around 1 µm s-1 would result in a significant change in piston position.
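The free and driven responses of the accelerometer model above can be checked numerically. A sketch (simulation parameters are arbitrary illustrations, not measured values): one function recovers ω0 and c/m from a free damped oscillation via the peak spacing and logarithmic decrement, and another evaluates the elastic and inelastic coefficients and their limiting behaviour:

```python
import numpy as np

def damped_params(z, dt):
    """Estimate w0 and c/m from a free damped oscillation record z sampled
    at interval dt, using peak spacing and the logarithmic decrement."""
    peaks = [i for i in range(1, len(z) - 1) if z[i - 1] < z[i] > z[i + 1]]
    period = np.mean(np.diff(peaks)) * dt
    omega_d = 2 * np.pi / period                       # damped frequency
    delta = np.mean(np.log(z[peaks[:-1]] / z[peaks[1:]]))
    c_over_m = 2 * delta / period                      # envelope ~ exp(-(c/2m) t)
    omega0 = np.sqrt(omega_d**2 + (c_over_m / 2) ** 2)
    return omega0, c_over_m

def driven_coeffs(A0, w, w0, c_over_m):
    """Elastic and inelastic coefficients of z_r for z'(t) = A0 cos(w t)."""
    denom = (w0**2 - w**2) ** 2 + (c_over_m * w) ** 2
    return (A0 * w**2 * (w0**2 - w**2) / denom,        # elastic
            A0 * c_over_m * w**3 / denom)              # inelastic
```

At resonance the elastic coefficient vanishes and the inelastic one equals A0·Q, while at high driving frequency the elastic coefficient tends to -A0 and the mass stays still, reproducing the limiting cases above.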
However, for the lowest uncertainty, the weighing position should be kept constant between loadings. Here we consider the usefulness of a syringe pump for maintaining a steady piston position by providing a very low flow to compensate for the natural fall of the piston.

In Figure 4 we show a typical fall rate compensation test using a constant flow rate of 0.012 ccm (ccm = cm3 min-1). Initially, while the syringe volume is kept constant, the piston falls at an average rate of 1.004 µm s-1 over 5 minutes. The syringe plunger is then moved slowly, shifting the column of gas towards the piston and causing a slight rise of 0.015 µm s-1 over the ten minutes of applied flow. When the syringe plunger motion is stopped, the piston returns to falling, this time at 0.986 µm s-1 over 5 minutes. This fall compensation resulted in a rise in the piston position of 8.9 µm over ten minutes.

Over several flow rate compensation tests we observed some variation in the fall rate of the piston. Within a test, the fall rate varied before and after fall compensation by up to 0.06 µm s-1, and between tests the variation was at most 0.28 µm s-1. During fall compensation with the same nominal flow rate of 0.012 ccm at the syringe plunger, the overall change in piston position varied between falling by 11 µm over ten minutes when the initial fall rate was 1.13 µm s-1, and rising by 80 µm over the same time when the initial fall rate was 0.86 µm s-1. In the latter case, a subsequent test with a constant flow rate of 0.011 ccm resulted in a rise in the piston position of 2 µm after ten minutes.

The start and stop of the syringe plunger motion is a disturbance to the piston. However, the size of this disturbance is very small, and no resonant behaviour is observable. Also, we would expect very little disturbance from the syringe plunger when in motion, as these flow rates are in the pulsation-free regime of the syringe pump when using a 1 ml syringe.
Instead, we observe some random variation in piston position of up to 2 µm, which may be due to temperature fluctuations in the gas or external disturbances, as these variations are also seen when the piston is falling without flow compensation.

Figure 4. Fall rate compensation using a flow rate of 0.012 ccm for ten minutes (grey shaded region). (a) The syringe plunger motion is linear in the shaded region. (b) The piston position is maintained with the 0.012 ccm flow, and falls at its natural fall rate outside of the shaded region. (c) Residuals are from a linear fit in each region.

4.2 Calibration mode – constant velocity

In calibration mode in most existing Kibble balances, the coil is typically moved through the weighing position at an approximately constant velocity of between 1.3 mm s-1 and 3 mm s-1 [6]. With the experimental configuration in Figure 2, these velocities are achievable with flow rates between 15 ccm and 35 ccm. However, unlike in the case of leak compensation, the abrupt start and stop of the syringe plunger during this rapid motion is a significant disturbance which causes a visible damped oscillator response. Figure 5 shows the shape of the travel for an intermediate flow rate of 20 ccm. Immediately after the travel region, we see a damped harmonic oscillation; a similar oscillation is also superimposed onto the travel region. The weighing position will be at approximately the half-way point of the range of travel available. Although the piston moves through the weighing position quickly, the damped harmonic oscillation causes fluctuations in travel speed, with some very slow motion and even backward motion for some higher flow rates. The number of oscillations in the travel region depends on the time taken to complete the travel and on the resonant frequency of the system. Here the system volume is as small as practicable, and the resonant frequency is around 1.2 Hz.
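The flow-to-velocity figures quoted above follow from v = Q/A_eff with the 196 mm² effective area of Section 2; equivalently, cancelling a fall rate v_fall requires a flow Q = A_eff·v_fall. A small sketch of both conversions, with the unit handling made explicit:

```python
AREA_MM2 = 196.0   # effective area of the piston-cylinder unit (Section 2)

def piston_velocity_mm_s(flow_ccm, area_mm2=AREA_MM2):
    """Piston velocity produced by a volume flow rate: v = Q / A_eff."""
    return flow_ccm * 1000.0 / 60.0 / area_mm2   # ccm -> mm^3/s, then / mm^2

def compensation_flow_ccm(fall_um_s, area_mm2=AREA_MM2):
    """Flow needed to cancel a natural fall rate: Q = A_eff * v_fall."""
    return area_mm2 * fall_um_s * 1e-3 * 60.0 / 1000.0   # mm^3/s -> ccm

print(round(piston_velocity_mm_s(20.0), 2))   # 20 ccm -> ~1.7 mm/s
print(round(compensation_flow_ccm(1.0), 3))   # ~1 um/s fall -> ~0.012 ccm
```

These reproduce the numbers in the text: 15 ccm to 35 ccm maps to roughly 1.3 mm s-1 to 3 mm s-1, and the ~1 µm s-1 natural fall rate calls for the 0.012 ccm compensation flow used in Figure 4.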
Some improvement can be gained by reducing the resonant frequency of the system and timing the duration of the travel to match one period of the damped oscillation. However, the resulting instantaneous velocity is then smoothly varying rather than constant during the travel. Ideally, in constant velocity calibration mode the motion would have as close to zero acceleration as possible in the travel region. We now present two controlled start techniques to suppress the harmonic oscillations.

In the first technique, the flow is ramped up to the target flow and then ramped down when it is time to stop the motion. While the 'ramp down' step is not strictly necessary for our goal of constant velocity travel, the controlled deceleration reduces the damped oscillations after the travel. Minimising these oscillations maximises the range of available travel and minimises the settling time required before commencing the next traverse in the opposite direction. Figure 6 (a) illustrates the syringe plunger motion for a gentle ramp up to 20 ccm, taking 0.76 s and travelling ~0.13 ml per ramp. For a total volume change of 0.8 ml, the time spent at 20 ccm is 1.64 s, in which the syringe plunger travels 0.547 ml. Note that in this plot the syringe plunger is moving to refill the syringe. Figure 6 (b) shows the resulting motion of the piston, which is significantly more linear than without the accelerating and decelerating ramps, for the same maximum flow rate. A three-section piecewise linear fit to the piston motion gives an average travel speed of 1.72 mm s-1 in the middle section. Residuals to this fit are shown in Figure 6 (c). Some disturbance is evident during each of the ramps, but the motion during the main travel section is linear to within ~33 µm. We note that the size of the observed variations will be influenced by integration of the signal within the sampling interval of 20 ms.
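The ramp technique's volumes and times follow directly from the area under the trapezoidal flow profile (each linear ramp moves half the volume of an equal time at full flow). A sketch reproducing the figures quoted above:

```python
def trapezoid_profile(q_ccm, t_ramp_s, total_ml):
    """Ramp up / constant flow / ramp down profile: volumes and times from
    the area under the flow curve (ramp volume = 0.5 * Q * t_ramp)."""
    q_ml_s = q_ccm / 60.0
    ramp_ml = 0.5 * q_ml_s * t_ramp_s     # volume moved during each ramp
    flat_ml = total_ml - 2 * ramp_ml      # remaining volume at full flow
    return ramp_ml, flat_ml, flat_ml / q_ml_s

ramp_ml, flat_ml, flat_s = trapezoid_profile(20.0, 0.76, 0.8)
print(round(ramp_ml, 2), round(flat_ml, 3), round(flat_s, 2))
# ~0.13 ml per ramp, ~0.547 ml and ~1.64 s at 20 ccm, as in Figure 6
```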
The second controlled start technique is known as the crane operator's trick, which exploits the natural oscillation period of the system [10]. Implementing this trick here involves three steps of applied flow:
1. an initial step at half the target flow, for half a period of oscillation, to accelerate the piston;
2. a steady step at the target flow, to generate constant velocity travel; and
3. a final step at half the target flow, for half a period of oscillation, to decelerate the piston to stationary.

Figure 5. Shape of the piston travel for a 0.7 ml volume change at 20 ccm, which gives an effective velocity of 1.7 mm s-1 in the travel region (shaded grey).

Figure 6. Constant velocity travel using gentle ramps over 0.76 s to a maximum flow of 20 ccm, for a 0.8 ml total volume change. (a) Shape of the syringe plunger motion (encoder values); the plunger is moving in the grey shaded region. (b) Resulting position of the piston (orange markers) and 3-piece linear fitting (grey line). (c) Residual to the piecewise linear fitting in (b).

Figure 7. Constant velocity travel using the crane operator's trick with steps at 10 and 20 ccm for a 0.8 ml total volume change, with a period of 0.76 s. (a) Shape of the syringe plunger motion (encoder values); the plunger is moving in the grey shaded region. (b) Resulting position of the piston (orange markers) and 3-piece linear fitting (grey line). (c) Residual to the fit in (b).

Figure 7 (a) shows these three distinct steps in the syringe plunger motion for a target flow of 20 ccm and a total volume change of 0.8 ml. Each section at 10 ccm takes 0.38 s and travels only 0.063 ml, leaving 0.673 ml to travel at 20 ccm, over 2.02 s. The resulting motion of the piston is shown in Figure 7 (b), and residuals to a piecewise linear fit to the piston motion are shown in Figure 7 (c).
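The three steps of the crane operator's trick can be turned into a flow schedule by the same volume bookkeeping; a sketch that reproduces the 0.38 s / 0.063 ml / 2.02 s figures quoted above:

```python
def crane_trick_profile(q_ccm, period_s, total_ml):
    """Crane operator's trick: half flow for T/2 (accelerate), full flow
    (constant velocity travel), half flow for T/2 (decelerate), where T is
    the natural oscillation period of the system."""
    q_ml_s = q_ccm / 60.0
    half_ml = 0.5 * q_ml_s * (period_s / 2.0)  # volume in each half-flow step
    mid_ml = total_ml - 2 * half_ml            # volume left for the full-flow step
    return half_ml, mid_ml, mid_ml / q_ml_s

half_ml, mid_ml, mid_s = crane_trick_profile(20.0, 0.76, 0.8)
print(round(half_ml, 3), round(mid_ml, 3), round(mid_s, 2))
# ~0.063 ml per half-flow step, ~0.673 ml over ~2.02 s at 20 ccm
```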
The motion during the main travel section is linear to within ~29 µm, with an average travel speed of 1.73 mm s⁻¹. The disturbance due to starting and stopping the syringe plunger is slightly less than for the ramp technique, and both techniques give very small damped oscillations after stopping the syringe plunger motion. To provide sufficient data for the determination of the ratio of the induced voltage to the coil velocity at the weighing position, the coil is usually moved at a constant velocity over a distance of at least 20 mm [6]. However, in the pressure balance that will be used for the MSL Kibble balance, the range of travel is restricted to at most 13 mm. This shorter range will therefore increase the number of repeats required to achieve the desired accuracy.

4.3 Calibration mode – oscillating velocity

As an alternative to the constant velocity method, an oscillatory motion has been suggested for the Kibble balance calibration mode [6]. Sinusoidal oscillations with frequencies from 0.1 Hz to 5 Hz could be suitable, even with amplitudes as small as 1 mm. Oscillatory mode has been successfully implemented in the Ulusal Metroloji Enstitüsü (UME) Kibble balance by moving the magnet sinusoidally at a frequency of 0.5 Hz with a peak velocity of around 3 mm s⁻¹ [11]. Oscillatory mode has also been demonstrated in the 'PB1' and 'PB2' Planck-Balances, where the optimal oscillation is typically 4 Hz with amplitudes of 4.5 and 20 µm respectively [12], [13]. In the MSL Kibble balance, a sinusoidal oscillation would work with (rather than against) the harmonic oscillator character of the pressure balance. A suitable oscillation would have a frequency of around 1 Hz and an amplitude of ~1 mm [6]. Here we demonstrate driving such an oscillation via a sinusoidal displacement of the syringe plunger. The driving oscillation in syringe volume, of 0.02 ml amplitude, is shown in Figure 8 (a), along with the resulting piston oscillation in Figure 8 (b).
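The analysis that follows extracts a frequency and an amplitude from each sampled oscillation by fitting a single sinusoid. One minimal way to implement such a fit — a coarse FFT frequency estimate refined by linear least squares for amplitude and phase, applied here to synthetic data — is our own choice of method, not one specified in the text:

```python
import numpy as np

# Synthetic oscillation standing in for the measured piston motion:
# 1.1 mm amplitude at 0.9989 Hz (values reported in the text) plus noise.
rng = np.random.default_rng(0)
fs, dur = 50.0, 20.0
t = np.arange(0, dur, 1 / fs)
f_true, a_true = 0.9989, 1.1
x = a_true * np.sin(2 * np.pi * f_true * t) + 0.01 * rng.standard_normal(t.size)

# Coarse frequency from the FFT peak (skipping the DC bin), then amplitude
# and phase by linear least squares at that frequency.
spec = np.abs(np.fft.rfft(x))
f_est = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec[1:]) + 1]
A = np.column_stack([np.sin(2 * np.pi * f_est * t), np.cos(2 * np.pi * f_est * t)])
(cs, cc), *_ = np.linalg.lstsq(A, x, rcond=None)
amplitude = np.hypot(cs, cc)
print(round(f_est, 2), round(amplitude, 2))
```

The FFT step only resolves the frequency to the bin spacing (0.05 Hz here); resolving the sub-millihertz differences quoted below would require a longer record or a nonlinear refinement of the frequency itself.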
For each dataset, we fit a single sinusoid to establish the frequency and amplitude. The frequency of the best-fitting sinusoid was 0.9993 Hz for the syringe plunger motion and 0.9989 Hz for the piston motion. The resonant frequency of the system was adjusted to be as close to 1 Hz as practicable and was determined to be 0.9999 Hz from the damped harmonic oscillation after stopping the driving excitation. The amplitude of the piston oscillation was about 1.1 mm, with a peak velocity of around 7.6 mm s⁻¹. Assuming the induced voltage U = 1 V for v = 2 mm s⁻¹, we would expect this peak velocity to correspond to an induced voltage of almost 4 V when implemented in the MSL Kibble balance. From our model we would expect the steady-state piston motion to lag behind the syringe plunger motion with a phase difference of 90°. This phase difference is evident here, where a reduction in syringe volume causes an upward motion of the piston. Residuals to the sinusoidal fits are shown in Figure 8 (c), scaled by the respective amplitudes. We observe that there is periodic high-frequency noise in the syringe plunger motion which is mostly filtered out by the pressure balance. However, the relative magnitude of the residual is transferred through to the piston motion, and some non-sinusoidal periodic structure is evident. Our accelerometer model for this scenario predicts an amplification of the piston oscillation when approaching the resonant frequency. In Figure 9 we present the model amplification due to a sinusoidal driving displacement, along with the measured amplitudes from a range of driving frequencies. We used the amplitude of the lowest-frequency oscillation as the normalising amplitude value A₀. These data were collected using the smallest practical volume, giving a resonant frequency f₀ of the system of around 1.175 Hz, determined from the damped harmonic oscillation after stopping the driving excitation. This damped harmonic oscillation had a quality factor of Q = mω₀/c ≈ 13, and the best-fit model was obtained with a Q of 14.

Figure 8. Oscillatory motion using a sinusoidal driving displacement at a frequency of 1 Hz. (a) Position of the syringe plunger. (b) Resulting position of the piston. (c) Residual to the best-fitting sinusoids for (a) and (b), scaled by the respective amplitudes.

Figure 9. (Orange markers) Amplitude of the piston motion resulting from a sinusoidal driving excitation of 0.025 ml amplitude at various frequencies, shown relative to the resonant frequency and normalized to the amplitude at the lowest frequency. (Grey line) Amplitude predicted by our model for a sinusoidal driving displacement, using Q = mω₀/c = 14 and f₀ = 1.175 Hz.

5. Discussion

5.1 Fall rate compensation

In the MSL Kibble balance, fall rate compensation can supplement the use of a current feedback arrangement to control the coil position in weighing mode. Fall rate compensation will also be necessary for oscillatory mode if we use volume manipulation to generate the oscillation. From the results presented here, the use of a syringe pump to provide fall rate compensation is promising. The achieved 5 µm stability over 5 minutes would be adequate for our initial target accuracy of 1 part in 10⁷ for realised mass. To improve the obtained stability and the repeatability of the method, which are both limited by fluctuation in the piston fall rate, the fall rate should be measured immediately before each period of fall compensation to allow the ideal compensation flow rate to be determined. Alternatively, the flow could be finely adjusted during an initial stabilisation routine. We have examined possible causes, other than measurement error, of variations in the observed fall rate.
Variation in the temperature of the piston-cylinder unit will affect the natural fall rate of the piston, through thermal expansion affecting the gap between piston and cylinder, and through the temperature dependence of the viscosity of the gas in the gap. For an estimated temperature change of 0.5 K of the piston-cylinder unit between measurements, the contribution of these effects to the fall rate is calculated to be less than 2 nm s⁻¹. The fall rate is also expected to vary with the vertical position of the piston, due to departures of the piston and cylinder shapes from cylindricity. However, such a correlation is not evident in our data. A steady drift in the temperature of the tubing connecting the pressure balance will affect the observed fall rate. For a 0.1 K change in temperature over 5 minutes, this effect is calculated to be ~7 nm s⁻¹ for a tubing volume of 35 ml and is mainly due to the section of PTFE tubing. This effect is an order of magnitude smaller than the variation in fall rate observed within a test. Similarly, the variation between tests of ~280 nm s⁻¹ is also unlikely to be due to the above causes. Instead, the variations may indicate that the tubing volume has a small leak which is influenced by changing ambient conditions. Sources of the ~2 µm random variation in position should also be addressed. It is possible that this noise is due to ground vibration, or to wobble from the spinning of the pressure balance bell. A rotating-cylinder pressure balance is currently being developed, which would provide a direct comparison and ideally lower noise. This fall rate compensation technique is not only important for pressure balance 1, which carries the coil; it can also be used for pressure balance 2, which provides the reference pressure. Both pressure balances need to be kept at a stable pressure for the duration of weighing mode, which could take around 5 minutes per weighing.
If left without fall compensation, pressure balance 2 would require periodic height adjustment. The height could easily be adjusted with the syringe pump before each weighing, and/or fall rate compensation could be provided for this pressure balance throughout the duration of the weighing. We note that this method could be considered to introduce a controlled leak into the system between the pressure balance and the differential pressure transducer, which is not usually recommended [14]. In this situation, the pressure in the tubing is very close to ambient pressure and the tubing volume is as small as practicable, which will reduce the effect of the leak. Additionally, the 'leak' caused by the motion of the syringe pump does not change the pressure or the number of gas molecules in the system; instead, the column of gas is merely moved along the tubing. For these reasons, we expect that the measured force difference would not be affected by a constant infusion during a weighing mode measurement consisting of a sequence of mass-on and mass-off weighings.

5.2 Constant velocity travel

As expected, the oscillator response of the pressure balance makes travel at a constant instantaneous velocity difficult to attain by simply applying a constant flow rate. We have therefore presented two controlled-start techniques which both significantly reduce the oscillator response. With these techniques, accommodating the oscillator response took as little as ~0.3 mm at each end of the travel, out of the available travel of ~4 mm. Of the two techniques, the crane operator's trick is remarkably simple to implement, and it produced constant velocity travel over almost the full range of travel of the piston with <30 µm deviations from linearity. Similar results were obtained for periods from 0.75 s to 0.8 s, indicating that the exact timing of the steps is not critical.
This technique warrants further investigation in the MSL Kibble balance, to enable interferometric velocity measurement and to assess the stability of the generated voltage.

5.3 Oscillating mode

In addition, we have demonstrated that volume manipulation can be used effectively to drive a sinusoidal oscillation of a pressure balance. We see good agreement between the nature of the driven oscillation at the piston and the features predicted by our model, such as a phase difference of 90° and a significant amplification near resonance. We postulate that only one mode of oscillation is likely to be present, owing to the length of narrow tubing between the syringe pump and the piston. The ~14-fold amplification of the driving oscillation near resonance is a major advantage of working with, rather than against, the harmonic oscillator character of the pressure balance. This amplification significantly reduces the amplitude of the driving oscillation required to generate an oscillation of ~1 mm amplitude at the piston. Importantly, any noise in the driving oscillation is greatly reduced at frequencies above and below the resonant frequency. Care must be taken to reduce any noise in the driving oscillation at or near the resonant frequency, as this noise is also amplified. The sinusoidal oscillation produced by our syringe pump is created digitally by updating the generated flow 100 times per oscillation. This relatively coarse digitisation results in a slightly distorted sinusoidal waveform, and the method of updating only the flow allows a slow drift of the average position of the plunger. While some optimisation is possible with the syringe pump system, a much better sinusoidal oscillation would be generated by an analogue or AC-driven input. Such an input could be used to drive the oscillation of the syringe plunger or of an equivalent pressure-maintaining membrane or diaphragm.
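The amplification behaviour discussed above can be sketched with the standard gain of a displacement-driven damped harmonic oscillator. The closed form below is our assumption, chosen to be consistent with the behaviour stated in the text: unity gain well below resonance and roughly Q-fold gain at f₀.

```python
import numpy as np

# Assumed displacement-driven damped harmonic oscillator gain:
# |H(f)| = f0^2 / sqrt((f0^2 - f^2)^2 + (f0 * f / Q)^2),
# which -> 1 for f << f0 and -> Q at f = f0, matching the ~14-fold
# amplification near resonance described in the text.
def amplification(f, f0=1.175, q=14.0):
    return f0**2 / np.sqrt((f0**2 - f**2) ** 2 + (f0 * f / q) ** 2)

print(round(float(amplification(1.175)), 1))    # gain at resonance equals Q
```

The same expression also shows why off-resonance noise in the driving oscillation is suppressed: for f well above f₀ the gain falls below unity, while only components near f₀ are amplified.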
Alternatively, a large, slow sinusoidal oscillation could be used to provide predictable, repeated travel passing through the weighing position with approximately constant velocity. For example, an oscillation at 0.2 Hz with 2 mm amplitude would reach a peak velocity of 2.5 mm s⁻¹, with little down-time between repeat measurements.

6. Conclusions

We have shown that a syringe pump may be used as an automated volume regulator to controllably adjust the height of the piston in a pressure balance. This method can also be used to assist in maintaining a stable piston, and therefore coil, position in both the weighing and calibration modes of the MSL Kibble balance. We demonstrated constant velocity travel of the piston at 1.7 mm s⁻¹ using two controlled-start techniques to minimise unwanted oscillations. An oscillatory motion, working with the resonant behaviour of the pressure balance, also shows promise for the MSL Kibble balance calibration mode.

Acknowledgement

The authors wish to acknowledge useful discussions with and technical assistance from Joseph Borbely, Yin Hsien Fung, and Peter McDowall. The authors thank the reviewers for their insightful comments and suggestions for achieving constant velocity travel. This work was funded by the New Zealand government.

References

[1] I. A. Robinson, S. Schlamminger, The watt or Kibble balance: a technique for implementing the new SI definition of the unit of mass, Metrologia 53 (2016) A46–A74. doi: 10.1088/0026-1394/53/5/a46
[2] C. M. Sutton, M. T. Clarkson, Y. H. Fung, The MSL Kibble balance weighing mode, in: CPEM 2018 Conference on Precision Electromagnetic Measurements, IEEE, 2018. doi: 10.1109/cpem.2018.8500889
[3] C. M. Sutton, The accurate generation of small gauge pressures using twin pressure balances, Metrologia 23 (1987) 187–195. doi: 10.1088/0026-1394/23/4/003
[4] C. M. Sutton, M. P.
Fitzgerald, K. Carnegie, Improving the performance of the force comparator in a watt balance based on pressure balances, in: 2012 Conf. Precis. Electromagn. Meas., IEEE, 2012, pp. 468–469. doi: 10.1109/cpem.2012.6251006
[5] C. M. Sutton, M. T. Clarkson, A magnet system for the MSL watt balance, Metrologia 51 (2014) S101–S106. doi: 10.1088/0026-1394/51/2/s101
[6] C. M. Sutton, An oscillatory dynamic mode for a watt balance, Metrologia 46 (2009) 467–472. doi: 10.1088/0026-1394/46/5/010
[7] C. M. Sutton, An improved mechanism for spinning the floating element of a pressure balance, J. Phys. E 13 (1980) 825. doi: 10.1088/0022-3735/13/8/007
[8] C. M. Sutton, M. P. Fitzgerald, D. G. Jack, An initial investigation of the damped resonant behaviour of gas-operated pressure balances, Measurement 45 (2012) 2476–2478. doi: 10.1016/j.measurement.2011.10.045
[9] O. L. de Lange, J. Pierrus, Amplitude-dependent oscillations in gases, J. Nonlinear Math. Phys. 8 (2001) 79–81. doi: 10.2991/jnmp.2001.8.s.14
[10] S. Schlamminger, L. Chao, V. Lee, D. B. Newell, C. C. Speake, The crane operator's trick and other shenanigans with a pendulum, Am. J. Phys. 90 (2022) 169–176. doi: 10.1119/10.0006965
[11] H. Ahmedov, N. B. Aşkin, B. Korutlu, R. Orhan, Preliminary Planck constant measurements via UME oscillating magnet Kibble balance, Metrologia 55 (2018) 326–333. doi: 10.1088/1681-7575/aab23d
[12] C. Rothleitner, N. Rogge, S. Lin, S. Vasilyan, D. Knopf, F. Härtig, T. Fröhlich, Planck-Balance 1 (PB1) – a table-top Kibble balance for masses from 1 mg to 1 kg – current status, Acta IMEKO 9 (2020) 5, pp. 47–52. doi: 10.21014/acta_imeko.v9i5.937
[13] S. Vasilyan, N. Rogge, C. Rothleitner, S. Lin, I. Poroskun, D. Knopf, F. Härtig, T. Fröhlich, The progress in development of the Planck-Balance 2 (PB2): a tabletop Kibble balance for the mass calibration of E2 class weights, Technisches Messen 88 (2021) 731–756. doi: 10.1515/teme-2021-0101
[14] R. R. A. Samodro, I. M. Choi, S. Y. Woo, S. J.
Lee, A study on the pressure gradient effect due to a leak in a pressure calibration system, Metrologia 49 (2012) 315–320. doi: 10.1088/0026-1394/49/3/315

Estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models

ACTA IMEKO | ISSN: 2221-870X | September 2021 | Volume 10 | Number 3 | 100–107

Marcantonio Catelani 1, Lorenzo Ciani 1, Giulia Guidi 1, Gabriele Patrizi 1, Diego Galar 2
1 Department of Information Engineering, University of Florence, Via di S. Marta 3, 50139, Florence (Italy)
2 Luleå University of Technology, Luleå, Sweden

Section: Research paper

Keywords: reliability; diagnostic; railway engineering; failure rate; HVAC; useful life

Citation: Marcantonio Catelani, Lorenzo Ciani, Giulia Guidi, Gabriele Patrizi, Diego Galar, Estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models, Acta IMEKO, vol. 10, no.
3, article 10, September 2021, identifier: IMEKO-ACTA-10 (2021)-0310

Section editor: Lorenzo Ciani, University of Florence, Italy

Received January 29, 2021; in final form August 2, 2021; published September 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Giulia Guidi, e-mail: giulia.guidi@unifi.it

1. Introduction

All devices are constituted from materials that tend to degrade with time. The degradation of the materials continues until some critical device parameter can no longer meet the required specification for proper device functionality [1]-[8]. For this reason, as well as because of the growing complexity of equipment and the rapidly increasing costs incurred by loss of operation and by maintenance, interest in reliability is growing in many industrial fields. Generally, reliability can be assessed through different methods, such as reliability prediction, fault tree analysis, reliability block diagram, etc. (see for instance [9]-[12]). Fault tree analysis (FTA) [13], [14] is an analytical and deductive (top-down) method. It is an organized graphical representation of the conditions or other factors causing or contributing to the occurrence of a defined outcome, referred to as the "top event". Reliability block diagram (RBD) [15], in contrast, is a functional diagram of all the components making up the system that shows how component reliability contributes to the failure or success of the whole system. These techniques need input data to be performed, but sometimes data are not available and must be predicted. An accurate reliability prediction should be performed in the early stages of a development program to support the design process [16]-[21].
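The RBD evaluation of a series structure can be sketched as follows. This is a generic illustration under the constant-failure-rate assumption, not the paper's actual computation, and the three rates below are placeholders:

```python
import math

# Series RBD under constant failure rates: each block i has
# R_i(t) = exp(-lambda_i * t), and a series system works only if every
# block works, so R_sys(t) = product of the R_i(t), i.e. exp(-sum(lambda_i)*t).
def series_rbd_reliability(failure_rates_per_h, t_hours):
    return math.prod(math.exp(-lam * t_hours) for lam in failure_rates_per_h)

# Placeholder rates (failures/h) for three blocks, evaluated at 1000 h:
r = series_rbd_reliability([1.56e-5, 3.11e-6, 5.0e-6], 1000.0)
print(round(r, 3))
```

A parallel (redundant) structure would instead combine unreliabilities, R_sys = 1 − Π(1 − R_i), which is why the diagram's topology matters as much as the individual rates.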
Abstract: Heating, ventilation, and air conditioning (HVAC) is a widely used system used to guarantee an acceptable level of occupancy comfort, to maintain good indoor air quality, and to minimize system costs and energy requirements. If failure data from a company database are not available, then a reliability prediction based on failure rate models and handbook data must be carried out. Performing a reliability prediction provides an awareness of potential equipment degradation during the equipment life cycle. Otherwise, if field data regarding the component failures are available, then classical reliability assessment techniques such as fault tree analysis and reliability block diagram should be carried out. Reliability prediction of mechanical components is a challenging task that must be carefully assessed during the design of a system. For these reasons, this paper deals with the reliability assessment of an HVAC system using both failure rate models for mechanical components and field data. The reliability obtained using the field data is compared to that achieved using the failure rate models, in order to assess a model which includes all the mechanical parts. The study highlights how fundamental it is to analyze the reliability of a complex system by integrating both field data and mathematical models.

A reliability prediction of electronic components can be assessed following the guidelines of several handbooks, while the prediction of mechanical components is more challenging for the following reasons [16], [22]:
• Individual mechanical components such as valves and gearboxes often perform more than one function, and failure data for specific applications of nonstandard components are seldom available.
• Failure rates of mechanical components are not usually described by a constant failure rate distribution, because wear, fatigue and other stress-related failure mechanisms result in equipment degradation. Data gathering is complicated when the constant failure rate distribution cannot be assumed, and individual times to failure must be recorded in addition to total operating hours and total failures.
• Mechanical equipment reliability is more sensitive to loading, operating mode and utilization rate than electronic equipment reliability. Failure rate data based on operating time alone are usually inadequate for a reliability prediction of mechanical equipment.
• The definition of failure for mechanical equipment depends upon its application. Lack of such information in a failure rate data bank limits its usefulness.
The problems listed above demonstrate the need for reliability prediction models that do not rely solely on existing failure rate data banks [23], [24]. To address these needs, this paper introduces a reliability assessment procedure which integrates failure rate models and field data to optimize the reliability analysis of a railway heating, ventilation and air conditioning (HVAC) system. The paper uses both FTA and RBD techniques to estimate the system reliability, based on realistic failure rate models for mechanical components. The rest of the paper is organized as follows: Section 2 illustrates the aim of an HVAC system and presents the high-level taxonomy of the system under test; Section 3 presents the failure rate prediction of three mechanical components (compressor, heat exchanger and blower) using failure models; Section 4 shows the results of the reliability assessment carried out using FTA and RBD techniques; and finally Section 5 compares the results achieved with the different techniques.

2. HVAC for a high-speed train

Underground transport and rail systems are becoming more and more frequent, as they allow rapid transit times while transporting a large number of users [25]. Consequently, RAMS (reliability, availability, maintainability and safety) analysis has become a fundamental tool during the design of railway systems [25]-[27]. The networks of high-speed trains, and also standard rail lines, are more and more often transferred to underground tunnels in order to mitigate their environmental impact. Both applications need ventilation. In metros, the influx of a large number of people and the presence of moving trains generate a reduction of oxygen and an increase in heat and pollutants. Mechanical ventilation is then required to achieve the necessary air exchange and grant users of the underground train systems comfortable conditions. Ventilation systems have a second and even more important purpose: to guarantee safety in case of a fire emergency. Mechanical ventilation, both in tunnels and in the stations, is activated in order to create a safe and clean environment for escaping. In rail tunnels, ventilation is mainly dedicated to fire emergencies, where it is vital to keep smoke propagation under control and to create safe areas and a clear environment for the users. Furthermore, efficient temperature regulation is becoming a necessity to face overcrowded carriages [28]-[30]. HVAC is the best way of regulating temperature and air quality on crowded trains [31]. One of the most important guarantees that rail manufacturers should look for during the design of an air conditioning system is reliability under the actual operating conditions [28], [32]. During the design of an HVAC system it is necessary to obtain information about the HVAC equipment and its uses [33]. The taxonomy is a systematic classification of items into generic groups based on factors possibly common to several of the items (location, use, equipment subdivision, etc.).
Referring to Figure 1, levels 1 to 5 represent a high-level categorization that relates to industries and plant application regardless of the equipment units (see level 6) involved. This is because an equipment unit (e.g., an air conditioning unit) can be used in many different industries and plant configurations and, for analyzing the failure/reliability/maintainability of similar equipment, it is necessary to have information about the operating context. Taxonomic information on these levels (1 to 5) shall be included in the database for each equipment unit as "use/location data". Levels 6 to 9 are related to the equipment unit (inventory), with the subdivision into lower indenture levels corresponding to a parent-child relationship. The taxonomy of the system under test, from level 1 to level 5, is reported in Table 1. The levels from 6 to 9 are very structured and include the level of the components, divided also into the part sections.

3. Failure rate models

Predicting the life of a mechanical element is not easy; it involves mathematical equations to estimate the design life of mechanical components [16]. These reliability equations consider the design parameters, environmental extremes and operational stresses to predict the reliability parameters. The total failure rate of the component is the sum of the failure rates of its parts for the particular time period in question. The equations rely on a base failure rate derived from laboratory test data where the exact stress levels are known. More information about the failure rate data used in this work can be found in [19].

Figure 1. Taxonomy classification with taxonomic levels (source: ISO 14224:2016 [34]).

Table 1. Taxonomy of the system from level 1 to level 5.
Level 1 (industry): railway
Level 2 (business category): high speed
Level 3 (installation): S121
Level 4 (unit): front car
Level 5 (system): HVAC system
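Table 1's "use/location" record for the unit under test could be stored as a simple keyed structure, one entry per taxonomy level; the field layout below is purely illustrative:

```python
# Levels 1-5 of the ISO 14224 taxonomy for the system under test (Table 1),
# stored as {level: (category, value)}; the representation is our own sketch.
taxonomy = {
    1: ("industry", "railway"),
    2: ("business category", "high speed"),
    3: ("installation", "S121"),
    4: ("unit", "front car"),
    5: ("system", "HVAC system"),
}

for level in sorted(taxonomy):
    category, value = taxonomy[level]
    print(f"Level {level} ({category}): {value}")
```

Levels 6 to 9 (equipment unit down to parts) would hang off each such record in a parent-child hierarchy, as described above.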
The most critical components of a heating, ventilation and air conditioning (HVAC) system are the compressor, the heat exchanger and the blower [25], [35]. In order to estimate the failure rates of these items, the relative failure models are analyzed in the following sections.

3.1. Compressor model

A compressor system is made up of one or more stages. The compressor compresses the gas, increasing its temperature and pressure [16], [36]. The total compressor may comprise elements, or groups of elements, in series to form a multistage compressor, based on the change in temperature and pressure across each stage. Every compressor to be analyzed is characterized by a unique design and comprises many different components. According to [16] and to the compressor datasheet, the designed HVAC compressor is a reciprocating-type compressor. The following equation has been obtained in order to estimate the failure rate of the actual compressor used in the considered HVAC design:

λ_C = (λ_FD · C_SF) + λ_CA + λ_BE + λ_VA + λ_SE + λ_SH , (1)

where
• λ_C is the total failure rate of the compressor
• λ_FD is the failure rate of the fluid driver
• C_SF is the compressor service multiplying factor
• λ_CA is the failure rate of the compressor casing
• λ_BE is the total failure rate of the compressor shaft bearings
• λ_VA is the total failure rate of the control valve assemblies
• λ_SE is the total failure rate of the compressor seals
• λ_SH is the failure rate of the compressor shaft.

Different compressor configurations, such as piston, rotary screw and centrifugal, have different parts within the total compressor, and it is important to obtain a parts list for the compressor prior to estimating its reliability. The failure rate for each part comprising the compressor must be determined before the failure rate of the entire compressor assembly, λ_C, can be determined. Failure rates for each part will depend on the operational and environmental factors expected during compressor operation.
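The part-count sum of equation (1) can be sketched as follows. Only the structure is taken from the text; the per-part rates below are placeholders chosen to reproduce the total reported later, not the data of [19]:

```python
# Equation (1): compressor failure rate as the fluid-driver rate scaled by the
# service factor plus the casing, bearing, valve-assembly, seal and shaft rates.
def compressor_failure_rate(lam_fd, c_sf, lam_ca, lam_be, lam_va, lam_se, lam_sh):
    return lam_fd * c_sf + lam_ca + lam_be + lam_va + lam_se + lam_sh

# Hypothetical per-part rates (failures/h) summing to the reported 1.56e-5:
lam_c = compressor_failure_rate(2.0e-6, 1.5, 1.0e-6, 4.0e-6, 5.0e-6, 1.6e-6, 1.0e-6)
print(f"{lam_c:.2e}")
```

In practice λ_BE and λ_VA are themselves composites, built from equations (2) and (3) with the multiplying factors listed there before being substituted into this sum.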
The total failure rate of the compressor shaft bearings is

λ_BE = λ_BE,B · C_R · C_V · C_CW · C_T · C_SF · C_C , (2)

where
• λ_BE is the total failure rate of the bearing
• λ_BE,B is the base failure rate
• C_R is the life adjustment factor for reliability
• C_V is the multiplying factor for the lubricant
• C_CW is the multiplying factor for the water contaminant level
• C_T is the multiplying factor for the operating temperature
• C_SF is the multiplying factor for the operating service conditions
• C_C is the multiplying factor for the lubrication contamination level.

The total failure rate of the control valve assemblies is given by

λ_VA = λ_PO + λ_SE + λ_SP + λ_SO + λ_HO , (3)

where
• λ_VA is the total failure rate of the valve assemblies
• λ_PO is the failure rate of the poppet assembly
• λ_SE is the failure rate of the seals
• λ_SP is the failure rate of the spring(s)
• λ_SO is the failure rate of the solenoid
• λ_HO is the failure rate of the valve housing.

Consequently, using the failure data illustrated in [19], it is possible to solve equations (2)-(3). The compressor failure rate can then be estimated by substituting these results into equation (1):

λ_C = 1.56 × 10⁻⁵ failure/h . (4)

Usually, the failure rates of components implemented in railway applications are expressed in failure/km or, for the sake of simplicity, in FPMK (failures per million kilometers). Moreover, the duty cycle of the compressor must be taken into account in order to obtain a more accurate evaluation. Consequently, considering an approximate annual distance for a high-speed train of half a million kilometers and a duty cycle of 30 %, the failure rate of the compressor becomes

λ_compressor = 8.22 × 10⁻² FPMK . (5)

The failure rate achieved above must be compared with the failure rate provided by the manufacturer of the component (Merak), which is obtained by integrating field data and internal company tests:

λ_compressor,Merak = 7.79 × 10⁻² FPMK .
(6)

Comparing equations (5) and (6), it is possible to note that the failure rate obtained using the compressor model is slightly higher than the one provided by the compressor manufacturer. The several variables considered by the failure model in equation (1) produce the different results, since the Merak value is based mainly on field data. Figure 2 shows the reliability curves relative to the failure rates of equations (5) and (6). The blue line represents the reliability calculated with the manufacturer (Merak) failure rate, while the red one is the reliability calculated using the failure model of the compressor. The curve obtained using the model decreases faster because its failure rate is higher than the Merak failure rate. Nevertheless, the difference between the two curves is limited: at the beginning the curves are approximately equal, then they decrease with different exponential decay rates.

Figure 2. Reliability curves of the compressor assembly calculated through Merak data and the compressor model given by equation (1).

3.2. Heat exchanger model

Heat exchangers are an essential part of any kind of HVAC system nowadays. The main function of a heat recovery system is to increase energy efficiency and reduce operating costs by transferring heat between two gases or fluids, thus reducing energy consumption. In heat exchangers, as the name suggests, there is a transfer of energy from one fluid to another. The two fluids are physically separated and there is no direct contact between them. There are different types of heat exchangers, such as shell and tube, U-tube, shell and coil, helical, plate, etc. The transfer of heat can be between steam and water, water and steam, refrigerant and water, refrigerant and air, or water and water. The HVAC system's compressor generates heat by compressing refrigerant.
This heat can be captured and used for heating domestic water. For this purpose, a heat exchanger is placed between the compressor and the condenser. The water to be heated is circulated through this heat exchanger with the help of a pump whenever the HVAC system is on. The heat exchanger included in the system under test is composed of a tube and an expansion valve. The failure rate of a fluid conductor is extremely sensitive to the operating environment of the system in which it is installed, more than to the design of the pipe itself. Each application must be evaluated individually because of the many installation, usage and maintenance variables that affect the failure rate. The failure of a piping assembly depends primarily on the connection joints, and it can be estimated with the following equation [16]:

λp = λp,b · Ce = 2.2 · 10⁻⁶ failure/h , (7)

where
• λp is the failure rate of the pipe assembly
• λp,b is the base failure rate of the pipe assembly, which is 1.57 · 10⁻⁶ failure/h
• Ce is the environmental factor, equal to 1.4 in the case of a railway application.

For the expansion valve, the failure rate is provided by [16] and it is λva = 4.5 · 10⁻⁶ failure/h. Therefore, the whole failure rate of the heat exchanger is given by the sum of the failure rate of the pipe and the failure rate of the valve, each one weighted by its own duty cycle. In particular, the duty cycle of the pipe is 80 %, while the duty cycle of the valve is 30 %. Consequently, the failure rate of the heat exchanger is given by:

λheat exchanger = 3.11 · 10⁻⁶ failure/h . (8)

As for the compressor, the failure rate of the heat exchanger must also be converted from failure/h into FPMK. The heat exchanger failure rate in the case of a railway application is the following:

λheat exchanger = 5.4 · 10⁻² FPMK . (9)

Also in this case, the failure rate based on field data has been provided by the component manufacturer Merak, and it is equal to:

λheat exchanger,Merak = 4.124 · 10⁻² FPMK .
(10)

Figure 3 shows the two different reliability curves: the blue one is related to the Merak data, while the red one is related to the model results of equation (9). The model curve is a pessimistic estimate for this component too, but the difference between the two reliability trends is extremely low. This is mainly due to the fact that, for a simpler element like the heat exchanger, the manufacturer (Merak) data and the model lead to a similar result.

3.3. Blower model

One of the most common downfalls of installed HVAC systems is their inability to distribute the correct amount of air to where it is needed most. When systems are restrictive, or blowers are not powerful enough, the air simply does not make it to where it needs to go. A blower is composed of:
• an AC motor;
• two bearings;
• a fan.

The failure rate of a motor is affected by factors such as insulation deterioration, wear of sliding parts, bearing deterioration, torque, load size and type, overhung loads, thrust loads and rotational speed. The failure rate model developed here is based on a fractional or integral horsepower AC-type motor. The reliability of an electric motor depends on the reliability of its parts, which may include bearings, electrical windings, armature/shaft, housing, gears and brushes. Failure mechanisms resulting in part degradation and the failure rate distribution (as a function of time) are considered to be independent in each failure rate model.
The total motor system failure rate is the sum of the failure rates of the parts of the motor:

λmotor = λm,b · Csf + λwi + λst + λas + λbe + λgr + λc , (11)

where
• λmotor is the total failure rate of the motor system
• λm,b is the base failure rate of the motor
• Csf is the motor load service factor
• λwi is the failure rate of the electric motor windings
• λst is the failure rate of the stator housing
• λas is the failure rate of the armature shaft
• λbe is the failure rate of the bearings, evaluated using equation (2) with the suitable factors
• λgr is the failure rate of the gears
• λc is the failure rate of the capacitor.

The bearing failure rate can be estimated following the guidelines in Section 3.1. The fans are modelled according to MIL-HDBK-217F [37] by:

λfan = t² / αB³ + 1 / αW failure/h , (12)

where
• t is the motor operating time period
• αB is the Weibull characteristic life of the motor bearings
• αW is the Weibull characteristic life of the motor windings.

Finally, the whole blower failure rate is given by the sum of the failure rates of its components:

λblower = λmotor + 2 · λbearing + λfan = 1.328 · 10⁻⁵ failure/h . (13)

Then, considering a duty cycle of 100 % and the required conversion from failure/h into FPMK, the blower failure rate becomes:

λblower = 2.33 · 10⁻¹ FPMK . (14)

The failure rate obtained from field data has been provided by the component manufacturer Merak, and it is the following:

λblower,Merak = 7.97 · 10⁻² FPMK . (15)

Figure 4 shows the two reliability curves: the blue one is related to the Merak data and the red one is related to the model data.

Figure 3. Reliability curves of the heat exchanger assembly calculated through Merak data and the heat exchanger model according to equation (9).
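The fan model of equation (12) and the failure/h-to-FPMK conversion used throughout this section can be sketched as follows. The Weibull characteristic lives are hypothetical placeholders (not values from the paper), while the conversion reuses the figures stated in the text, assuming 8760 calendar hours per year scaled by the duty cycle and an annual distance of 0.5·10⁶ km:

```python
HOURS_PER_YEAR = 8760  # assumption: calendar hours, scaled by the duty cycle

def fan_failure_rate(t, alpha_b, alpha_w):
    """MIL-HDBK-217F fan model of equation (12):
    lambda_fan = t^2 / alpha_B^3 + 1 / alpha_W  [failures/h],
    with t the operating time and alpha_B, alpha_W the Weibull
    characteristic lives of bearings and windings (hours)."""
    return t ** 2 / alpha_b ** 3 + 1.0 / alpha_w

def to_fpmk(lambda_h, duty_cycle, annual_km):
    """Convert a failure rate in failures/h into failures per million km."""
    failures_per_year = lambda_h * HOURS_PER_YEAR * duty_cycle
    return failures_per_year / (annual_km / 1e6)

# Hypothetical Weibull lives, for illustration only:
lam_fan = fan_failure_rate(t=10_000, alpha_b=50_000, alpha_w=150_000)

# Blower conversion with the stated duty cycle of 100 %:
lam_blower_fpmk = to_fpmk(1.328e-5, duty_cycle=1.0, annual_km=0.5e6)
# close to the 2.33e-1 FPMK of equation (14), up to rounding;
# to_fpmk(1.56e-5, 0.30, 0.5e6) likewise reproduces equation (5)
```

The same helper applies to the compressor and heat exchanger conversions of Sections 3.1 and 3.2.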
As for the other components, the reliability calculated through the model shows a pessimistic trend with respect to the reliability calculated with the field data provided by Merak. This time, the difference between field data and model data is quite remarkable. This could be due to the harsh operating conditions considered by the failure rate model in [16].

4. Reliability analysis

When the data (coming from tests or from the manufacturer) are available, techniques such as FTA or RBD can be used to estimate the useful life of the system.

4.1. Fault tree analysis

Fault tree diagrams consist of gates and events connected with lines. The AND and OR gates are the two most commonly used gates in a fault tree. To illustrate the use of these gates, consider two events (called "input events") that can lead to another event (called the "output event") [14], [35]. If the occurrence of either input event causes the output event to occur, then these input events are connected using an OR gate. Alternatively, if both input events must occur in order for the output event to occur, then they are connected by an AND gate. Fault tree analysis gates can also be combined to create more complex representations. In the case of the HVAC system under analysis, the top event is "HVAC failure" and it is caused by four different events, as illustrated in Figure 5:
• "possible fire", when some events could involve a risk of fire in the railway cabin;
• "loss of emergency ventilation", when the emergency ventilation does not work;
• "loss of functions caused by a single event", when a single event causes a direct loss of all the cooling, heating and ventilation functions;
• "indirect loss of cooling, heating and ventilation", when some events independently cause a loss of the cooling, heating and ventilation functions.
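For independent input events, the AND/OR gate rules described above reduce to simple probability algebra. A minimal sketch, where the four input-event probabilities are illustrative placeholders and not values from the paper:

```python
from math import prod

def or_gate(p_inputs):
    """OR gate: the output event occurs if at least one input event occurs."""
    return 1.0 - prod(1.0 - p for p in p_inputs)

def and_gate(p_inputs):
    """AND gate: the output event occurs only if all input events occur."""
    return prod(p_inputs)

# Illustrative failure probabilities for the four input events of Figure 5:
p_top = or_gate([0.01, 0.02, 0.005, 0.03])
print(f"P(top event) = {p_top:.4f}")
```

Gates can be nested by feeding the output of one gate as an input probability of another, which is how the larger FTA structure is evaluated.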
The top event "HVAC failure" is linked to the above-described input events through an OR gate, which means that if at least one of the four input events happens, the whole system fails. Each of the input events in Figure 5 is in turn caused by an extremely complex combination of several events. The complete FTA diagram is very large and structured, and it is not possible to show it entirely; so, for the sake of simplicity, Figure 5 shows only an extract of the FTA. The reliability trend for the FTA configuration is shown in Figure 6. The curve is a decreasing exponential: it starts from unitary reliability and tends to zero. The analysis is simulated from 0 km up to 6·10⁶ km. Assuming an annual forecast distance travelled of about 487·10³ km, the simulation covers over 12 years. At a distance of 0.5·10⁶ km (approximately 1 year) the reliability is around 80 %, while after 1·10⁶ km (approximately 2 years) it decreases to approximately 60 %, and it then tends to zero at 5·10⁶ km (approximately 10 years). These results are justified by two reasons:
• the mechanical nature of the whole system, which contributes to a fast decrease of the reliability;
• the OR gate that leads to the top event, which represents the worst-case scenario among the several ones considered during the design.

4.2. Reliability block diagram

An overall system reliability prediction can be made by looking at the reliabilities of the components that make up the whole system or product. In order to construct a reliability block diagram, the reliability-wise configuration of the components must be determined. Consequently, the analysis method used for computing the reliability of a system will also depend on the reliability-wise configuration of the components/subsystems.

Figure 4. Reliability curves of the blower assembly calculated through Merak data and the blower model as in equation (14).

Figure 5. Extract of the FTA diagram for the HVAC system under analysis.
That configuration can be as simple as units arranged in a pure series or parallel configuration. There can also be systems of combined series/parallel configurations, or complex systems that cannot be decomposed into groups of series and parallel configurations. The HVAC system under analysis can be described using a series of three main blocks (see Figure 7):
• cooling system;
• heating system;
• ventilation system.

Therefore, assuming the exponential distribution for all the items [18], [38], [39], the reliability equation of the whole system is:

Rsys(t) = exp[−(λcooling + λheating + λventilation) · t] . (16)

Figure 8 shows a comparison between the cooling, heating and ventilation reliability curves. The figure also shows the whole-system reliability trend calculated with equation (16), illustrated using a dashed black line. The red line represents the heating system, the blue line the ventilation system and the green line the cooling system. The worst system, in reliability terms, is the cooling system, because it contains many series elements and most of them are mechanical items. The three systems are connected in a series configuration, where the component with the lowest reliability has the biggest effect on the system reliability. As a result, the reliability of a series system is always less than the reliability of the least reliable component. This is why the black line, representing the whole-system reliability, is lower than the cooling curve (the least reliable one).

4.3. Comparison between FTA and RBD results

Table 2 shows the comparison of the reliability trends between the two proposed methods: reliability block diagram and fault tree analysis. The two curves are very similar, but the RBD reliability is always higher than the FTA result (at every distance). The differences could be caused by:
• the different algorithms used by the software for the calculation of the reliability;
• the FTA results being more complete, since they consider all the possible paths that lead to the top event and thus also the relationships between failures.

The first column of Table 2 reports the distance travelled by the train, the second the corresponding time, and the third and fourth the reliability values of the FTA and RBD, respectively. The last column reports the absolute percentage difference of the two previous columns. All the values relative to the RBD are higher than the FTA ones, but their difference is at most about 6 %. Figure 9 shows the trend of the difference between the two curves; its maximum value is 6.5 %, so the difference is very low. At the beginning the difference is not remarkable (below 1,000,000 km it is lower than 4 %); it then increases, with a peak between 1,500,000 km and 2,500,000 km where the difference is 6 %. After that, it decreases slowly and reaches the value of 2 % at 6,000,000 km. Therefore, the two methods provide comparable results, and both outcomes are valid.

5. Comparison between field data and model data

A comparison between the failure estimations of the previous paragraphs and the failure data provided by the manufacturer of the HVAC has been carried out in order to investigate how the model-based failure rates affect the reliability trend of the whole system. The model-based failure rates of the compressor, blower and heat exchanger are used to calculate the whole HVAC reliability, together with the failure rate estimations of the other components which make up the system.

Figure 6. Reliability trend of the FTA.

Figure 7. Reliability block diagram of the HVAC system.

Figure 8. Reliability trends of the cooling, heating and ventilation systems (continuous lines) compared with the HVAC system reliability (dashed line).

Table 2. Reliability data of the OR-type FTA and the RBD.
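The series-system reliability of equation (16) can be sketched as follows; the per-kilometre failure rates are illustrative placeholders, not the values used in the paper:

```python
import math

def series_reliability(lambdas, x):
    """Equation (16): R_sys(x) = exp(-(sum of failure rates) * x)
    for a series system of exponentially distributed items
    (x in km, failure rates in failures/km)."""
    return math.exp(-sum(lambdas) * x)

# Illustrative failure rates for cooling, heating and ventilation:
lams = [4e-7, 1e-7, 1e-7]
r_sys = series_reliability(lams, 1e6)

# The series reliability is never higher than that of the worst block:
r_worst = math.exp(-max(lams) * 1e6)
assert r_sys <= r_worst
```

This is the property discussed in Section 4.2: the whole-system curve always lies below the curve of the least reliable block.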
Distance (km)   Time       R_FTA   R_RBD   Difference
0.5 · 10⁶       1 year     0.78    0.80    3 %
1 · 10⁶         2 years    0.60    0.64    4 %
2 · 10⁶         4 years    0.34    0.40    6 %
3 · 10⁶         6 years    0.18    0.24    6 %
4 · 10⁶         8 years    0.10    0.14    4 %
5 · 10⁶         10 years   0.05    0.08    3 %
6 · 10⁶         12 years   0.03    0.05    2 %

Figure 10 shows three reliability curves: the blue trend is related to the FTA reliability, the red one is calculated with the RBD analysis, while the green one is related to the failure rates of the components calculated using the failure models. It can be noted that the model-based failure rates contribute to reducing the reliability and have an important effect on the whole-system reliability. The failure rate models provide pessimistic reliability results for the three components analysed above. Consequently, their reliability curves affect the whole-system reliability, producing a trend lower than the ones calculated with the manufacturer data (for both the FTA and RBD techniques).

6. Conclusion

The paper deals with a heating, ventilation and air conditioning system mounted on a high-speed train. The first part of the paper illustrates the taxonomy of the system under study. The architecture of an HVAC system includes several critical components, such as a fan (blower), a heat exchanger and a compressor. A detailed study of the failure rates of the most critical HVAC components is presented in this paper. The compressor, heat exchanger and blower show a model-based reliability lower than the reliability obtained using the field data provided by Merak, the manufacturer of the HVAC. Then, the reliability of the complete HVAC system has been estimated using two well-known techniques: fault tree analysis and reliability block diagram. The FTA and RBD methods take the field data provided by Merak as input to evaluate the system reliability over the distance travelled by the train.
The final analysis shows how the model failure rates affect the whole HVAC reliability, comparing the results obtained using FTA and RBD with those obtained using the failure rate models. The model-based failure rate provides a pessimistic result because it considers every possible failure mode and failure mechanism of each subitem that makes up the component. Despite this, it may not be fully realistic, since it does not properly consider the real operating conditions of the system under test. On the contrary, the reliability evaluated using the field data takes into account the real context of the HVAC, but some of the failure mechanisms might not occur during the observed time interval. For these reasons, it is fundamental to analyse the reliability of such a complex system by integrating both techniques.

References

[1] G. D'Emilia, A. Gaspari, E. Hohwieler, A. Laghmouchi, E. Uhlmann, Improvement of defect detectability in machine tools using sensor-based condition monitoring applications, Procedia CIRP, vol. 67, 2018, pp. 325-331. doi: 10.1016/j.procir.2017.12.221
[2] D. Capriglione, M. Carratù, A. Pietrosanto, P. Sommella, Online fault detection of rear stroke suspension sensor in motorcycle, IEEE Trans. Instrum. Meas., vol. 68, no. 5, May 2019, pp. 1362-1372. doi: 10.1109/tim.2019.2905945
[3] A. Paggi, G. L. Mariotti, R. Paggi, F. Leccese, General reliability assessment via the physics-based, in 2020 IEEE 7th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Jun. 2020, pp. 510-515. doi: 10.1109/metroaerospace48742.2020.9160087
[4] G. D'Emilia, A. Gaspari, E. Natale, Measurements for smart manufacturing in an Industry 4.0 scenario: a case-study on a mechatronic system, in 2018 Workshop on Metrology for Industry 4.0 and IoT, Apr. 2018, pp. 1-5. doi: 10.1109/metroi4.2018.8428341
[5] D. Capriglione, M. Carratù, A. Pietrosanto, P. Sommella, NARX ANN-based instrument fault detection in motorcycle, Measurement, vol. 117, Mar. 2018, pp. 304-311.
doi: 10.1016/j.measurement.2017.12.026
[6] U. Leturiondo, O. Salgado, D. Galar, Estimation of the reliability of rolling element bearings using a synthetic failure rate, 2016, pp. 99-112.
[7] A. Reatti, F. Corti, L. Pugi, Wireless power transfer for static railway applications, in 2018 IEEE International Conference on Environment and Electrical Engineering and 2018 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), Jun. 2018, pp. 1-6. doi: 10.1109/eeeic.2018.8493757
[8] S. Giarnetti, E. De Francesco, R. De Francesco, F. Nanni, M. Cagnetti, F. Leccese, E. Petritoli, G. Schirripa Spagnolo, A new approach to define reproducibility of additive layers manufactured components, in 2020 IEEE 7th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Jun. 2020, pp. 529-533. doi: 10.1109/metroaerospace48742.2020.9160076
[9] E. Petritoli, F. Leccese, M. Botticelli, S. Pizzuti, F. Pieroni, A RAMS analysis for a precision scale-up configuration of the 'Smart Street' pilot site: an Industry 4.0 case study, Acta IMEKO 8 (2019) 2, pp. 3-11. doi: 10.21014/acta_imeko.v8i2.614
[10] M. Khalil, C. Laurano, G. Leone, M. Zanoni, Outage severity analysis and RAM evaluation of Italian overhead transmission lines from a regional perspective, Acta IMEKO 5 (2016) 4, pp. 73-79. doi: 10.21014/acta_imeko.v5i4.424
[11] P. Liu, X. Cheng, Y. Qin, Y. Zhang, Z. Xing, Reliability analysis of metro door system based on fuzzy reasoning Petri net, in Lecture Notes in Electrical Engineering, vol. 288 LNEE, vol. 2, 2014, pp. 283-291.

Figure 9. Absolute percentage difference between the two reliability trends obtained using the RBD and FTA methods.

Figure 10. Reliability curves of the HVAC assembly calculated through Merak data (both FTA and RBD techniques) and model data.

[12] L. Cristaldi, M. Khalil, M. Faifer, Markov process reliability model for photovoltaic module failures, Acta IMEKO 6 (2017) 4, pp.
121-130. doi: 10.21014/acta_imeko.v6i4.428
[13] IEC 61025, Fault tree analysis (FTA). International Electrotechnical Commission, 2007.
[14] L. Ciani, G. Guidi, D. Galar, Reliability evaluation of an HVAC ventilation system with FTA and RBD analysis, 2020.
[15] IEC 61078, Reliability block diagram. International Electrotechnical Commission, 2016.
[16] NSWC, Handbook of reliability prediction procedures for mechanical equipment, May 2011.
[17] M. Catelani, L. Ciani, A. Bartolini, G. Guidi, G. Patrizi, Standby redundancy for reliability improvement of wireless sensor network, in 2019 IEEE 5th International Forum on Research and Technology for Society and Industry (RTSI), Sep. 2019, pp. 364-369. doi: 10.1109/rtsi.2019.8895533
[18] M. Rausand, A. Høyland, System reliability theory, second edition. John Wiley & Sons, Inc., 2004.
[19] M. Catelani, L. Ciani, G. Guidi, D. Galar, A practical solution for HVAC life estimation using failure models, 2020.
[20] A. Paggi, G. L. Mariotti, R. Paggi, A. Calogero, F. Leccese, Prediction by means hazard rate occurrence is a deeply wrong approach, in 2017 IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace), June 2017, pp. 276-281. doi: 10.1109/metroaerospace.2017.7999580
[21] M. Catelani, L. Ciani, M. Venzi, Component reliability importance assessment on complex systems using credible improvement potential, Microelectron. Reliab., vol. 64, Sep. 2016, pp. 113-119. doi: 10.1016/j.microrel.2016.07.055
[22] B. S.
Dhillon, Human reliability and error in transportation systems. Springer-Verlag, 2007.
[23] L. Ciani, G. Guidi, Application and analysis of methods for the evaluation of failure rate distribution parameters for avionics components, Measurement, vol. 139, June 2019, pp. 258-269. doi: 10.1016/j.measurement.2019.02.082
[24] J. G. McLeish, Enhancing MIL-HDBK-217 reliability predictions with physics of failure methods, in 2010 Proceedings of the Annual Reliability and Maintainability Symposium (RAMS), Jan. 2010, pp. 1-6. doi: 10.1109/rams.2010.5448044
[25] M. Catelani, L. Ciani, G. Guidi, G. Patrizi, Maintainability improvement using allocation methods for railway systems, Acta IMEKO 9 (2020) 1, pp. 10-17. doi: 10.21014/acta_imeko.v9i1.733
[26] A. Massaro, E. Cannella, G. Dipierro, A. Galiano, G. D'Andrea, G. Malito, Maintenance and testing protocols in the railway industry, Acta IMEKO 9 (2020) 4, pp. 4-12. doi: 10.21014/acta_imeko.v9i4.718
[27] T. Addabbo, A. Fort, C. Della Giovampaola, M. Mugnaini, A. Toccafondi, V. Vignoli, On the safety design of radar based railway level crossing surveillance systems, Acta IMEKO 5 (2016) 4, pp. 64-72. doi: 10.21014/acta_imeko.v5i4.419
[28] A. Vedavarz, S. Kumar, M. I. Hussain, HVAC handbook of heating, ventilation, and air conditioning for design & implementation, fourth edition. Industrial Press Inc., 2013.
[29] S. C. Sugarman, HVAC fundamentals, second edition. CRC Press: Taylor & Francis Group, 2007.
[30] L. Tanghong, X. Gang, Test and improvement of ventilation cooling system for high-speed train, in 2010 International Conference on Optoelectronics and Image Processing, vol. 2, Nov. 2010, pp. 493-497. doi: 10.1109/icoip.2010.55
[31] C. Luger, R. Rieberer, Multi-objective design optimization of a rail HVAC CO2 cycle, Int. J. Refrig., vol. 92, 2018, pp. 133-142. doi: 10.1016/j.ijrefrig.2018.05.033
[32] F. Porges, HVAC engineer handbook, eleventh edition. Elsevier Science & Technology Books, 2001.
[33] L. Marjanovic-Halburd, I. Korolija, V. I.
Hanby, Heating ventilating and air-conditioning (HVAC) equipment taxonomy, in IIR 2008 HVAC Energy Efficiency Best Practice Conference, 2008.
[34] International Organization for Standardization, ISO 14224, Petroleum, petrochemical and natural gas industries — Collection and exchange of reliability and maintenance data for equipment, 2016.
[35] L. Ciani, G. Guidi, G. Patrizi, A critical comparison of alternative risk priority numbers in failure modes, effects, and criticality analysis, IEEE Access, vol. 7, 2019, pp. 92398-92409. doi: 10.1109/access.2019.2928120
[36] I. Values, Compressor selection: semi-hermetic reciprocating compressors technical data (4G-30.2Y), dimensions and connections, pp. 4-7, 2015.
[37] MIL-HDBK-217F, Military handbook: reliability prediction of electronic equipment. US Department of Defense, Washington DC, 1991.
[38] A. Birolini, Reliability engineering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2017. doi: 10.1007/978-3-662-54209-5
[39] M. Lazzaroni, L. Cristaldi, L. Peretto, P. Rinaldi, M. Catelani, Reliability analysis in the design phase, in: Reliability Engineering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
doi: 10.1007/978-3-642-20983-3_3

Power quality metrics for DC grids with pulsed power loads

Acta IMEKO, ISSN: 2221-870X, June 2021, volume 10, number 2, pp. 153-161

acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 153

Andrea Mariscotti1
1 DITEN, University of Genova, Via Opera Pia 11A, 16145 Genova, Italy

Section: Research paper

Keywords: DC grid; power quality; pulsed power load

Citation: Andrea Mariscotti, Power quality metrics for DC grids with pulsed power loads, Acta IMEKO, vol. 10, no. 2, article 22, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-22

Section Editor: Giuseppe Caravello, Università degli Studi di Palermo, Italy

Received February 17, 2021; in final form April 24, 2021; published June 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Andrea Mariscotti, e-mail: andrea.mariscotti@unige.it

1. Introduction

The term "DC grid" may be considered a catch-all for various types of distribution networks that are being extensively used in a wide range of applications.
One of the advantages of DC distribution is the ease of integration of sources and loads, without the complications of phase-angle instability and coordination typical of AC applications. Such networks are operated at both medium voltage (MV) and low voltage (LV), with some standardized nominal voltage values. Compared to AC grids, in several cases DC grids can interface with pulsed loads more effectively, that is, with a faster response and without violating power quality (PQ) constraints [1]. The physical extension is variable: from some tens of metres within a large room, a smart house or an electrified vehicle; hundreds of metres between buildings, at a campus, or on board ships; and up to some km for distribution within smart residential districts, technological parks, etc. [2], [3]. Some representative examples of such networks are as follows. Smart data centres pursue a DC distribution perspective, in particular to ensure a high level of availability, using capacitors, batteries and autonomous renewable energy sources (photovoltaic and fuel cells, to cite the most common ones operating at DC), covering a wide time scale of events (fluctuations, dips, short and long interruptions) [4]. MV power distribution on board ships features various types of loads with different dynamics and power absorption levels [1], [5], [6]. Various arrangements may be adopted: split DC and zonal bus types, different sub-bus solutions for generators, propulsion loads and PPLs, etc. [6]. The extension of the network is of course limited by the physical size of the ship (up to about 300 m), but cabling and routing may be quite complex. Another example of LV/MV application characterized by dynamic moving loads is the wide set of railways, metros, tramways and trolley buses, all supplied from a catenary (or third-rail) system fed by rectifier substations [7], [8]. The line voltage is generally standardized to a set of nominal values of 600, 750, 1500 and 3000 V.
These networks feature the largest extension, as they cover entire cities, regions and countries. They are sectioned, however, into smaller portions, mainly for exigencies of maintenance and operation, besides control of the supply voltage. The network frequency response is nevertheless peculiar, with significant variations of the network impedance and resonances appearing already at low frequency [9]. Loads with large nominal power and significant dynamics have a direct impact on the line voltage (subject to appreciable variations consequential to load cycles) and can compromise network stability [10]. As in all distribution networks, the primary quantity is the voltage available at the equipment terminals, but for a complete assessment of the network-load interaction, current and power should also be evaluated. In particular, the current waveform is specifically involved in phenomena such as inrush, identification of faults and, in general, control of the interface converters by feed-forward methods attempting dynamic impedance control.

Abstract: DC grids can effectively feed pulsed power loads (PPLs), integrating local energy storage to minimize the impact on other connected loads and implementing buffered sub-grids to isolate susceptible loads even more. The identification of regulatory limits for PPL integration and DC grid response, and the assessment of PPL impact, necessitate suitable power quality (PQ) metrics. Existing indexes and limits (e.g. ripple and distortion, voltage fluctuation) are compared to other metrics, derived from control methods and from knowledge of the peculiar characteristics of PPLs and their interaction with DC grids in some examples. The objective is a unified approach for PQ metrics suitable for a wide range of DC grids and in particular for the behaviour of PPLs.
The power profile, instead, is the primary quantity for the power and energy budget, as well as for prediction and control [11]. In general, the impact of pulsed power loads (PPLs) is reduced by a range of techniques suitable for DC grids, whose applicability may depend on the specific grid characteristics:
• first of all, load dynamics may be limited at the origin, e.g. by limiting the rate of rise of the power demand, compatibly with the load mission and characteristics; a wide range of techniques has been proposed, all falling under a unique category that may be called "profile-based control";
• additional passive energy storage devices (e.g. batteries, supercapacitors, etc.) may be installed, capable of feeding the supplementary power with the required rapidity and, at the same time, of damping the network swell (consequential to load release) and recovering the energy in the case of regenerative behaviour;
• the role of passive energy storage devices may be backed up by the so-called "DC springs" and other active compensation devices.

The work is thus structured starting from the PPL characteristics in terms of electrical behaviour and interaction with the DC grid (Section 2). Section 3 then focuses on metrics suitable to describe such interaction quantitatively and on the impact on DC grid operation and PQ. These metrics are derived from previously proposed PQ indexes, as well as from considerations regarding current and power absorption profiles. Section 4 summarizes the normative PQ requirements for DC grids in various applications that are characterized by the presence of PPLs, such as avionics, shipboard systems and railways to various extents. Section 5 then evaluates the performance of the proposed metrics, using simulated and measured data.

2. Pulsed power loads and network interaction

Pulsed power loads (PPLs) with repeated large variations of the load current represent an issue for DC grid voltage stability and regulation [12].
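The rate-of-rise limitation mentioned among the mitigation techniques above can be illustrated by a simple slew-rate limiter applied to the power demand. This is a generic sketch, not a method taken from the paper; the step size and rate limit are arbitrary illustration values:

```python
def limit_rate_of_rise(p_demand, p_prev, dt, max_rise_w_per_s):
    """Clamp the increase of the power demand to a maximum rate of rise.
    Decreases are passed through unchanged (load release is not limited)."""
    max_step = max_rise_w_per_s * dt
    if p_demand - p_prev > max_step:
        return p_prev + max_step
    return p_demand

# Example: a 0 -> 30 kW step limited to 100 kW/s, sampled every 10 ms
p, profile = 0.0, []
for _ in range(40):
    p = limit_rate_of_rise(30e3, p, dt=0.01, max_rise_w_per_s=100e3)
    profile.append(p)
# the demand ramps at 1 kW per 10 ms step and settles at 30 kW after 0.3 s
```

In a real converter this clamp would act on the current or power reference of the PPL front-end, trading pulse rise time for reduced grid disturbance.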
For complex systems with non-essential and essential loads, the adoption of a DC zonal distribution system is commonplace, dividing the DC grid on board into zones separated by interface converters. Each sub-grid adopts specific solutions where necessary for critical loads, de facto confining and limiting the propagation of transients and disturbance. Since power is often fed by intrinsically AC sources (such as gas turbine and diesel engine alternators), heavy loads may instead be fed directly from a primary AC grid, from which the other zoned DC sub-grids are derived, one or more dedicated to specific PPLs and local energy storage. PPLs are usually interfaced with a DC/DC front-end converter, based on a variety of topologies, depending also on power rating, voltage level and required dynamics. PPLs for a wide range of applications may be exemplified as follows.
• Devices and apparatus that are part of scientific experiments in nuclear physics, such as the magnetic circuits of acceleration and bending at synchrotrons, the Large Hadron Collider, etc. In several cases, choices are a compromise between the desirable performance for new types of experiments, the reutilization of existing equipment (thus undergoing gradual modernization), and exigencies of power absorption, feasibility of protections, and continuity of service. DC distribution facilitates the direct or indirect connection of energy storage, ensuring both fast dynamic response and higher continuity of service, even supporting reconfiguration on the fly ("hot swap") to account for some amount of DC/DC supply going out of service [13].
• Radar systems connected to the aircraft DC grid, with power absorption following the transmission pulses. The DC grid on board aircraft is characterized by a limited power deployment, with flight control loads, constant power loads (such as the fuel pump), the said radar load, as well as modern radios.
the load profile described in [14] may be taken as an exemplifying case: current pulses of 4 ms duration and peak power of 33.6 kw, with respect to a steady power absorption of one third (11.2 kw); peaks are arranged in pulse bursts with a spacing of 10 ms (5 peaks), with a repetition interval of 200 ms. modern aircraft may be equipped with more than one radar device, for which coordination for interleaved operation may be a strategic option. modern radio systems with advanced characteristics of spectrum exploitation, long range and traffic acceptance flexibility (especially for military applications) may share a power absorption pattern similar to radars.
• rail guns and electromagnetic weapons, such as high-power laser and microwave beams, typically located onboard ships [15]. all these weapon systems deliver peak power levels in the order of 1 gw or more through a pulse forming network, whose charging from the ship dc grid absorbs power levels in the order of 10 mw – 30 mw for a duration of some seconds. during intensive use, dc grid loading is almost continuous, with the delivery of fast energy pulses that may be easily decoupled thanks to their much higher characteristic frequency.
• electrified transports featuring high-performance units with a dynamic power profile. two quite different modern transportation means fall into this category: urban high-speed, high-pace electrified guideway systems, and electric vehicles with dynamic wireless power charging, with pulsed power absorption and fast charging times when passing charging points. both can be assumed to be fed by a dc grid, although the physical extension is quite different: dynamic wireless power transfer requires local dc distribution buses up to some hundreds of meters [16], whereas guideway systems, tramlines and railways of various kinds feature supply sections of several km [17].
for guideway systems, such as metros, it must be pointed out that ppls at load release, or when implementing regenerative braking, may cause a significant increase of the line voltage, with consequences for electrical safety, especially for passengers and people standing at the platform and near platform screen doors [18].

3. disturbance metrics

for both the implementation of the control policies and the assessment of performance, suitable metrics of the disturbance are necessary. such metrics can be applied to voltage, current, power, or a combination thereof (multi-criteria metrics). they were preliminarily discussed and compared to relevant standards in the imeko tc4 conference paper [19], of which this work is the continuation, with focus on dynamic ppl loads and network stability. from the normative viewpoint [20]-[25], transient electrical phenomena may be classified as:
• macroscopic voltage variations lasting for tens of ms or longer
• shorter voltage variations, related to switching transients of variable amplitude
• periodic variations, often described in terms of harmonic distortion or ripple, covering a wider frequency range.

acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 155

it is commonplace to evaluate all these phenomena by the instantaneous difference with respect to the steady-state vdc value [1], [12], applying the concept of ripple to all voltage variations, both transient and repetitive. the mil-std-1399-300 standard [26] has recently extended the concept of distortion and ripple to the active power absorption profile itself for ac grids onboard us navy ships. the concept is interesting for dc grids, where the concept of absorbed power is more straightforward and does not involve reactive power.
in the following, voltage fluctuations and transient variations are considered first, discussing alternative metrics other than ripple; periodic variations and ripple are then considered, including the extension to cover aperiodic variations and transients.

3.1. voltage fluctuation

as anticipated, the network response and the quality of the delivered energy (power quality in a wide perspective) is usually measured by indexes applied to the line voltage, measuring the voltage spread during fluctuations to compare with maximum values and time-amplitude limit curves. line transients may thus be evaluated by a measure of their pure amplitude a (e.g. peak or average value), equivalent time duration tx (e.g. half-amplitude duration t50), and combined amplitude-duration. the first index, which has a strong relationship with ac networks, is the rms value, which when defined for aperiodic phenomena corresponds in reality to a measure of the area of the signal. thus, focusing on aperiodic phenomena for generality, the combination of amplitude and duration gives two measures of the intensity of the phenomenon: area and energy. taking a time interval [t1, t2] and considering the ac portion x(t) of the network quantity (voltage or current), having first subtracted an estimate of the steady value, we obtain:

$S = \int_{t_1}^{t_2} x(t)\,\mathrm{d}t, \quad E = \int_{t_1}^{t_2} |x(t)|^2\,\mathrm{d}t, \quad P = \frac{E}{t_2 - t_1} \, .$ (1)

energy is calculated and considered in a signal processing perspective, as the square of a signal x over the time interval [t1, t2]. then the power p is just obtained as the energy e divided by the duration t2 − t1 of the time interval. it is easy to see that combining voltage and current corresponds to the squared rms. the mean square time duration may also be defined, measuring the interval where the energy is concentrated:

$\tau^2 = \int_{t_1}^{t_2} t^2 |x(t)|^2\,\mathrm{d}t \Big/ \int_{t_1}^{t_2} |x(t)|^2\,\mathrm{d}t \, .$
(2)

area (or impulse strength) and energy can be used to evaluate the impact on loads: the missing area or energy of a negative transient (a temporary reduction of the line voltage) will cause a reduction of the relevant quantities at the load side. the consequence is a diminished performance that may vary in duration and severity depending on the load type. such impact is partially compensated by filters (mainly capacitors), storage devices (e.g. batteries and supercapacitors) and dc springs over different time scales, covering fast and slow variations. in addition, control at the source with a reserve of energy, driven by a suitable index of the network quality of power, can support cooperative control.

3.2. voltage ripple

dc grids are generally characterized by means of “ripple”, which en 61000-4-17 [27] describes as composed of “power frequency or its multiple 2, 3 or 6”, focusing with a limited view on the mechanism of production by classic ac/dc conversion (based e.g. on diode and thyristor rectifiers). this scheme is well represented in dc electrified transports, where the ripple of substation rectifiers is clearly visible: its relevance is in terms of line voltage fluctuation, reflected also in the track voltage, where a significant amount of ripple may be relevant for the assessment of touch voltage and body current [18]. ripple is thus a repetitive phenomenon superimposed on the dc steady value. modern ac/dc and dc/dc converters and poly-phase machines for renewable energy sources are used extensively to improve pq and for ease of interfacing in modern micro-grids and smart grids. this necessarily leads to a reformulation of the concept of ripple into a more general definition accounting for non-harmonically related components, possibly non-stationary.
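returning to the transient indexes of section 3.1, equations (1) and (2) can be sketched for sampled records. this is a minimal python illustration under assumed names, with uniform sampling and rectangular integration assumed:

```python
# sketch of the section 3.1 indexes: area s, energy e, mean power p and
# mean-square duration tau2 of the ac part of a sampled network quantity.

def transient_indexes(x, t, x_steady):
    dt = t[1] - t[0]                          # uniform sampling assumed
    ac = [v - x_steady for v in x]            # subtract steady estimate
    s = sum(ac) * dt                          # area (impulse strength)
    e = sum(v * v for v in ac) * dt           # energy of the signal
    p = e / (t[-1] - t[0])                    # mean power on [t1, t2]
    tau2 = sum(ti * ti * v * v for ti, v in zip(t, ac)) * dt / e
    return s, e, p, tau2

# toy case: a 10 ms rectangular dip of 5 v below a 100 v steady value
t = [i * 1e-3 for i in range(20)]
x = [100.0 - (5.0 if 5 <= i < 15 else 0.0) for i in range(20)]
s, e, p, tau2 = transient_indexes(x, t, 100.0)
```

for this dip, s is negative (the missing area seen by the load), e = 0.25 v² s, and the square root of tau2 is close to the 10 ms dip duration.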
the time domain definition of ripple is the maximum instantaneous voltage variation over a given time interval [28], [29], or directly the difference between the observed maximum and minimum voltage values [24], both weighted by the steady dc voltage value vdc. in this perspective ripple and voltage fluctuation are synonymous, provided that ripple is defined as the difference with respect to vdc, thus taking positive and negative values (above or below vdc) separately, which allows asymmetric limits for the voltage variation to be considered. it is observed that dc networks have a much lower harmonic content thanks to the large deployed capacitance and, in general, the lower impedance in the harmonic frequency range, as was demonstrated for railways in [30] by comparing harmonic power terms in ac and dc systems. the lower network impedance keeps harmonic voltage components low, while amplifying current distortion at higher frequency, as sources see a quasi-short-circuit condition. basically speaking, ripple may be calculated as the maximum of the peak-to-peak or peak excursion of the network voltage [12], [28], but other measures of it (rms, percentiles, ...) were proposed in the past [31]-[33]. ripple addresses two objectives at once, quantifying the spread of instantaneous values and the network distortion (ripple was defined both in the time and frequency domain [28], [29]). the explicit connection between ripple and the dft (including harmonics as such and other components) was given in [28] with the index dlfsd and in [31] with rdf. ripple can describe the quality of the delivered voltage in terms of fluctuations and excursion, as well as assess significant load steps and inrush phenomena when applied to the current waveform.

3.3. power trajectory

ppl relevance is caused by the sudden power absorption and the line voltage drop consequent to the flow of the absorbed current.
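the time-domain ripple measures of section 3.2 (signed deviations from vdc, peak ripple, and relative max-min excursion) can be sketched as follows; a minimal python illustration with assumed names and values:

```python
# sketch of section 3.2 time-domain ripple measures: signed deviations
# from vdc, peak ripple, and relative peak-to-peak excursion.

def ripple_indexes(v, vdc):
    r_pos = max(x - vdc for x in v)          # largest excursion above vdc
    r_neg = min(x - vdc for x in v)          # deepest excursion below vdc
    r_peak = max(r_pos, -r_neg)              # peak ripple, [23]-style
    r_pp = (max(v) - min(v)) / vdc           # relative excursion, [24]-style
    return r_pos, r_neg, r_peak, r_pp

# toy record around a 270 v steady value
v = [268.0, 271.5, 274.0, 266.5, 270.0]
r_pos, r_neg, r_peak, r_pp = ripple_indexes(v, 270.0)
```

keeping r_pos and r_neg separate supports the asymmetric limits mentioned above, while r_peak and r_pp collapse them into single numbers.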
grid voltage will change by tens of % when the current increases by orders of magnitude, so that to a first approximation metrics based on power may be replicated for current with similar results. the ppl input power as a function of time pp(t) is commonly named “power trajectory”. it has an approximately trapezoidal shape with amplitude p, comparable rise and fall times tr and tf, and top value duration tp; pulses of overall duration Tp repeat in bursts of duration TB, that may occur more than once forming a train of bursts, one train every TT seconds, as sketched in figure 1. the radar load depicted in section 2 fits this description with tp = 4 ms, Tp = 10 ms, TB = 50 ms and TT = 200 ms. for our purpose the following elements are particularly important and characterize the power trajectory: the top value p, the rate of rise p/tr, the duration of the power pulse tp, and the repetition intervals TB and TT. the peak power p establishes the loading of the source and the grid voltage reduction, taking one pulse alone in almost steady conditions. the rate of rise instead indicates the dynamics of the power demand and the rapidity with which the dc grid and its energy storage must feed the load to sustain the network voltage. a metric that weights the rate of rise is the time derivative of the power trajectory. the pulse duration tp, together with the absorbed power p, is relevant from an energy viewpoint, indicating the level of depletion of the energy storage during one power absorption event. the two parameters may be combined by calculating the area ap = p tp of the power trajectory pulse, which as a general metric was introduced in section 3.1. TB and TT indicate instead the periodicity and the amount of time available to recharge the energy storage devices between two pulses or two bursts of pulses. the number of pulses in one burst and of bursts in one train come directly as np = TB/Tp and nb = TT/TB.
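the derived quantities listed above follow by simple arithmetic; a sketch using the radar figures of section 2 (the rise time of 0.5 ms is an assumption, the other values are from [14], and all variable names are illustrative):

```python
# sketch of the power-trajectory parameters of section 3.3 for the
# radar example: 4 ms top duration, 10 ms pulse period, 50 ms burst
# duration, 200 ms train period, 33.6 kw peak.

P_kw = 33.6                       # peak power
tr_ms = 0.5                       # rise time (assumed value)
tp_ms, Tp_ms, TB_ms, TT_ms = 4.0, 10.0, 50.0, 200.0

rate_of_rise = P_kw / tr_ms       # kw/ms, dynamics of the demand
Ap_kj = P_kw * tp_ms * 1e-3       # pulse area ap = p*tp, in kj
n_p = TB_ms / Tp_ms               # pulses per burst
n_b = TT_ms / TB_ms               # bursts per train
burst_energy_kj = Ap_kj * n_p     # storage depletion over one burst
```

one burst thus depletes about 0.67 kj from the local storage, to be recovered over the remaining part of the 200 ms train period.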
combined with the area ap, which has the unit of measure of energy, we obtain the total depletion in terms of number of pulses, and the rate of depletion of the energy storage, which must be matched by the source capability. the time rate of change of the power trajectory was proposed as the reference quantity of a disturbance metric to apply to ppls [11]:

$d_P = \sqrt{\frac{1}{T_P} \int_0^{T_P} \left( \frac{\mathrm{d}P_P(t)}{\mathrm{d}t} \right)^2 \mathrm{d}t} \, .$ (3)

the expression for dp returns the average square derivative of the power trajectory pp(t); in other words, this is the rms value of the derivative of the power trajectory. considering the sinusoidal components composing it, the time derivative corresponds to a multiplication of each component by ω; the structure of this penalization is then similar to the one adopted for the high-voltage stress of capacitor insulation [34] (the reference quantity being there the electric field or the terminal voltage). by considering the power trajectory expressed as pp(t) = v(t)i(t), the derivative and square of the product may be further developed. the derivative of a product is equal to the sum of the two mixed products of one of the quantities with the derivative of the other. each derivative is the multiplication by ω of the quantity expressed as a fourier series:

$d_P' = \sqrt{\frac{1}{T_P} \int_0^{T_P} \left( \sum_m \tilde{v}_m(t) \sum_n \omega_n \tilde{\imath}_n(t) + \sum_m \omega_m \tilde{v}_m(t) \sum_n \tilde{\imath}_n(t) \right)^2 \mathrm{d}t}$

having identified for brevity with $\tilde{v}_m(t)$ and $\tilde{\imath}_m(t)$ the fourier series terms, each with an exponential term and the peak value of the voltage or current component. it is also observed that, for the aim of calculating energy, only the terms with a non-null integral over the longest period are retained; that is, the mixed products of $\tilde{v}_m(t)$ and $\tilde{\imath}_n(t)$, m ≠ n, give a null contribution and only those with the same harmonic index are retained.
the peak values in $\tilde{v}_m(t)$ and $\tilde{\imath}_n(t)$, once passed through the integral over $T_P$, are transformed into the rms values vm and in, with a scaling of 1/√2 each (i.e. 1/2 overall) that compensates the factor of 2 resulting from the sum of identical terms:

$d_P' = \sqrt{\left( 2 \sum_n \omega_n \tfrac{1}{2} V_n I_n \right)^2} = \sum_n \omega_n V_n I_n = \sum_n \omega_n P_n \, .$ (4)

the resulting alternative power rate metric d′p is thus a harmonic active power term multiplied by the pulsation, representing a sort of weighting function. harmonic active power was analysed for dc and ac railway systems in [30], finding that the relevant terms are located at low frequency, with a relevance of a fraction of % compared to the total exchanged energy. such terms are larger for ac than for dc systems, which feature in general a larger deployed capacitance (and may also be backed up by various energy storage systems).

3.4. power distortion

treating the absorbed power profile (i.e. the power trajectory pp(t)) as a signal, the equivalent of the total harmonic distortion may be calculated, as well as the intensity of the spectral components of pp(t) for specific frequency intervals. this is the approach recently proposed in the mil-std-1399-300 interface standard for ac grids onboard us navy electric ships. power distortion (pd) is calculated for the active power component only (called “real power”):

$PD = \sqrt{\sum_n \left( P_n \big/ \sqrt{2} P_{av} \right)^2} \, .$ (5)

such a metric speaks of the peak value of active power in a given frequency interval (or bin n) and extracts its rms value by dividing by √2. this metric is preliminarily considered here for dc systems by simply analysing the spectral behaviour of pp(t), compared to the expression derived for d′p in (4), where the active power terms are arithmetically summed with the pulsation included as a relative weight.
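equations (3) and (5) can be checked numerically on a toy power trajectory; a python sketch in which the 20 kw average with a 5 kw-peak 50 hz oscillation is an assumption for illustration:

```python
import math

# sketch of the power-rate metric dp of eq. (3), the rms of the time
# derivative of pp(t), and of the power distortion pd of eq. (5).

def d_p(p, dt):
    """rms of dp/dt, central differences on the interior samples."""
    d = [(p[i + 1] - p[i - 1]) / (2 * dt) for i in range(1, len(p) - 1)]
    return math.sqrt(sum(x * x for x in d) / len(d))

def power_distortion(p_peaks, p_av):
    """pd = sqrt(sum_n (p_n / (sqrt(2) p_av))^2), p_n peak values."""
    return math.sqrt(sum((pn / (math.sqrt(2) * p_av)) ** 2 for pn in p_peaks))

# toy trajectory: 20 kw average with a 5 kw-peak 50 hz oscillation
fs = 10000.0
p = [20e3 + 5e3 * math.sin(2 * math.pi * 50 * i / fs) for i in range(2000)]
dp = d_p(p, 1 / fs)                     # close to 2*pi*50 * 5e3 / sqrt(2)
pd = power_distortion([5e3], 20e3)      # single 50 hz component
```

for a single component, dp grows linearly with the oscillation frequency, reflecting the pulsation weighting discussed above, whereas pd is frequency independent.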
considering the typical behaviour and response of the elements of a dc grid (energy storage devices, filters, converter controls), we briefly observe that an increased weight linear with frequency does not express a practical scenario, where high frequency is in reality filtered by capacitors, whereas medium frequency components may fall beyond the control bandwidth.

figure 1. ideal power trajectory pp(t) of a ppl, absorbing power in bursts of individual approximately trapezoidal pulses.

4. normative requirements

4.1. voltage variations, swells and sags (dips), and interruptions

overvoltages and undervoltages may be named swells and sags (or dips) by analogy with ac networks. standards specifying requirements for pq, performance and reliability of dc distribution divide among the areas of application: mil-std-704f [23] for aircraft; iacs reg. e5 [22] and ieee std. 45.1 [21] for ships; en 50155 [24] onboard rolling stock; itu-t l.1200 [25] for telecommunication and data centers. in general, a distinction is needed from other long-term transients (long interruptions and fluctuations), as well as from very short-term phenomena, usually classified as spikes or surges. these latter, in dc grids with no overhead lines, are in general of modest amplitude, thanks to the large deployed capacitance (batteries and other energy storage devices, output filters of interfacing converters, etc.) and the consequent low transient impedance. the definitions of ieee std. 1159 [35] and en 61000-4-30 [36] for ac systems indicate a voltage dip or swell when the rms crosses a threshold, its duration quantified by measuring the time interval between two consecutive crossings [35], [37]. this definition based on the rms value is possible because swells and dips are defined for durations > 1 cycle. the analogy with dc systems is not perfect, and various time scales for the estimate of the transient and the related steady value may be used.
this will be shown with an example in section 5. table 1 shows limits and reference values for voltage fluctuations, swells, sags and interruptions taken from the most relevant standards for the environments of application of the ppls reviewed in section 2. figure 2 gives insight in graphical form into the time-amplitude limits of mil-std-704f and en 50155 for aircraft and railway applications, respectively. faster transients (with shorter duration and rise and fall times) are more application specific and may depend e.g. on switching operations. a peculiar source of repetitive fast transients in electrified transportation systems are electric arcs, the known byproduct of the current collection mechanism, occurring at small discontinuities of the catenary line, during pantograph detachment in dynamic conditions, and with adverse weather (e.g. ice) [38], [39]. such a phenomenon is also relevant from an energy consumption viewpoint, in principle directly (as a dissipative phenomenon in the arc itself and the involved components), but most of all indirectly (interfering with the operation of the braking chopper of dc rolling stock) [40].

4.2. ripple, harmonics and periodic variations

the en 61000-4-7 [41] is a well-structured and complete standard that covers methods and algorithms quantifying spectral harmonic components, including interharmonics. however, the underlying assumption is always that of the existence of a fundamental, even for interharmonics, which e.g. combine mains and motor frequency in variable frequency drives, or two mains frequencies in interface converters. as observed in section 3.2, for dc grids it is always assumed that the dc supply comes from ac mains upstream, although technology advancement is characterized also by autonomous dc grids, supplied by sources that are intrinsically dc, such as fuel cells and photovoltaic panels.
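the threshold-crossing definition of dips recalled in section 4.1 can be sketched for a dc record; a minimal python illustration (the 90 % threshold and the names are assumptions, and in practice the record would first be smoothed):

```python
# sketch of dip detection by threshold crossing: a dip starts at the
# downward crossing of depth*vdc and ends at the next upward crossing.

def detect_dips(v, dt, vdc, depth=0.9):
    """return (start time, duration) pairs for dips below depth*vdc."""
    thr, dips, start = depth * vdc, [], None
    for i, x in enumerate(v):
        if x < thr and start is None:
            start = i                          # dip begins
        elif x >= thr and start is not None:
            dips.append((start * dt, (i - start) * dt))
            start = None
    return dips

# toy record: 270 v steady with a 25 ms reduction to 80 % of vdc
vdc = 270.0
v = [vdc] * 10 + [0.8 * vdc] * 25 + [vdc] * 10
dips = detect_dips(v, dt=1e-3, vdc=vdc)
```

as observed in the text, with slow and non-uniform slopes the crossing instants become noise sensitive, hence the need for accurate noise removal before applying such a criterion.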
it is thus evident that concepts such as distortion and spectrum components, and suitable limits, should be independent from a purely harmonic perspective:
• distortion becomes the amount of ac ripple superposed on the dc steady value vdc, rather than the composition of the amplitudes of the harmonic components; this definition goes back to ripple, which candidates itself as a flexible and all-comprehensive metric
• spectrum components are considered for a continuous frequency axis, although the resolution frequency limitation is still relevant; this aspect is mentioned as mil-std-704f poses limits not only on distortion, but also on the spectral components [19].

table 1. limits and reference values for transient events (voltage variation, dip and interruptions) (e=emission, i=immunity, a=ambient specification).
standard | phenomenon | type | nom. volt. un in v | ref. values
mil-std-704f | voltage var. | a | 28, 270 | see figure 2
en 61000-4-29 | voltage var. | i | 24-110 | 85-120 %, 0.1-10 s
en 61000-4-29 | voltage dip | i | 24-110 | 40, 70 %, 0.01-1 s
en 61000-4-29 | voltage interr. hiz & loz | i | 24-110 | 0 %, 0.001-1 s
iacs | voltage var. | i | ≤ 1 kv | 95-105 %
en 50155 | voltage var. | a | 24-110 | see figure 2
en 50155 | voltage interr. | a | 24-110 | 0 %, 0.01-0.03 s
l.1200 | voltage var. | a | 300, 380 | un-400, 1 min; un-260, 1 min; un-410, 1 s; un-420, 10 ms
l.1200 | voltage dip | i | 300, 380 | 40 %, 0.01 s
l.1200 | voltage interr. | i | 300, 380 | 0 %, 0.01 s (loz), 1 s (hiz)

figure 2. profile of allowed transient overvoltage / undervoltage for (a) avionic (mil-std-704f) and (b) railway (en 50155) applications.

table 2 summarizes normative limits and reference values for ripple and distortion.
two main quantities can be identified in the various standards that weight distortion and voltage variations:
• the distortion d is the rms value of the ac components, with the derived distortion factor df = d/vdc
• the ripple r is the maximum absolute difference between an instantaneous value and the steady value vdc [23]; an alternative definition is the difference of the maximum and minimum of the line voltage divided by their half sum [24]; in some cases ripple is then expressed as an rms quantity, so that for clarity the ripple r proper is considered a “peak quantity”.
distortion may evidently accompany ppls that are interfaced through static converters, but their most relevant feature is their impulsive and possibly repetitive power absorption profile. for the considered dynamics, this profile may give rise to voltage and current components that fall under the spectrum limitation at low frequency. for aircraft, in fact, mil-std-704f establishes limits on distortion components starting at 10 hz, unrelated to a concept of harmonics of a fundamental component.

5. results and discussion

before going into the various cases, it should be remembered that in general the voltage and current profiles characterizing loads and ppls have exponential profiles at the rising and falling edges, simply due to the most common behaviour of controls and of the electric network, featuring a dominant pole or two damped complex poles. shorter spikes may be present, but their origin is exogenous, as they are caused by lightning and fault induced phenomena.

5.1. pq indexes for typical ppl waveforms

the behaviour of the metrics for transients (s, e and τ²) and of the ripple index, discussed in sections 3.1 and 3.2 respectively, is analysed with respect to two typical ppl transient waveforms. case 1, shown in figure 3, includes a voltage reduction due to a significant current absorption and a fast transient (swell) following the release of the load, which returns to a negligible current absorption.
the transient response is stable and characterized by a first-order exponential. as evident, the voltage reduction is slow and characterized by local changes of slope, visible in the moving average profile superposed on the original waveform. as anticipated, a simple criterion based on a crossing threshold may face difficulties when the rate of change is very slow and the slope is not uniform. accurate and extensive noise removal is thus a minimum requisite. case 2, shown in figure 4, shows the voltage and current waveforms of a periodic ppl, with the initial transient followed by the oscillatory response of the generator control and a variable amount of high-frequency ripple, which may be caused by the interface converters or may be the symptom of an incipient resonance. this waveform is quite similar to that of electric arc phenomena recognized in dc electric railways [39], [42], where electric arcs trigger an impulsive reduction or increase of the line voltage (depending on whether the traction converter is absorbing power in a traction condition or injecting power into the network during regenerative braking). the waveform in fact, besides a first pulse, features the oscillation of the rolling stock onboard filter (usually between 10 and 20 hz). substation ripple at 300 and 600 hz, besides a more or less intense component at 100 hz, is always present and appears as soon as the major transient response components have vanished. for case 1, the values calculated for the two portions of the waveform (voltage reduction with negative slope and subsequent swell), and altogether, are reported in table 3. the swell, although not so rapid, has a larger τ than the five-times-longer voltage reduction that precedes it. case 2 is instead evaluated both for the three indexes s, e, τ², and for ripple with a band-limited approach as well, in order to separate the observed oscillations at lower and higher frequency. table 2.
limits and reference values for ripple and distortion (e=emission, i=immunity, a=ambient specification).
standard | phenomenon | type | nom. volt. un in v | quantity | ref. values
mil-std-704f | distortion | a | 28 | d | 3.5 %
mil-std-704f | ripple | a | 28 | r | 1.5 v
mil-std-704f | distortion | a | 270 | d | 1.5 %
mil-std-704f | ripple | a | 270 | r | 6 v
iacs ur e5 | ripple | i | | rrms | 10 %
en 50155 | ripple | a | 24-110 | r | 5 %
en 61000-4-17 | ripple | i | 24-110 | r | 1, 2.5, 5, 7.5 %

figure 3. network voltage following a significant current absorption by the ppl local storage, with a sudden release at the end causing an exponential transient increase (swell).

figure 4. ppl profile with periodic pulsation and superposed ripple following the initial power absorption: voltage (black) and current (grey) use the left axis, power (red) uses the right axis.

since case 2 shows four absorption pulses, the index values are commented on for equivalence between apparently similar events. the results are shown in table 3, where the mean duration, and not its square, is reported for ease of comparison with the signal duration and time intervals. the first voltage pulse has visibly fewer high-frequency oscillations and a slightly larger maximum value (the difference is about 3 v, so less than 1 %), but no other apparent differences. instead, the calculation of the energy points out a value 8 % larger than for the remaining pulses; analogously, the mean duration is shorter by about 10 %, meaning that the energy concentration is higher. the physical explanation may be a deeper charging of the internal storage, which is then less depleted during the successive pulses. indeed, the current peak in the first pulse is almost 4 % higher. case 2 is then further analysed for what concerns the ripple components, which are visibly located at two distinct frequency intervals, around about 25 and 200 hz, and whose intensity changes between periods of the pulsating voltage and current waveforms of figure 4.
the band-limited ripple rbp [29] is thus calculated over different frequency intervals, using a windowed dft (hann window). from a preliminary evaluation of the band occupation, the frequency intervals should be separated at about 15 hz and 50-100 hz. with a sample frequency fs = 2.048 khz (matched by applying resampling), the shown frequency resolution is 8 hz, to match the visible main oscillation preliminarily and qualitatively estimated at 24-25 hz. the spectra shown in figure 5 are cut at 400 hz. as briefly commented in the figure caption, the dft of pp(t) highlights more effectively the spectrum pollution that may be a symptom of instability and that in a practical case should be further investigated.

5.2. spectrum of the ppl pattern

the radar load profile [14], exemplified in section 2, is theoretically analysed for its frequency occupation, starting from the general ppl waveform of figure 1. the characteristics of the waveform are summarized as: current pulses of tp = 4 ms duration and peak power of 33.6 kw, with respect to a steady power absorption of one third (11.2 kw); peaks are arranged in pulse bursts with a spacing of Tp = 10 ms (5 peaks), with a train repetition interval of TT = 200 ms. the coefficients of the fourier series of a symmetrical trapezoidal pulse are reported in (6) [43]:

$c_0 = \frac{P}{p}, \quad q = \frac{T_p}{t_r}, \quad p = \frac{T_p}{t_p + t_r},$
$c_k = \frac{2P}{p} \cdot \frac{\sin(k\pi/q)}{k\pi/q} \cdot \frac{\sin(k\pi/p)}{k\pi/p} \, ,$ (6)

having defined p and q as the ratios of the single pulse duration Tp to the half-value pulse duration tp + tr and to the rise (or fall) time tr, respectively. the resulting spectrum is a line spectrum, with components repeating as per the pulse spacing and modulated by the ck value. a more complex expression may be derived without the simplifying assumption of equal rise and fall times. however, as evident from the curve shape of figure 4, the trapezoidal curve shape is only an approximation and should not be assumed for a more accurate assessment.
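equation (6) can be evaluated directly; a python sketch with the radar figures (the rise time of 0.5 ms is an assumed value, not given in [14]):

```python
import math

# sketch of eq. (6): fourier-series coefficients of a symmetrical
# trapezoidal pulse train, c0 = P/p and ck modulated by two sin(x)/x
# factors, with p = Tp/(tp + tr) and q = Tp/tr.

def trapezoid_ck(k, P, Tp, tp, tr):
    p = Tp / (tp + tr)                 # pulse period over half-value width
    q = Tp / tr                        # pulse period over rise time
    if k == 0:
        return P / p
    sn = lambda u: math.sin(u) / u     # sin(x)/x modulation
    return 2 * P / p * sn(k * math.pi / q) * sn(k * math.pi / p)

P, Tp, tp, tr = 33.6e3, 10e-3, 4e-3, 0.5e-3   # w and s
c0 = trapezoid_ck(0, P, Tp, tp, tr)           # dc term
ck = [trapezoid_ck(k, P, Tp, tp, tr) for k in range(1, 6)]
```

the line spectrum sits at multiples of 1/Tp = 100 hz; the first sin(x)/x factor rolls off the components beyond roughly 1/tr, while the second shapes the envelope according to the half-value duration.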
the spectrum reported in figure 5, instead, gives a more reliable indication of the average behaviour of the components over one tp interval. a more refined assessment of the peak power oscillation and its damping could be more effectively achieved by curve fitting in the time domain. in general, when the pulse duration is shorter and the oscillations are characterized by a variable instantaneous frequency, the precautions and verifications discussed in [45] should be considered.

5.3. power rate metric pd

the power rate metric dp is evaluated with respect to a variable amount of harmonic distortion, using a realistic pp(t) trajectory with a pulsed profile. the first three periods of the red power curve of figure 4, reconstructed from the measured voltage and current waveforms, are considered as cases of an increasing amount of distortion due to the visible superposed ripple. such cases are indicated as case 2-intv1, -intv2 and -intv3 for the first three pulse periods. synthetic results are shown in table 4, reporting both the value of dp and the values of the previously defined indexes s, e and τ for the selected intervals.

table 3. values of area s, energy e, and mean duration τ for the voltage waveforms of case 1 and case 2.
interval | s in v s | e in v² | τ in s
case 1-intv 1 | 202.07 | 2200 | 8.80
case 1-intv 2 | 55.51 | 1725 | 10.25
case 2-intv 1 | 3.61 | 203.56 | 0.0308
case 2-intv 2 | 3.56 | 193.58 | 0.0343
case 2-intv 3 | 3.58 | 193.24 | 0.0353

figure 5. dft of (a) v(t) and (b) pp(t), for the first three pulses of figure 4 (assigned colors black, blue and red). the main oscillation is clearly visible at about 24 hz, more stable in the v(t) profile; the high-frequency ripple is more recognizable in the pp(t) spectrum as a general increase, rather than isolated and visible in the 250 hz channel.

it must be underlined that the two metrics s and e applied to pp(t) agree and weight more the
larger peak value of the second and third pulses, but they ignore the increasing high-frequency ripple. the dp metric is based on the derivative of the fed signal (the power trajectory) and has a significant variability depending on the implementation. table 4 reports two versions of the calculation, using “diff” (the difference between adjacent vector components) and “gradient” (the central difference between components with index differing by 2): the difference is significant, especially because the two results do not have the same behaviour with respect to the interval number. as a note, since the derivative operator is insensitive to the steady value of the fed signal, pp(t) was fed to s and e also as an “ac signal”, having subtracted the estimated steady value for each interval. for clarity, s and e are thus indicated as s0 and e0.

6. conclusions

this work has considered the presence of pulsed power loads (ppls) in dc grids, with significant impact on network voltage variations and possibly on network stability. for the assessment of the impact and the rating of various design solutions and implementations, suitable power quality metrics should be identified and applied. such metrics have been discussed, focusing in particular on the quantification of network transients and accounting for the typical profile of absorbed power, named the power trajectory. these metrics may be divided into those weighting the characteristics of pulses and transients (area, energy, mean duration) and those dealing with characteristics that have been historically assigned to steady phenomena (ripple and distortion). the metrics have been applied to voltage and current waveforms, but also to the power trajectory itself, as recently proposed in mil-std-1399-300 [26] for ac grids onboard us navy ships.
it is interesting to note how the power curve in one case of a pulsed waveform with increasing distortion anticipates and allows detection of this phenomenon more effectively than processing voltage or current waveforms alone. the use and processing of power trajectory is particularly suited for ppls and opens new possibilities of revisiting “traditional” pq metrics with interesting performances and results. the better informative content of power with respect to voltage and current taken separately was pointed out in [46], when investigating internal and external sources of harmonic emissions in ac railways. references [1] y. luo, s. srivastava, m. andrus, d. cartes, application of disturbance metrics for reducing impacts of energy storage charging in an mvdc based ips, ieee electric ship technologies symposium (ests), arlington, va, usa, 22–24 april 2013, pp. 287-291. doi: 10.1109/ests.2013.6523748 [2] s. whaite, b. grainger, a. kwasinski, power quality in dc power distribution systems and microgrids, energies, 8 (2015), pp. 4378-4399. doi: 10.3390/en8054378 [3] v. a. prabhala, b. p. baddipadiga, p. fajri, m. ferdowsi, an overview of direct current distribution system architectures and benefits, energies 11 (2018), no. 2463, 20 pp. doi: 10.3390/en11092463 [4] b. r. shrestha, u. tamrakar, t. m. hansen, b. p. bhattarai, s. james, r. tonkoski, efficiency and reliability analyses of ac and 380 v dc distribution in data centers, ieee access 6 (2018), pp. 63305–63315. doi: 10.1109/access.2018.2877354 [5] s. g. jayasinghe, l. meegahapola, n. fernando, z. jin, j. m. guerrero, review of ship microgrids: system architectures, storage technologies and power quality aspects, inventions 2 (2017), pp. 1-4. doi: 10.3390/inventions2010004 [6] k. kim, k. park, g. roh, k. chun, dc-grid system for ships: a study of benefits and technical considerations, journal of international maritime safety, environmental affairs, and shipping, 2 (2018), pp. 1-12. 
doi: 10.1080/25725084.2018.1490239
[7] x. j. shen, y. zhang, s. chen, investigation of grid-connected photovoltaic generation system applied for urban rail transit energy-savings, ieee industry applications society annual meeting, las vegas, nv, usa, 7-11 october 2012, pp. 1-4. doi: 10.1109/ias.2012.6373995
[8] a. hinz, m. stieneker, r. w. de doncker, impact and opportunities of medium-voltage dc grids in urban railway systems, 18th european conf. on power electronics and applications (epe europe), karlsruhe, germany, 5-9 sept. 2016, pp. 1-10. doi: 10.1109/epe.2016.7695410
[9] p. ferrari, a. mariscotti, p. pozzobon, reference curves of the pantograph impedance in dc railway systems, ieee intern. conf. on circ. and sys., geneva, switzerland, 28-31 may 2000, pp. 555-558. doi: 10.1109/iscas.2000.857155
[10] p. shamsi, b. fahimi, stability assessment of a dc distribution network in a hybrid micro-grid application, ieee trans. smart grid 5 (2014), pp. 2527–2534. doi: 10.1109/tsg.2014.2302804
[11] j. m. crider, s. d. sudhoff, reducing impact of pulsed power loads on microgrid power systems, ieee trans. smart grid 1 (2010), pp. 270-277. doi: 10.1109/tsg.2010.2080329
[12] m. steurer, m. andrus, j. langston, l. qi, s. suryanarayanan, s. woodruff, p. f. ribeiro, investigating the impact of pulsed power charging demands on shipboard power quality, proc. of ieee electric ship technologies symposium, arlington, va, usa, 21-23 may 2007, pp. 315-321. doi: 10.1109/ests.2007.372104
[13] european synchrotron, ebs storage ring technical report, sept. 2018 [online]. available: https://www.esrf.eu/files/live/sites/www/files/about/upgrade/documentation/design%20report-reduced-jan19.pdf
[14] h. ebrahimi, h. el-kishky, m. biswass, m. robinson, impact of pulsed power loads on advanced aircraft electric power systems with hybrid apu, ieee intern. power modulator and high voltage conf. (ipmhvc), san francisco, ca, usa, 6-9 july 2016, pp. 434-437. doi: 10.1109/ipmhvc.2016.8012857
[15] j. j. a.
van der burgt, p. van gelder, e. van dijk, pulsed power requirements for future naval ships, proc. of 12th ieee intern. pulsed power conf., monterey, ca, usa, 27-30 june 1999, pp. 1357-1360, vol. 2. doi: 10.1109/ppc.1999.823779
[16] g. guidi, s. d'arco, j. a. suul, a modular and distributed grid interface for transformer-less power supply to road-side coil sections of dynamic inductive charging systems, ieee pels workshop on emerging technologies: wireless power transfer, london, uk, 18-21 june 2019, pp. 318-323. doi: 10.1109/wow45936.2019.9030614

Table 4. Values of the metric DP compared to S and E for case 2, applied to the power trajectory. S and E must thus be interpreted as the "area" and "energy" of pp(t) considered as a signal, and they do not have the same physical meaning. The derivative operator is insensitive to the steady value of pp(t), so to calculate S and E, pp(t) has been corrected by subtracting the steady value.

Case           | DP (diff) in WHz | DP (grad) in WHz | S0 in Ws | E0 in W²
Case 2-intv 1  | 1.84·10³         | 1.28·10³         | 457.8    | 5.55·10⁶
Case 2-intv 2  | 1.58·10³         | 1.25·10³         | 470.2    | 5.65·10⁶
Case 2-intv 3  | 1.86·10³         | 1.39·10³         | 497.8    | 6.89·10⁶

[17] s.
jung, h. lee, c. s. song, j.-h. han, w.-k. han, g. jang, optimal operation plan of the online electric vehicle system through establishment of a dc distribution system, ieee trans. pow. electron. 28 (2013), pp. 5878–5889. doi: 10.1109/tpel.2013.2251667
[18] a. mariscotti, electrical safety and stray current protection with platform screen doors in dc rapid transit, ieee trans. transp. electrif., (2021) (in print). doi: 10.1109/tte.2021.3051102
[19] a. mariscotti, overview of the requisites to define steady and transient power quality indexes for dc grids, proc. of 24th imeko tc4 intern. symp., palermo, italy, 14-16 september 2020. online [accessed 20 june 2021]: https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-23.pdf
[20] g. van den broeck, j. stuyts, j. driesen, a critical review of power quality standards and definitions applied to dc microgrids, applied energy 229 (2018), pp. 281-288. doi: 10.1016/j.apenergy.2018.07.058
[21] ieee std. 45.1, ieee recommended practice for electrical installations on shipboard — design, 2017.
[22] iacs, electrical and electronic installations – e5: voltage and frequency variations, 2019.
[23] mil-std-704f, aircraft electric power characteristics, w/change 1, 2016.
[24] en 50155, railway applications – rolling stock – electronic equipment, 2017.
[25] itu-t std. l.1200, direct current power feeding interface up to 400 v at the input to telecommunication and ict equipment, 2012.
[26] mil-std-1399-300, interface standard section 300, part 1: low voltage electric power, alternating current, 2018.
[27] en 61000-4-17, electromagnetic compatibility – part 4-17: testing and measurement techniques – ripple on d.c. input power port immunity test, 2009.
[28] m. c. magro, a. mariscotti, p. pinceti, definition of power quality indices for dc low voltage distribution networks, proc. of ieee intern. meas. techn. conf. imtc, sorrento, italy, 20-23 april 2006, pp. 1885-1888. doi: 10.1109/imtc.2006.328304
[29] a.
mariscotti, methods for ripple index evaluation in dc low voltage distribution networks, ieee intern. meas. techn. conf. imtc, warsaw, poland, 2-4 may 2007, pp. 1-4. doi: 10.1109/imtc.2007.379205
[30] a. mariscotti, characterization of active power flow at harmonics for ac and dc railway vehicles, proc. of ieee vehicle power and prop. conf., hanoi, vietnam, 14-17 october 2019, pp. 1-7. doi: 10.1109/vppc46532.2019.8952310
[31] j. barros, m. de apráiz, r. i. diego, power quality in dc distribution networks, energies 12 (2019), no. 848, 13 pp. doi: 10.3390/en12050848
[32] i. ciornei, m. albu, m. sanduleac, l. hadjidemetriou, e. kyriakides, analytical derivation of pq indicators compatible with control strategies for dc microgrids, proc. of ieee pes powertech, 18-22 june 2017, manchester, uk, pp. 1-6. doi: 10.1109/ptc.2017.7981179
[33] a. mariscotti, discussion of power quality metrics suitable for dc power distribution and smart grids, proc. of 23rd imeko tc4 intern. symp., xi'an, china, 17-20 september 2017, pp. 150-154. online [accessed 20 june 2021]: https://www.imeko.org/publications/tc4-2019/imeko-tc4-2019-032.pdf
[34] g. c. montanari, d. fabiani, the effect of nonsinusoidal voltage on intrinsic aging of cable and capacitor insulating materials, ieee trans. diel. electr. insul. 6 (1999), pp. 798-802. doi: 10.1109/94.822018
[35] ieee std. 1159, ieee recommended practice for monitoring electric power quality, 2019.
[36] en 61000-4-30, electromagnetic compatibility – part 4-30: testing and measurement techniques – power quality measurement methods, 2015.
[37] a. florio, a. mariscotti, m. mazzucchelli, voltage sag detection based on rectified voltage processing, ieee trans. pow. del. 19 (2004), pp. 1962–1967. doi: 10.1109/tpwrd.2004.829924
[38] g. crotti, d. giordano, p. roccato, a. delle femine, d. gallo, c. landi, m. luiso, a. mariscotti, pantograph-to-ohl arc: conducted effects in dc railway supply system, proc. of 9th ieee intern. workshop on applied meas.
for power systems (amps), bologna, italy, 26-28 september 2018, pp. 1-6. doi: 10.1109/amps.2018.8494897
[39] a. mariscotti, d. giordano, experimental characterization of pantograph arcs and transient conducted phenomena in dc railways, acta imeko 9(2) (2020), pp. 10–17. doi: 10.21014/acta_imeko.v9i2.761
[40] a. mariscotti, d. giordano, a. delle femine, d. gallo, d. signorino, how pantograph electric arcs affect energy efficiency in dc railway vehicles, ieee vehicle power and propulsion conf., gijon, spain, 18 november - 16 december 2020, pp. 1-5. doi: 10.1109/vppc49601.2020.9330954
[41] en 61000-4-7, electromagnetic compatibility – part 4-7: testing and measurement techniques – general guide on harmonics and interharmonics measurements and instrumentation, for power supply systems and equipment connected thereto, 2009.
[42] a. mariscotti, d. giordano, a. delle femine, d. signorino, filter transients onboard dc rolling stock and exploitation for the estimate of the line impedance, proc. of ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128903
[43] c. r. paul, bandwidth of digital waveforms, ieee emc society newsletter, no. 223, 2009, pp. 58–64. online [accessed 20 june 2021]: http://www.emcs.org/acstrial/newsletters/fall09/practicalpapers.pdf
[44] a. pratt, p. kumar, t. v. aldridge, evaluation of 400v dc distribution in telco and data centers to improve energy efficiency, 29th intern. telecommunications energy conf., rome, italy, 30 september - 4 october 2007, pp. 32-39. doi: 10.1109/intlec.2007.4448733
[45] l. sandrolini, a. mariscotti, impact of short-time fourier transform parameters on the accuracy of emi spectra estimates in the 2-150 khz supraharmonic interval, electr. pow. sys. res. 195 (2021), 107130. doi: 10.1016/j.epsr.2021.107130
[46] a.
mariscotti, experimental characterization of active and non-active harmonic power flow of ac rolling stock and interaction with the supply network, iet electr. sys. transp., (2021) (in print), pp. 109-120. doi: 10.1049/els2.12009

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 97-102

Acta IMEKO | www.imeko.org, December 2021 | Volume 10 | Number 4 | 97

Gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system

Emanuele Buchicchio1, Francesco Santoni1, Alessio De Angelis1, Antonio Moschitta1, Paolo Carbone1
1 Department of Engineering, University of Perugia, Italy

Section: Research Paper
Keywords: gesture recognition; sign language; machine
learning; CNN

Citation: Emanuele Buchicchio, Francesco Santoni, Alessio De Angelis, Antonio Moschitta, Paolo Carbone, Gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system, Acta IMEKO, vol. 10, no. 4, article 17, December 2021, identifier: IMEKO-ACTA10 (2021)-04-17

Section Editors: Umberto Cesaro and Pasquale Arpaia, University of Naples Federico II, Italy

Received October 15, 2021; in final form December 4, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Emanuele Buchicchio, e-mail: emanuele.buchicchio@studenti.unipg.it

1. Introduction

Sign language recognition (SLR) is a research area that involves gesture tracking, pattern matching, computer vision, natural language processing, linguistics, and machine learning [1]. The final goal of SLR is to develop methods and algorithms for building an SLR system (SLRS) capable of identifying signs, decoding their meaning, and producing some output that the intended receiver can understand (Figure 1). The general SLR problem includes the following tasks: 1) letter/number sign gesture recognition, 2) word sign gesture recognition, and 3) sentence-level sign language translation. Available literature surveys [2]-[5] report that recent research has achieved accuracy in the range of 80–100% for the first two tasks, using vision-based and sensor-based approaches. In this paper, we compare the performance of the two systems we developed: a vision-based system, and a hybrid system with a sensor-based data acquisition stage and a vision-based classification stage.

1.1. SLRS performance assessment

In the instrumentation and measurement field, machine learning is used for processing indirect measurement results.
An indirect measurement is defined in [6] as a "method of measurement in which the value of a quantity is obtained from measurements made by direct methods of measurement of other quantities linked to the measurand by a known relationship." In common machine learning (ML) jargon [7], the quantities that can be measured with a direct method are denoted as features x1, x2, …, xn, and the measurand as y. The measurand y is linked to the features by a functional relationship y = f(x1, x2, …, xn). The process of estimating f is known as "training". In the training process, the ML model is trained on the given dataset to find the best possible approximation according to the selected optimality criterion. The trained model produces an estimate of y in response to the feature vector x = (x1, x2, …, xn).

Abstract

Gesture recognition is a fundamental step towards enabling efficient communication for the deaf through the automated translation of sign language. This work proposes the use of a high-precision magnetic positioning system for 3D positioning and orientation tracking of the fingers and the palm of the hand. The gesture is reconstructed by the MagIK (magnetic and inverse kinematics) method and then processed by a deep-learning gesture classification model trained to recognize the gestures associated with the sign language alphabet. The results confirm the limits of vision-based systems and show that the proposed method, based on hand skeleton reconstruction, has good generalization properties. The proposed system, which combines sensor-based gesture acquisition with deep-learning techniques for gesture recognition, provides 100% classification accuracy, signer independent, after a few hours of training using the transfer learning technique on the well-known ResNet CNN architecture. The proposed classification model training method can be applied to other sensor-based gesture tracking systems and other applications, regardless of the specific data acquisition technology.
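The feature–measurand relationship described in section 1.1 can be illustrated with a minimal training sketch. The toy data and the linear model form y = a·x + b are assumptions made for illustration; the actual models used in this work are the classifiers described in section 2.

```python
# Minimal "training" sketch: estimate y = f(x) from direct measurements of
# a feature x paired with known values of the measurand y, using least
# squares on a linear model f(x) = a*x + b as the optimality criterion.

def train_linear(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b   # the trained model: an estimate of f

# Toy dataset where y is linked to the feature x by y = 2x + 1 (no noise)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
f_hat = train_linear(xs, ys)
print(f_hat(4.0))  # 9.0
```

In a classification setting the same scheme holds, except that y is a discrete class label rather than a continuous quantity, and the optimality criterion becomes a classification score.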
In the case of classification systems, the measurand y is the class to which an input vector x belongs. The most widely used performance metric for gesture SLRSs is classification accuracy, defined as the ratio of correct predictions to the total number of predictions. In this work, accuracy was adopted both for model benchmarking and as the model optimality criterion.

1.2. Sign language

Sign language (SL) is defined as "any means of communication through bodily movements, especially of the hands and arms, used when spoken communication is impossible or not desirable" [8]. Modern sign language originated in the 18th century, when Charles-Michel de l'Épée developed a system for spelling out French words with a manual alphabet and expressing whole concepts with simple signs. Other national sign languages were developed from this system and became an essential means of communication among the hearing-impaired and deaf communities. According to the World Federation of the Deaf, over 200 sign languages exist today, used by 70 million deaf people [9]. Sign language involves the use of facial expressions and different body parts, such as arms, fingers, hands, head, and body. One class of sign languages, known as fingerspelling, is limited to a set of manual signs, performed with one hand, that represent the letters of an alphabet [10]. The ASL signs for the letters of the alphabet are shown in Figure 2.

1.3. Vision-based vs. sensor-based approaches for hand tracking and gesture recognition

Many common devices and applications rely on tracking hands, fingers, or handheld objects. Specifically, smartphones and smartwatches track 2D finger position, a mouse tracks 2D hand position, and augmented reality devices like the Microsoft HoloLens 2 track the 3D pose of the finger.
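The classification-accuracy metric adopted in section 1.1 reduces to a simple ratio; a minimal sketch with hypothetical letter labels:

```python
# Classification accuracy: ratio of correct predictions to total predictions.
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Hypothetical ground-truth and predicted sign labels
y_true = ["A", "B", "L", "Y", "C"]
y_pred = ["A", "B", "L", "C", "C"]   # one misclassification out of five
print(accuracy(y_true, y_pred))      # 0.8
```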
In addition to SLR, many other applications rely on hand gesture recognition, such as augmented reality [12], assistive technology [13], [14], collaborative robotics [15], telerobotics [16], home automation [17], infotainment systems [18], [19], intelligence and espionage [20], and many others [21]. In this paper, we focus on recognizing the static hand gestures associated with the letters of the alphabet for fingerspelling. Both computer-vision-based and sensor-based approaches were implemented for sign language alphabet recognition. Hand feature extraction is a significant challenge for vision-based systems [11], because extraction is affected by many factors, such as lighting conditions, complex backgrounds in the image, occlusion, and skin color. Sensor-based gesture recognition systems are commonly implemented as gloves featuring various types of sensors. Sensor-based approaches have the advantage of simplifying the detection process and can help make the gesture recognition system less dependent on input devices. On the other hand, a disadvantage of sensor-based systems is that they can be expensive and too invasive for real-world deployment.

2. Vision-based sign language gesture recognition

Machine learning techniques are widely adopted for gesture classification tasks, and various public datasets are available for system performance assessment and benchmarking. The American Sign Language MNIST dataset [22], a flavor of the classic MNIST dataset [23] created for sign language gestures, is often used as a baseline. Other, more complex datasets such as [24], [25] are also available.

2.1. Classic machine learning and convolutional neural network on the MNIST dataset

The American Sign Language MNIST dataset is in a tabular format similar to the original MNIST dataset. Each row in the CSV file has a label and 784 pixel values ranging from 0 to 255, representing a single 28 × 28 pixel greyscale image. In total, there are 27,455 training cases and 7,172 test cases in this dataset.
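A sketch of how one row of the tabular dataset maps back to an image follows the column layout described above: a label followed by 784 pixel values. The row below is synthetic, not taken from the dataset.

```python
# Each CSV row of the American Sign Language MNIST dataset holds a label
# followed by 784 greyscale values (0-255) forming a 28 x 28 pixel image.
def row_to_image(row):
    label, pixels = row[0], row[1:]
    assert len(pixels) == 784, "expected 28 * 28 = 784 pixel values"
    # Slice the flat pixel list into 28 rows of 28 columns (row-major order)
    image = [pixels[r * 28:(r + 1) * 28] for r in range(28)]
    return label, image

# Synthetic row: label 3 followed by pixel values 0..783
row = [3] + list(range(784))
label, image = row_to_image(row)
print(label, len(image), len(image[0]))  # 3 28 28
print(image[1][0])                       # first pixel of the second row: 28
```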
Classification accuracy was selected as the primary metric for model performance assessment and for benchmarking against other comparable published works. Two different models were trained to accomplish the letter/number gesture recognition task from static images, using two different approaches: a classic ML model and a deep neural network (Figure 3). The first model was selected among many candidate models obtained by applying different combinations of feature engineering techniques, ML algorithms, and ensemble methods using the automated ML (AutoML) service of Azure Machine Learning. Azure Machine Learning [26] is a cloud-based platform that provides tools for the automation and orchestration of all training, scoring, and comparison operations. AutoML tests hundreds of models in a few hours through parallel job execution, with no human interaction after the initial experiment and remote compute target cluster setup. The experiment generates many models that achieve 100% classification accuracy. Among them, the "logistic regression" based model has a smaller memory footprint at runtime.

Figure 1. Block diagram of a sign language recognition system (SLRS).

Figure 2. Letters of the American Sign Language (ASL) alphabet [11].

The second model was created with a minimal custom convolutional neural network (CNN) architecture (2D convolution, max pooling, flatten, dense layer, dropout, dense) commonly used for simple deep-learning image recognition tasks (Figure 4). The model was built and trained with the Keras library. Model hyperparameters, such as the number of neurons in the layers, the batch size, the number of training epochs, and the dropout percentage, were tuned using the HyperDrive service of Azure Machine Learning. The best-scoring model achieves a classification accuracy of 99.99%. The best models from the two training pipelines were deployed as web services for production usage.
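The minimal CNN architecture described above (2D convolution, max pooling, flatten, dense, dropout, dense) can be made concrete by walking the 28 × 28 input through each layer and counting parameters. The filter count, kernel size, dense width, and the use of 24 output classes are illustrative assumptions, not the tuned values found by the HyperDrive search.

```python
# Shape and parameter walkthrough for a minimal CNN:
# Conv2D -> MaxPooling -> Flatten -> Dense -> Dropout -> Dense.
# Assumed hyperparameters: 32 filters of 3x3, 128 dense units, 24 classes.

def conv2d(h, w, c_in, filters, k):
    params = filters * (k * k * c_in + 1)            # weights + bias per filter
    return (h - k + 1, w - k + 1, filters), params   # 'valid' padding

def maxpool(h, w, c, pool=2):
    return (h // pool, w // pool, c), 0              # pooling has no parameters

def dense(n_in, n_out):
    return n_out, n_in * n_out + n_out               # weights + biases

shape, p1 = conv2d(28, 28, 1, filters=32, k=3)       # -> (26, 26, 32)
shape, _ = maxpool(*shape)                           # -> (13, 13, 32)
flat = shape[0] * shape[1] * shape[2]                # 13 * 13 * 32 = 5408
n, p2 = dense(flat, 128)                             # dropout changes no shapes
n, p3 = dense(128, 24)                               # 24 letter classes
print(flat, p1 + p2 + p3)                            # 5408 695768
```

Most of the parameters sit in the first dense layer after flattening, which is also why this minimal architecture zips to a few tens of megabytes while a plain logistic regression model stays under a megabyte.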
The (zipped) size of the CNN model is about 17 MB, whereas the logistic regression model size is only 0.8 MB. Simple and lightweight models should be preferred when there is no performance penalty.

2.2. Vision-based classification accuracy

The 100% accuracy was confirmed after deployment with test cases from the American Sign Language MNIST dataset. Simple classic ML models could not recognize gestures in realistic images with variable backgrounds and light conditions. The CNN model scores over 90% accuracy on a subset of the "ASL Alphabet" image dataset [24], which includes more "realistic" light and background conditions. However, when deployed as a web service, the performance on the image stream from a live camera was not satisfactory for production usage in challenging conditions, such as partial line-of-sight obstruction, the presence of shadows in the image, and confusing backgrounds like those in the ASL Alphabet Test dataset [25].

3. Sensor-based gesture recognition with a deep CNN on a visual gesture representation

Our experiments with the vision-based approach confirm both the performance and the limitations described in other works. Given the results of our experiments and of other works, in this paper we propose an SLRS that combines a sensor-based approach in the acquisition stage with computer vision techniques in the gesture recognition stage (Figure 5).

3.1. Hand tracking with the magnetic positioning system (MPS)

The magnetic positioning system (MPS) described in [27] is immune to many problems that affect computer vision techniques, such as occlusion, lighting conditions, shadows, and skin color. The MPS is composed of transmitting nodes and receiving nodes. The transmitting nodes are mounted on the fingers and hand to be tracked (Figure 6), whereas the receiving nodes are placed at known positions on the sides of the operational volume. An advantage of sensor-based systems is that they are not sensitive to illumination conditions and the other factors affecting vision-based systems.
Furthermore, the MPS can also operate in the presence of obstructions caused by objects or body parts. Therefore, the proposed approach enables robust and reliable tracking of the hand and fingers. It is thus suitable for SLR and for the other applications of hand gesture recognition, such as human-machine interaction, virtual and augmented reality, robotic telemanipulation, and automation.

Figure 3. Workflow for the comparison of various machine learning models for static gesture recognition, using the Azure SDK, AutoML, and HyperDrive for operations automation.

Figure 4. Deep CNN model architecture.

Figure 5. Proposed SLRS with sensor-based data acquisition and vision-based gesture recognition.

3.2. Gesture recognition using skeleton reconstruction

Classic machine learning models can achieve 100% accuracy on static sign language recognition tasks on laboratory datasets like [24], and CNN deep learning models score high accuracy (over 90%) on realistic images with variable light. However, these high performances are not robust and cannot be easily replicated in real-world operating conditions. In our paper [11], we demonstrated that training the classification model on data from a tracking system gives substantial advantages in terms of robustness to environmental conditions and signer variability. The hand gesture is reconstructed using the technique illustrated in [28], with the improvements added in [11], which we called MagIK (magnetic and inverse kinematics). The method, with some empirical modifications introduced in the model to optimize the reconstruction of the gesture across different test subjects, allows reconstructing the movement of the hand with 24 degrees of freedom (DOF).
Positions and orientations of all the magnetic nodes estimated by the MPS are sent to a kinematic model of the hand, to obtain the position and flexion of each joint and the position and orientation of the whole hand with respect to the MPS reference frame. As the last step, MagIK produces a visual representation, such as the examples shown in Figure 7. We call this technique "skeleton reconstruction".

Figure 6. MPS transmitting coils mounted on a wearable glove.

Figure 7. Examples of ASL letters (Y and L) articulated while wearing the glove, and their respective reconstructions obtained through the kinematic model and the MagIK technique.

3.3. Efficient deep CNN training for sign language recognition

Many pre-trained deep learning models have proven adequate for image/video classification tasks. We chose the ResNet34 CNN because the ResNet (residual network) architecture achieves good results in image classification tasks and is relatively fast to train [29]. Figure 8 illustrates the training pipeline implemented with PyTorch and the fastai library [30]. A transfer learning approach allows fast training of the deep CNN (ResNet34) model. The optimal learning rate for training was estimated with the cyclical learning rates method [31], avoiding time-consuming multiple runs to perform hyperparameter sweeps. The rules of thumb for the selection of the learning rate value from [31] are: 1) one order of magnitude less than the rate at which the minimum loss was achieved; and 2) the last point where the loss was clearly decreasing. The loss estimation plot (Figure 9), produced by the algorithm implementation in the fastai library, suggested a learning rate in the range 10⁻²–10⁻³. Model fine-tuning was performed using the fastai API with a sequence of freeze, fit-one-cycle, unfreeze, and fit-one-cycle operations using the "discriminative learning rate" method. Training continued until the error rate, validation loss, and training loss converged to zero after four epochs (Figure 10).

Figure 8. Training pipeline for the ResNet34 CNN with transfer learning.

3.4. Gesture classification inference with MPS

The trained model, after the fine-tuning process, was deployed in an inference pipeline (Figure 11) that takes the output generated by the MPS control software and, for each acquired frame: 1) reconstructs the gesture using the MagIK kinematic model, 2) exports the visual representation as a bitmap image, 3) feeds the CNN model with the generated gesture image and gets the array of confidence values associated with each class in the training dataset, and 4) prints the label of the sign class with the highest confidence value.

4. Conclusions

Classic machine learning models can achieve 100% accuracy on static sign language recognition tasks only on laboratory datasets [22]. Deep CNN models can accomplish the task with over 90% accuracy also on more realistic images [24]. However, these high performances are not robust and cannot be replicated in real-world operating conditions. By combining sensor-based acquisition, visual reconstruction of the skeleton, and a deep CNN classification model, the proposed system achieves 100% inference accuracy on gestures performed by different people after a few epochs of training. We could not achieve 100% accuracy with classic machine learning in comparable experimental conditions. The sensor-based approach is immune to many problems that affect computer vision techniques, such as occlusion, lighting conditions, shadows, and skin color. Building a gesture recognizer on top of a tracking system, instead of classifying directly from the sensor stream, can help make the gesture recognition system less dependent on input devices. Skeleton tracking allows for good generalization: system performance is robust across different sign performers, and classification does not rely on specific hand characteristics.
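The final step of the inference pipeline described in section 3.4 (picking the sign label from the array of per-class confidence values) reduces to an argmax; a minimal sketch with hypothetical class names and confidence values:

```python
# Final stage of the inference pipeline: the CNN returns one confidence
# value per class; the predicted sign is the class with the highest one.
def predict_label(class_names, confidences):
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return class_names[best]

classes = ["A", "B", "L", "Y"]        # hypothetical subset of the alphabet
conf = [0.02, 0.05, 0.90, 0.03]       # hypothetical CNN output for one frame
print(predict_label(classes, conf))   # L
```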
The classification method implemented in this work can be applied to almost any sensor-based dataset: the only requirement is to provide a convenient visual representation of the input data, to be used both in training and in inference. After replacing MagIK with another method suitable for the specific application, the other stages of the training pipeline and inference pipeline do not need any change and can be directly reused for many other applications.

Figure 9. Loss estimation plot against learning rate values for optimal learning rate selection. The optimal value for training is in the range 10⁻²–10⁻³.

Figure 10. Loss and error rate values recorded during the training process.

Figure 11. Inference pipeline with MPS and skeleton reconstruction, and an example of execution from a Jupyter Notebook Python environment.

References

[1] h. cooper, b. holt, r. bowden, sign language recognition, in: visual analysis of humans; moeslund, t., hilton, a., krüger, v., sigal, l., eds.; springer 2011. doi: 10.1007/978-0-85729-997-0_27
[2] a. wadhawan, p. kumar, sign language recognition systems: a decade systematic literature review, arch. comput. methods eng. 28 (2019) pp. 785–813. doi: 10.1007/s11831-019-09384-2
[3] m. j. cheok, z. omar, m. h. jaward, a review of hand gesture and sign language recognition techniques, int. j. mach. learn. cyber 10 (2019) pp. 131–153. doi: 10.1007/s13042-017-0705-5
[4] r. elakkiya, machine learning based sign language recognition: a review and its research frontier, j. ambient. intell. hum. comput. 2020. doi: 10.1007/s12652-020-02396-y
[5] r. rastgoo, k. kiani, s. escalera, sign language recognition: a deep survey, expert syst. appl. 164 (2021). doi: 10.1016/j.eswa.2020.113794
[6] "iec standard 60050–300", international electrotechnical vocabulary (iev) part 300: electrical and electronic measurements and measuring instruments, international electrotechnical commission, jul. 2001.
[7] s. shirmohammadi, h. al osman, machine learning in measurement part 1: error contribution and terminology confusion, ieee instrumentation & measurement magazine, 24(2) (2021) pp. 84-92. doi: 10.1109/mim.2021.9400955
[8] encyclopedia britannica, sign language. online [accessed december 05 2021]: https://www.britannica.com/topic/sign-language
[9] world federation of the deaf. online [accessed december 05 2021]: http://wfdeaf.org/our-work
[10] fingerspelling. wikipedia. online [accessed december 05 2021]: https://en.wikipedia.org/wiki/fingerspelling
[11] m. rinalduzzi, a. de angelis, f. santoni, e. buchicchio, a. moschitta, p. carbone, p. bellitti, m. serpelloni, gesture recognition of sign language alphabet using a magnetic positioning system, appl. sci. 11 (2021), 5594. doi: 10.3390/app11125594
[12] j. dong, z. tang, q. zhao, gesture recognition in augmented reality assisted assembly training, j. phys. conf. ser. 1176(3) (2019), art. 032030. doi: 10.1088/1742-6596/1176/3/032030
[13] r. e. o. ascari schultz, l. silva, r. pereira, personalized interactive gesture recognition assistive technology, in proceedings of the 18th brazilian symposium on human factors in computing systems, vitória, brazil, 22–25 october 2019. doi: 10.1145/3357155.3358442
[14] s. s. kakkoth, s. gharge, real time hand gesture recognition and its applications in assistive technologies for disabled, in proceedings of the fourth international conference on computing communication control and automation (iccubea), pune, india, 16–18 august 2018. doi: 10.1109/iccubea.2018.8697363
[15] m. a. simão, o. gibaru, p.
neto, online recognition of incomplete gesture data to interface collaborative robots, ieee trans. ind. electron. 66 (2019) pp. 9372–9382. doi: 10.1109/tie.2019.2891449 [16] i. ding, c. chang, c. he, a kinect-based gesture command control method for human action imitations of humanoid robots. in proceedings of the 2014 international conference on fuzzy theory and its applications (ifuzzy2014), kaohsiung, taiwan, 26–28 november 2014; pp. 208–211. doi: 10.1109/ifuzzy.2014.7091261 [17] s. yang, s. lee, y. byun, gesture recognition for home automation using transfer learning, 2018 international conference on intelligent informatics and biomedical sciences (iciibms), bangkok, thailand, 21–24 oct. 2018, pp. 136–138. doi: 10.1109/iciibms.2018.8549921 [18] q. ye, l. yang, g. xue, hand-free gesture recognition for vehicle infotainment system control, 2018 ieee vehicular networking conference (vnc), taipei, taiwan, 5–7 december 2018; pp. 1–2. doi: 10.1109/vnc.2018.8628409 [19] z. u. a. akhtar, h. wang, wifi-based gesture recognition for vehicular infotainment system—an integrated approach, appl. sci. 9 (2019), art. 5268. doi: 10.3390/app9245268 [20] y. meng, j. li, h. zhu, x. liang, y. liu, n. ruan, revealing your mobile password via wifi signals: attacks and countermeasures, ieee trans. mob. comput. 19(2) (2019) pp. 432–449. doi: tmc.2019.2893338 [21] m. j. cheok, z. omar, m. h. jaward, a review of hand gesture and sign language recognition techniques, int. j. mach. learn. cyber. 10 (2019) pp. 131–153. doi: 10.1007/s13042-017-0705-5 [22] the american sign language mnist dataset. online [accessed december 05 2021] https://www.kaggle.com/datamunge/sign-language-mnist [23] lecun, y., & cortes, c. (2010). mnist handwritten digit database. at&t labs. online [accessed december 05 2021] http://yann.lecun.com/exdb/mnist [24] asl alphabet. 
online [accessed december 05 2021] https://www.kaggle.com/grassknoted/asl-alphabet [25] asl alphabet test, online [accessed december 05 2021] https://www.kaggle.com/danrasband/asl-alphabet-test [26] azure machine learning product overview. online [accessed december 05 2021] https://azure.microsoft.com/it-it/services/machinelearning/#product-overview [27] f. santoni, a. de angelis, a. moschitta, p. carbone, a multi-node magnetic positioning system with a distributed data acquisition architecture, sensors 20(21) (2020), art. 6210, pp. 1-23. doi: 10.3390/s20216210 [28] f. santoni, a. de angelis, a. moschitta, p. carbone, magik: a hand-tracking magnetic positioning system based on a kinematic model of the hand, ieee transactions on instrumentation and measurement 70 (2021), art. 9376979 doi: 10.1109/tim.2021.3065761 [29] k. he, x. zhang, s. ren, j. sun, deep residual learning for image recognition, 2016 ieee conference on computer vision and pattern recognition (cvpr), las vegas, nv, usa, 27-30 june 2016, pp. 770-778. doi: 10.1109/cvpr.2016.90 [30] j. howard, s. gugger, fastai: a layered api for deep learning, information 11(2) (2020), art. 108. doi: 10.3390/info110201081 [31] l. n. smith, cyclical learning rates for training neural networks. 
Online [Accessed 05 December 2021] https://arxiv.org/abs/1506.01186

Editorial to selected papers from the International Excellence PhD School 'I. Gorini'
ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 5
Pasquale Arpaia1, Umberto Cesaro1, Francesco Lamonaca2
1 University of Naples Federico II, Dept. of Information and Electrical Engineering, Via Claudio 21, Naples, 80125 (NA), Italy
2 University of Calabria, Dept. of Computer Science, Modelling, Electronic and System Science, Via P. Bucci 41C, Arcavacata di Rende, 87036 (CS), Italy

Section: Editorial
Citation: Pasquale Arpaia, Umberto Cesaro, Francesco Lamonaca, Editorial to selected papers from the International Excellence PhD School 'I. Gorini', ACTA IMEKO, vol. 10, no. 4, article 3, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-03
Received December 14, 2021; in final form December 14, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Umberto Cesaro, e-mail: ucesaro@unina.it

Dear Readers,
This ACTA IMEKO special issue collects the best works presented by young researchers attending the International Ph.D. School 'Italo Gorini 2021', held in Naples on 6-10 September 2021. The International Ph.D. School 'Italo Gorini 2021' is the doctoral school promoted by the Italian associations 'Electrical and Electronic Measurements Group' (GMEE) and 'Mechanical and Thermal Measurements Group' (GMMT). The school activity deals with a wide variety of issues related to measurement, and it is aimed at PhD students as well as young people from research and industry. The school addresses both methodological issues, in the science and technology related to measurements and instrumentation, and advanced problems of practical interest. Special attention is also paid to the impact of measurements on the scientific and engineering context, which is strongly influenced by the evolution of technology in different sectors. This year the 'Italo Gorini' school received the patronage of IMEKO, and we are honoured to promote IMEKO among brilliant young scientists who will be the future of measurement science.
The works presented in this special issue are the extended versions of those that won the 'Best Presentation', 'Best Scientific Contribution' and 'Best Application' awards during the school.

Simone Mari et al., in the paper entitled 'Measurements for non-intrusive load monitoring through machine learning approaches', propose several possible approaches for non-intrusive load monitoring systems operating in real time, analysing them from the measurement point of view. The investigated measurement and post-processing techniques are illustrated and the results discussed.

Emanuele Buchicchio et al., in the paper 'Gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system', introduce a system that combines sensor-based gesture acquisition and deep learning techniques for gesture recognition, achieving 100% classification accuracy. The social impact of the proposal is broad, since gesture recognition is a fundamental step towards efficient communication for the deaf through the automated translation of sign language.

Mattia Alessandro Ragolia et al., in the paper 'A virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation', propose a virtual platform for assessing the performance of electromagnetic tracking systems (EMTSs) for surgical navigation, showing in real time the effects of the various sources affecting the distance estimation accuracy. The implemented measurement platform provides a useful tool for supporting engineers during the design and prototyping of EMTSs. A particular effort was dedicated to the development of an efficient and robust algorithm to obtain an accurate estimate of the instrument position at distances beyond 0.5 m from the magnetic field generator. Indeed, the main goal of the paper is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team.
The paper by Leila Es Sebar et al., 'A metrological approach for multispectral photogrammetry', presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, which are typically used in the cultural heritage field. The reference object is a 3D-printed specimen with a nominal manufacturing uncertainty in the order of 0.01 mm. The object was realized as a dodecahedron, with a different pictorial preparation inserted in each face. The preparations include several pigments, binders and varnishes, so as to be representative of the materials and techniques historically used by artists.

Pasquale Arpaia, Umberto Cesaro, Guest Editors
Francesco Lamonaca, Editor-in-Chief

Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study
ACTA IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, 119-125
Federica Vurchio1, Giorgia Fiori1, Andrea Scorza1, Salvatore A. Sciuto1
1 Engineering Department, Roma Tre University, Rome, Italy

Section: Research Paper
Keywords: microgripper; MEMS; microactuators; displacement measurements; characterization
Citation: Federica Vurchio, Giorgia Fiori, Andrea Scorza, Salvatore Andrea Sciuto, Comparative evaluation of three image analysis methods for angular displacement measurement in a MEMS microgripper prototype: a preliminary study, ACTA IMEKO, vol. 10, no. 2, article 17, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-17
Section Editor: Ciro Spataro, University of Palermo, Italy
Received January 18, 2021; in final form April 22, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Federica Vurchio, e-mail: federica.vurchio@uniroma3.it

1. Introduction

MEMS (micro-electro-mechanical systems) devices represent a category of sensors and actuators widely used in the most varied fields of technology, from automotive to micro-assembly for photonics and RF applications, microphones, microfluidic devices, gyroscopes, chemical sensors for microfluidic systems, lab-on-chip systems and complex actuation systems [1]. One of the most promising fields of application is undoubtedly the biomedical one, including biology [2],[3] and microsurgery [4]-[6]. Microgrippers are a particular class of MEMS devices able to handle objects of micrometric dimensions, including cells and molecules. Nowadays there are few works concerning the characterization of devices such as microgrippers, even though the study of their metrological and performance characteristics would be of great help for the optimization of the prototypes and the improvement of their performance. In this study, a set of images has been acquired by means of a trinocular optical microscope and processed with three different methods implemented ad hoc in the MATLAB environment: the semi-automatic method (SAM), the SURF-based method (angular displacement measurement based on Speeded Up Robust Features, ADMSURF) and the FFT-based method (angular displacement measurement based on Fast Fourier Transform, ADMFFT).
A comparison among the above-mentioned methods has been made to estimate the angular displacement of the comb-drive of a MEMS microgripper prototype for biomedical applications. The semi-automatic method (SAM), already widely described by the authors in [7]-[9], is based on template matching and is able to evaluate both the gripper rotation and the comb-drive angular displacement of a microgripper. Its main limitations are the high computational cost and the operator dependence.

Abstract. The functional characterization of MEMS devices is relevant today, since it aims at verifying the behavior of these devices as well as improving their design. In this regard, this study focused on the functional characterization of a MEMS microgripper prototype suitable for biomedical applications: the measurement of the angular displacement of the microgripper comb-drive is carried out by means of two novel automatic procedures based on image analysis, the SURF-based method (angular displacement measurement based on Speeded Up Robust Features, ADMSURF) and the FFT-based method (angular displacement measurement based on Fast Fourier Transform, ADMFFT), respectively. Moreover, the measurement results are compared with a semi-automatic method (SAM) to evaluate which of them is the most suitable for the functional characterization of the device. The curve fitting of the outcomes from the SAM and the ADMSURF showed a quadratic trend in agreement with the analytical model. Moreover, the ADMSURF measurements below 1° are affected by an uncertainty of about 0.08° for voltages less than 14 V, confirming its suitability for microgripper characterization. It was also found that the ADMFFT is more suitable for the measurement of rotations greater than 1° (up to 30°), with a measurement uncertainty of 0.02° at a 95% confidence level.
The above issues have been investigated further in this work, starting from the previous study presented in [10]. In Section 2 the materials and methods are described, with particular reference to the experimental setup and the measurement protocol used for the digital image acquisition. Owing to the limitations encountered in the SAM previously proposed [7]-[9], in Subsection 2.1 the authors propose a new version of the SAM, in which novel tests have been implemented to quantify the uncertainty contribution introduced by the operator in the angular displacement measurement of a microgripper comb-drive prototype; in Subsections 2.2 and 2.3 the authors describe two novel automatic methods and their application to the measurement of the comb-drive angular displacement: the ADMSURF, based on the SURF algorithm [11], and the ADMFFT, which is an application of the 2D fast Fourier transform (FFT) to digital images [12]-[16]. In Section 3 the procedure for estimating the uncertainty sources of the three measurement methods is described, and the outcomes obtained with the three methods are compared and discussed in order to identify which of them is the most suitable for the characterization of the MEMS device. Finally, in Sections 4 and 5 the results of the study are illustrated and the conclusions presented.

2. Materials and methods

In this section the main components of the experimental setup are described, together with a detailed overview of the three implemented methods; in particular, the SURF-based and the FFT-based methods are proposed as alternatives to the semi-automatic one for the measurement of the angular displacement of the comb-drive.
The device under examination is a microgripper prototype (Figure 1), which is part of a project concerning the metrological and performance characterization of a new class of MEMS devices for biomedical applications [17]-[21]. These devices mainly consist of capacitive electrostatic actuators (i.e. the comb-drives shown in Figure 2) and particular hinges called conjugate surfaces flexural hinges (CSFH) [22], which allow the mechanical movement of the tips located at the end of the device. The images have been acquired through an NB50TS trinocular light microscope equipped with a 6 MP camera. The device has been positioned on an instrumented stage with micrometric screws and powered through an HP E3631A power supply. The latter is electrically connected to the device by means of a coaxial cable and tungsten needles put in contact with the electrical connections of the device. The voltage has been brought to the electrical connections by means of two micropositioners that allow the tungsten needle movement along the three axes x, y and z. A set of 30 images has been collected for each applied voltage, with a 2 V step (i.e. 0 V, 2 V, 4 V, ..., 24 V).

2.1. Semi-automatic based method (SAM)

The first method used in this study has been the semi-automatic one, which for clarity we will call SAMA; it is widely described in [7],[8] and used in [9]. As illustrated in [7]-[9], the method introduces a measurement uncertainty contribution of 0.02° at a 95% confidence level, evaluated by means of a Monte Carlo simulation. Moreover, the software requires high computational costs, and the uncertainty analysis of the preliminary results obtained with the SAMA was previously carried out only partially [7]-[10] with regard to the uncertainty component introduced by the operator's subjectivity; for this reason, further tests were carried out in this study to better evaluate this contribution.
The test on the SAMA consists, in its first part, of the selection by the operator of four points and of a region of interest (ROI) on the image. To evaluate the dispersion in the selection of these points, in the new version of the semi-automatic method, called SAMB, ten different observers were asked to identify both the four points and the ROI in an image of the comb-drive, 30 times each. In particular, for the four points the x and y coordinates on the image were considered, and for the ROI the coordinates of the top left vertex (x and y) and its length and width (each expressed in pixels), as can be seen in Figure 3.

2.2. Speeded Up Robust Features based method (SURF)

An automatic method based on Speeded Up Robust Features (SURF) has been implemented to measure the angular displacement of the comb-drive (ADMSURF), as already described in [23]. SURF is an interest point detector and descriptor used in many applications, including image registration, object recognition and 3D reconstruction [24]. The main advantage of this method is the reduction of the computational cost: as illustrated in [11], a significant reduction in image processing time has been observed thanks to the reduced complexity of the descriptor, without altering the performance in terms of repeatability, noise robustness, detection errors, and geometric and photometric deformation.

Figure 1. Microgripper prototype.
Figure 2. The comb-drive.
Figure 3. Four points (red crosses) and ROI (yellow square) selection on the comb-drive image.
The in-house method consists of three main steps: 1) finding interest points on the image; in particular, a ROI0V is selected on the first image IMG0V, corresponding to the 0 V power supply, and it is important that this area is chosen by the operator in a region of the image where the movement of the comb-drive is visible; the coordinates of the selected ROI are saved and used to select the ROIs (i.e. ROI2V, ROI4V, ROI6V, ..., ROI24V) of all the subsequent images IMG2V, IMG4V, IMG6V, ..., IMG24V; after that, the algorithm finds the interest points on each selected ROI; 2) building a descriptor for the representation of the interest points, shown in this case as red circles for the first image and green crosses for all the others (Figure 4); 3) matching the descriptors found on the images; by means of a geometric transform, the object position on the images is obtained, and from it the relative rotation corresponding to each applied voltage.

2.3. Fast Fourier transform based method (FFT)

This method is based on the application of the Fourier transform to digital images. As shown in Figure 5, the comb-drives of the microgripper have a particular periodic pattern; if an image consists of an array of uniformly spaced parallel straight lines, its Fourier transform is a string of impulses (see Figure 6), with a separation equal to the reciprocal of the line spacing and in a direction perpendicular to the lines [12],[13]. This property has been used to estimate the angular aperture of the comb-drive: for each angular opening, the corresponding pattern of the comb-drive takes a different direction and, consequently, the position of the impulses changes direction accordingly.
Therefore, for each angular opening and for each image there will be a series of points along a given direction; a least squares approximation has then been used to find the linear polynomial that best approximates these points, from which the angular coefficient of the straight line, and therefore the opening angle of the comb-drive, is obtained. As previously noted in [10], the major limitation of this procedure is the inability of the ADMFFT to measure angular displacements smaller than a tenth of a degree, typical of MEMS devices for biomedical applications such as microgrippers. However, some microgrippers actuated by rotary comb-drives, such as those studied in this work, are driven with voltages much higher than 30 V [25],[26]; it was therefore considered relevant to establish whether this method could be used for the characterization of other MEMS devices. In order to evaluate the limit of applicability of the ADMFFT, we proceeded as follows: once an image presenting a pattern like the one shown in Figure 5 was identified, it was rotated by the amounts reported in Table 1, where Set1 and Set2 correspond to two sets of rotations, the first consisting of rotations smaller than one degree and the second of rotations larger than one degree; in particular, the rotation values of the first set correspond to the measurements obtained from the images acquired during the experimental campaign using the SAM.

3. Uncertainty analysis

In order to compare the three image analysis methods, it is necessary to estimate the main uncertainty sources introduced by the measurement systems. It is important to underline that the experimental setup is the same for all of them; only the processing methods differ. Following the procedure adopted in [7], Type A and Type B uncertainties will be combined [27] as follows:

δT = √(δA² + δB²) ,   (1)

Figure 4. Interest point descriptors of the first image (red circles) and of the other images (green crosses).
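The spectral procedure of Section 2.3 (impulses perpendicular to the line pattern, with a least squares fit through them) can be sketched in a few lines. The following is an illustrative NumPy stand-in, not the authors' MATLAB implementation; the synthetic stripe image, its size and its period are invented for the example.

```python
import numpy as np

def pattern_angle_deg(theta_deg, size=256, period=8, n_peaks=6):
    """Recover the orientation of a parallel-stripe pattern from its 2D FFT.

    The FFT of a striped image is a string of impulses along the direction
    perpendicular to the stripes; a least squares line through the strongest
    peaks gives the slope, hence the angle (valid away from 90 degrees).
    """
    t = np.deg2rad(theta_deg)
    y, x = np.mgrid[0:size, 0:size]
    # Synthetic stripes: intensity varies along the direction (cos t, sin t).
    img = np.cos(2 * np.pi * (x * np.cos(t) + y * np.sin(t)) / period)

    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec[size // 2, size // 2] = 0.0          # suppress the DC component

    # Coordinates of the strongest spectral peaks, centred on the origin.
    flat = np.argsort(spec, axis=None)[-n_peaks:]
    ky, kx = np.unravel_index(flat, spec.shape)
    ky = ky.astype(float) - size // 2
    kx = kx.astype(float) - size // 2

    slope = np.polyfit(kx, ky, 1)[0]          # line through the impulse string
    return np.degrees(np.arctan(slope))
```

For a 20° pattern the estimate lands within the frequency-bin quantization of the true angle; in such a sketch the bin spacing, rather than the fit, limits the resolution, which is consistent with the ADMFFT's difficulty below a tenth of a degree.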
Figure 5. Comb-drive pattern.
Figure 6. Example of the Fourier transform applied to properly filtered images, constituted by a string of impulses.

where the Type A uncertainty, δA, has been calculated directly from the standard deviation of the experimental data, while the Type B uncertainty, δB, has been obtained considering the uncertainties due to the power supply (evaluated from the datasheet), the optical system [7]-[9] and the angle measurement, whose contribution has been assessed by means of a Monte Carlo simulation [28]-[29] in order to estimate the uncertainty related to each of the three implemented methods. Considering the SAM, in order to simulate the uncertainty of the operator's point selection and thus evaluate the algorithm uncertainty, a Monte Carlo simulation with 10⁴ iterations has been performed. Table 2 reports the variables x, y and ROI with their assigned distributions and standard deviations, used to estimate the uncertainty introduced by the method. This contribution has been evaluated for each angular displacement of the comb-drive (i.e. δα0-2V, δα0-4V, δα0-6V, ..., δα0-24V) and combined with the Type A uncertainty following equation (1). On the other hand, to evaluate the uncertainty introduced by the FFT-based method in the measurement of the angular displacement of the comb-drive, the image in Figure 5 has been subjected to different rotations. The systematic uncertainty contribution considered in this procedure has been evaluated by building a particular 4K (3840 px × 2160 px) image (Figure 7), rotated by the same quantities reported in Table 1. This contribution is mainly due to the uncertainty with which the software implements the rotation of the image, and therefore to the error that it makes in measuring the angle α.
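The Monte Carlo treatment of the operator's point selection can be illustrated with a minimal Python sketch. It is a hypothetical stand-in for the authors' MATLAB simulation: the nominal point coordinates are invented, only two of the four points are used, and the Gaussian standard deviations are of the order of those in Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                      # Monte Carlo iterations, as in the paper

# Hypothetical nominal pixel coordinates of two operator-selected points;
# the real values depend on the acquired comb-drive image.
p1 = np.array([120.0, 340.0])
p2 = np.array([980.0, 310.0])
sx, sy = 8.0, 6.0               # per-axis Gaussian standard deviations in px

# Perturb both points independently at each iteration.
d1 = np.column_stack([rng.normal(0, sx, N), rng.normal(0, sy, N)])
d2 = np.column_stack([rng.normal(0, sx, N), rng.normal(0, sy, N)])
q1, q2 = p1 + d1, p2 + d2

# Angle of the line through the two points, in degrees, per iteration.
ang = np.degrees(np.arctan2(q2[:, 1] - q1[:, 1], q2[:, 0] - q1[:, 0]))

# 95 % interval half-width from the 2.5th and 97.5th percentiles,
# mirroring how the SAM contribution is retrieved in the paper.
lo, hi = np.percentile(ang, [2.5, 97.5])
half_width = (hi - lo) / 2
```

With these invented coordinates the half-width comes out around a degree; the paper's 0.8° figure comes from its own image geometry and the full four-point selection, which this sketch does not reproduce.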
Considering that at angles up to 15° the sine differs by only about 1% and the tangent by about 2% from the angle measured in radians [30], the following approximation can be used:

tan α ≅ α = a/b ,   (2)

where a and b are the measured lengths of the segments reported in Figure 7; therefore, for angles smaller than 15°, the relative uncertainty δα/α of the angle measurement has been evaluated as

δα/α = √((δa/a)² + (δb/b)²) ,   (3)

where δa and δb are the measurement uncertainties of segments a and b, respectively, taken as ±1 px. On the other hand, if α is greater than 15°, it is determined as

α = arctan(a/b) ,   (4)

and its uncertainty δα can be evaluated as

δα = d[arctan(c)]/dc · δc = δc/(1 + c²) ,   (5)

where c is the ratio between the segments a and b, and δc is the corresponding uncertainty. Once this uncertainty contribution has been evaluated for each angular displacement, it is combined according to equation (1). Once the uncertainties have been evaluated, a comparison among the three different sets of results is made, following the procedure adopted in [8] and reported in [31]. In practice, the different methods measure the angular displacement of the comb-drive without significant differences if the following condition is verified:

|ᾱ₁ − ᾱ₂| ≤ (δT1 + δT2) ,   (6)

where ᾱ₁ and ᾱ₂ are the mean values of the measurement results, while δT1 and δT2 are the total uncertainty estimates. In particular, if the difference |ᾱ₁ − ᾱ₂| has the same order of magnitude as, or is smaller than, the sum (δT1 + δT2), the measurements can be considered consistent within the interval of the experimental uncertainties.

4. Results and discussion

In this section the outcomes from the SAM, the ADMSURF and the ADMFFT are reported and commented.
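Equations (2)-(6) translate directly into code. The following sketch uses illustrative segment lengths, not the paper's data; the ±1 px segment uncertainties are propagated to the angle through the ratio c = a/b, and the compatibility test of equation (6) is included.

```python
import math

def angle_uncertainty(a, b, da=1.0, db=1.0):
    """Angle (deg) from two segments in px, and its uncertainty, eqs. (2)-(5)."""
    alpha = math.atan2(a, b)                  # eq. (4); ~= a/b for small angles, eq. (2)
    c = a / b
    # Relative uncertainty of the quotient c, cf. eq. (3).
    dc = abs(c) * math.hypot(da / a, db / b)
    # d[arctan(c)]/dc * dc, eq. (5); the derivative evaluates to 1/(1 + c^2).
    dalpha = dc / (1.0 + c * c)
    return math.degrees(alpha), math.degrees(dalpha)

def compatible(m1, u1, m2, u2):
    """Consistency check of eq. (6): |mean1 - mean2| <= uT1 + uT2."""
    return abs(m1 - m2) <= u1 + u2

# Hypothetical segments giving an angle of about 20 degrees.
alpha, dalpha = angle_uncertainty(700.0, 1923.0)
```

For a and b of a couple of thousand pixels (a 4K image, as in the paper's test), ±1 px propagates to a few hundredths of a degree, of the same order as the 0.02° quoted in the text.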
The graphs in Figure 8 and Figure 9 show the results for the comb-drive angular displacement, expressed as mean values, corresponding to the SAM, the ADMSURF and the ADMFFT, respectively. Table 4 shows the measurement results expressed as the mean values and the corresponding measurement uncertainties at a 95% confidence level. In particular, the SAM introduces a measurement uncertainty contribution of 0.8° at a 95% confidence level, retrieved from the 2.5th and 97.5th percentiles of the Monte Carlo distribution. The analysis of the data showed that both the SAM and the ADMSURF follow a quadratic trend, in good agreement with the results obtained through the analysis of the analytical model [32]. As reported in [31], if the difference |ᾱ₁ − ᾱ₂| has the same order of magnitude as, or is smaller than, the sum (δT1 + δT2), the measurements can be considered consistent within the interval of the experimental uncertainties.

From the data reported in Table 4, the differences between the mean values are smaller than the sum of the corresponding total uncertainties; therefore the measurements can be considered compatible, confirming that the ADMSURF is suitable for the measurement of the angular displacement of the comb-drive of the MEMS microgripper under test. Moreover, it is important to underline that the computational cost is considerably reduced by using the ADMSURF: to process 390 images, as in our case, the SAM requires about 2 hours, whereas the ADMSURF requires only about 4-5 minutes. As regards the data obtained with the ADMFFT (Figure 9), the results do not follow a trend that can be closely related to the angular displacement of the comb-drive. From a first analysis it can be deduced that the ADMFFT cannot be considered suitable for the measurement of MEMS grippers whose angular displacement is below 1°, as it cannot appreciate displacements of around a tenth of a degree. However, since some microgripper prototypes built with rotary comb-drives are driven with voltages higher than 30 V and can be moved through angular displacements above 1°, the ADMFFT was also evaluated for an object that rigidly rotates around its axis by amounts greater than 1°. Table 3 shows the measurement of rotation MOR1, calculated by applying the ADMFFT to an image (see Figure 5) rotated by the quantities of Set1, and the measurement of rotation MOR2, calculated by applying the ADMFFT to the same image rotated by the quantities of Set2, together with the angle measurement uncertainty δα, estimated from (3) for α < 15° and from (5) for α > 15°. The test results confirm that angular displacements up to 30° can be measured with an angle measurement uncertainty δα lower than 0.02°, as can be seen in Table 3.

Table 1. Rotation values.
Set1 in °: 0.007  0.032  0.070  0.120  0.194  0.277  0.379  0.497  0.631  0.777  0.939  1.118
Set2 in °: 1  2  3  4  5  7  10  13  16  20  25  30

Table 2. Variable settings in the MCS to estimate the uncertainty introduced by the operator's subjectivity.
Parameter          Distribution   Standard deviation in px
P1 coordinate x    Gaussian       8
P2 coordinate x    Gaussian       10
P3 coordinate x    Gaussian       8
P4 coordinate x    Gaussian       9
P1 coordinate y    Gaussian       6
P2 coordinate y    Gaussian       7
P3 coordinate y    Gaussian       6
P4 coordinate y    Gaussian       7
ROI coordinate x   Gaussian       15
ROI coordinate y   Gaussian       14
ROI width          Gaussian       20
ROI height         Gaussian       18

Figure 7. 4K image, rotated by 20°, for the estimation of the angular rotation uncertainty in the FFT-based method.
Figure 8. Relationship between angular displacement and applied voltage for the SAM (green dotted line) and the SURF method (red dotted line).
Figure 9. Relationship between angular displacement and applied voltage for the FFT-based method.

Table 3. Measurement of rotation MOR1 (ADMFFT applied to the image of Figure 5 rotated by Set1) and MOR2 (same image rotated by Set2), with the angle measurement uncertainty δα.
Set1 in °   MOR1 in °   δα in °     Set2 in °   MOR2 in °   δα in °
0.007       0           0.03        1           1.193       0.015
0.032       0           0.015       2           2.767       0.015
0.070       0           0.016       3           3.918       0.015
0.120       0           0.015       4           4.029       0.015
0.194       0           0.016       5           4.963       0.015
0.277       0           0.015       7           5.078       0.015
0.379       0           0.015       10          8.857       0.015
0.497       0           0.015       13          8.127       0.016
0.631       0.735       0.015       16          14.697      0.015
0.777       -0.262      0.015       20          16.571      0.015
0.939       1.193       0.015       25          19.759      0.015
1.118       1.193       0.015       30          29.237      0.015

Figure 10. Measurement of rotations less than 1° (above, R² = 0.5352) and of rotations between 1° and 30° (below, R² = 0.9661), applying the FFT-based method.

The different behavior of the ADMFFT depending on the angular range can be deduced from the results of the two rotation sets (Table 1) in Figure 10: for rotations below 1°, the least squares regression line has shown R² = 0.54, while R² = 0.97 for angles between 1° and 30°.
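The quoted R² values can be recomputed mechanically from any (applied rotation, measured rotation) pairs. As a sketch, the Set2 readings of Table 3 fitted with a least squares line via NumPy; the result should land near, though not necessarily exactly on, the quoted 0.9661, depending on rounding of the tabulated values.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least squares line through (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Set2 rotations and the corresponding ADMFFT readings MOR2 from Table 3.
set2 = np.array([1, 2, 3, 4, 5, 7, 10, 13, 16, 20, 25, 30], dtype=float)
mor2 = np.array([1.193, 2.767, 3.918, 4.029, 4.963, 5.078,
                 8.857, 8.127, 14.697, 16.571, 19.759, 29.237])
r2 = r_squared(set2, mor2)
```

The same function applied to the Set1 column, where most readings are zero, yields the much lower R² reported for sub-degree rotations.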
in conclusion, it is possible to confirm that the admfft is not suitable for the measurement of rotations below 1°, but for rotations greater than 1° it shows an almost linear behavior.

5. conclusions

this preliminary study has the purpose of comparing the measurements performed by different methods for the angular displacement of a comb-drive of a mems gripper prototype for biomedical applications. in particular, three in-house methods have been implemented in the matlab environment: the sam, the admsurf and the admfft. considering the sam, the contribution of uncertainty related to the subjectivity of the operator has been estimated and was found to be 0.8° at a 95% confidence level, as previously indicated. from the experimental results, it has been found that the sam and the admsurf are suitable to measure the small angular displacement of the comb-drive of the microgripper, showing quadratic curves consistent with the results obtained with the analytic model. conversely, the results retrieved by means of the admfft show no good correlation between small angular displacement and applied voltage that describes the real behavior of the device, and the data are consistent neither with the data obtained through the analytical method nor with the two abovementioned methods. nevertheless, this method proved suitable for measuring rotations from 1° to 30°, with a good correlation between the admfft outcomes and the rotations applied by the operator and an uncertainty of about 0.02°. a comparison between the sam and the admsurf has been proposed: the measurements can be considered compatible, confirming that the admsurf is suitable for the measurement of the angular displacement of the comb-drive of the mems microgripper under test.
in particular, the admsurf measurements of the comb-drive angular displacement are affected by an uncertainty lower than 8% for voltages less than 14 v, which is also smaller than that of the sam. in conclusion, it can be confirmed that the admsurf is the most suitable of the three proposed methods for the characterization of the angular displacement of mems devices such as microgrippers, both for the results obtained and for the significant reduction of the computational costs.

references

[1] s. bhansali, a. vasudev (eds.), mems for biomedical applications, elsevier, 2012, isbn 978-0-85709-627-2.
[2] d. panescu, mems in medicine and biology, ieee eng. med. biol. mag. 25 (2006), pp. 19-28. doi: 10.1109/memb.2006.1705742
[3] k. keekyoung, x. liu, y. zhang, y. sun, nanonewton force-controlled manipulation of biological cells using a monolithic mems microgripper with two-axis force feedback, j. micromech. microeng. 18 (2008). doi: 10.1088/0960-1317/18/5/055013
[4] f. vurchio, p. ursi, a. buzzin, a. veroli, a. scorza, m. verotti, s. a. sciuto, n. p. belfiore, grasping and releasing agarose micro beads in water drops, micromachines 10 (2019). doi: 10.3390/mi10070436
[5] a. gosline, n. vasilyev, e. butler, c. folk, a. cohen, r. chen, n. lang, p. del nido, p. dupont, percutaneous intracardiac beating-heart surgery using metal mems tissue approximation tools, int. j. rob. res. 31 (2012), pp. 1081-1093. doi: 10.1177/0278364912443718
[6] d. benfield, s. yue, e. lou, w. moussa, design and calibration of a six-axis mems sensor array for use in scoliosis correction surgery, j. micromech. microeng. 24 (2014). doi: 10.1088/0960-1317/24/8/085008
[7] f. orsini, f. vurchio, a. scorza, r. crescenzi, s. a. sciuto, an image analysis approach to microgrippers displacement measurement and testing, actuators 7 (2018). doi: 10.3390/act7040064
[8] f. vurchio, p. ursi, f. orsini, a. scorza, r. crescenzi, s. a. sciuto, n. p.
belfiore, toward operations in a surgical scenario: characterization of a microgripper via light microscopy approach, appl. sci. 9 (2019). doi: 10.3390/app9091901
[9] f. vurchio, f. orsini, a. scorza, s. a. sciuto, functional characterization of mems microgripper prototype for biomedical application: preliminary results, proc. of 2019 ieee international symposium on medical measurements and applications (memea), istanbul, turkey, 26 - 28 june 2019. doi: 10.1109/memea.2019.8802178
[10] f. vurchio, g. fiori, a. scorza, s. a. sciuto, a comparison among three different image analysis methods for the displacement measurement in a novel mems device, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14 - 16 september 2020. online [accessed 18 january 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-61.pdf

table 4. comb-drive angular displacement obtained through the sam, surf and fft methods.
applied voltage in v | sam in ° | total uncertainty δt,sam in ° | surf in ° | total uncertainty δt,surf in ° | fft in ° | total uncertainty δt,fft in °
2 | 0.0 | 0.8 | 0.01 | 0.07 | 0.03 | 0.29
4 | 0.0 | 0.8 | 0.04 | 0.08 | 0.21 | 0.23
6 | 0.0 | 0.8 | 0.08 | 0.07 | 0.31 | 0.24
8 | 0.1 | 0.8 | 0.14 | 0.07 | 0.22 | 0.28
10 | 0.2 | 0.8 | 0.23 | 0.09 | 0.28 | 0.28
12 | 0.3 | 0.8 | 0.33 | 0.08 | 0.23 | 0.31
14 | 0.4 | 0.8 | 0.45 | 0.08 | 0.50 | 0.25
16 | 0.5 | 0.8 | 0.56 | 0.19 | 0.68 | 0.23
18 | 0.6 | 0.8 | 0.95 | 0.12 | 0.57 | 0.18
20 | 0.8 | 0.8 | 0.89 | 0.10 | 0.10 | 0.21
22 | 0.9 | 0.8 | 1.08 | 0.10 | 0.05 | 0.21
24 | 1.1 | 0.8 | 1.27 | 0.16 | 0.04 | 0.18

[11] h. bay, a. ess, t. tuytelaars, l. van gool, speeded up robust features (surf), computer vision and image understanding 110 (2008), pp. 346-359. doi: 10.1016/j.cviu.2007.09.014
[12] r. bracewell, fourier analysis and imaging, springer science+business media, llc, 2003.
[13] k. j. r. liu, pattern recognition and image processing, marcel dekker, inc.
[14] g. dougherty, digital image processing for medical applications, cambridge university press, 2009.
[15] r. c. gonzalez, digital image processing using matlab, pearson prentice-hall, 2004.
[16] w. burger, m. j. burge, principles of digital image processing, springer.
[17] a. bagolini, s. ronchin, p. bellutti, m. chistè, m. verrotti, n. p. belfiore, fabrication of novel mems microgrippers by deep reactive ion etching with metal hard mask, journal of microelectromechanical systems 26 (2017), pp. 926-934. doi: 10.1109/jmems.2017.2696033
[18] c. potrich, l. lunelli, a. bagolini, p. bellutti, c.
pederzolli, m. verotti, n. p. belfiore, innovative silicon microgrippers for biomedical applications: design, mechanical simulation and evaluation of protein fouling, actuators 7 (2018). doi: 10.3390/act7020012
[19] m. verotti, a. dochshanov, n. p. belfiore, a comprehensive survey on microgrippers design: mechanical structure, j. mech. des. 139 (2017). doi: 10.1115/1.4036351
[20] r. cecchi, m. verotti, r. capata, a. dochshanov, g. b. broggiato, r. crescenzi, m. balucani, s. natali, g. razzano, f. lucchese, a. bagolini, p. bellutti, e. sciubba, n. p. belfiore, development of micro-grippers for tissue and cell manipulation with direct morphological comparison, micromachines 6 (2015), pp. 1710-1728. doi: 10.3390/mi6111451
[21] p. di giamberardino, a. bagolini, p. bellutti, i. j. rudas, m. verotti, f. botta, n. p. belfiore, new mems tweezers for the viscoelastic characterization of soft materials at the microscale, micromachines 9 (2018). doi: 10.3390/mi9010015
[22] m. verotti, a. dochshanov, n. p. belfiore, compliance synthesis of csfh mems-based microgrippers, j. mech. des. 139 (2017). doi: 10.1115/1.4035053
[23] f. vurchio, f. orsini, a. scorza, f. fuiano, s. a. sciuto, a preliminary study on a novel automatic method for angular displacement measurements in microgripper for biomedical applications, proc. of 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june - 1 july 2020. doi: 10.1109/memea49120.2020.9137249
[24] m. schaeferling, g. kiefer, object recognition on a chip: a complete surf-based system on a single fpga, proc. of 2011 international conference on reconfigurable computing and fpgas, cancun, mexico, 30 november - 2 december 2011. doi: 10.1109/reconfig.2011.65
[25] q. xu, design, fabrication, and testing of an mems microgripper with dual-axis force sensor, ieee sensors journal 15 (2015), pp. 6017-6026. doi: 10.1109/jsen.2015.2453013
[26] m. verotti, a. bagolini, p. bellutti, n. p.
belfiore, design and validation of a single-soi-wafer 4-dof crawling microgripper, micromachines (basel) 10 (2019). doi: 10.3390/mi10060376
[27] iso/iec guide 98-3:2008, uncertainty of measurement - part 3: guide to the expression of uncertainty in measurement (gum:1995).
[28] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, lowest detectable signal in medical pw doppler quality control by means of a commercial flow phantom: a case study, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14 - 16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-63.pdf
[29] g. fiori, f. fuiano, f. vurchio, a. scorza, m. schmid, s. conforto, s. a. sciuto, a preliminary study on a novel method for depth of penetration measurement in ultrasound quality assessment, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14 - 16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-62.pdf
[30] c. h. holbrow, j. n. lloyd, j. c. amato, e. galvez, m. e. parks, modern introductory physics, second edition, springer, 2010.
[31] j. r. taylor, an introduction to error analysis: the study of uncertainties in physical measurements, zanichelli, bologna, italy, 1986.
[32] r. crescenzi, m. balucani, n. p. belfiore, operational characterization of csfh mems technology based hinges, j. micromech. microeng. 28 (2018).
doi: 10.1088/1361-6439/aaaf31

calculating vessel capacity from the neolithic sites of lugo di grezzana (vr) and riparo gaban (tn) through 3d graphics software

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 - 10
acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1

andrea tavella1, marika ciela1, paolo chistè1, annaluisa pedrotti1
1 labaaf, laboratorio bagolini archeologia archeometria fotografia, university of trento, via tommaso gar 14, 38122 trento, italy

section: research paper
keywords: vessel capacity; ceramics; 3d models; blender; neolithic
citation: andrea tavella, marika ciela, paolo chistè, annaluisa pedrotti, calculating vessel capacity from the neolithic sites of lugo di grezzana (vr) and riparo gaban (tn) through 3d graphics software, acta imeko, vol. 11, no.
1, article 7, march 2022, identifier: imeko-acta-11 (2022)-01-07

section editor: fabio santaniello, university of trento, italy
received march 7, 2021; in final form february 23, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: andrea tavella, e-mail: andrea.tavella@live.com

1. the ceramic record

the study takes into account ceramic finds from the neolithic sites of lugo di grezzana and riparo gaban. both sites were frequented during the early neolithic and play a key role in the understanding of the neolithization process in northern italy (figure 1).

1.1 lugo di grezzana (vr)

the area to the south of the small town of lugo di grezzana, called locality campagne, is situated on a river terrace (300 m above sea level) along valpantena, a short prealpine valley located in the lessini mountains [1]. the site was discovered in 1990 by fernando zanini and giorgio chelidonio. since the early nineties, the area has been the object of systematic research undertaken by the archaeological heritage office of the veneto region, in collaboration with the university of trento (b. bagolini laboratory - labaaf) from 1996 up until 2005 [2]. the first evidence is dated to the middle of the 6th millennium bc cal, while an intense occupation of the area is dated between 5300 - 5050 bc cal [3].

figure 1. the localization of lugo di grezzana (verona, italy) and riparo gaban (trento, italy).

abstract

this paper reports new data about the estimation of the volumetric capacity of ceramic vessels from the neolithic sites of lugo di grezzana (verona, italy) and riparo gaban (trento, italy). the methodological protocol is based on blender®, a free and open-source 3d computer graphics software.
the estimate of the volumetric capacity has been derived from the graphic elaboration of the archaeological drawings of the artifacts. through the calculation of the volume, it has been possible to obtain an estimation of the total capacity of the vessels, proposing two types of content. subsequently, the volumetric data were related to the diameter/height ratio of each ceramic vessel, in order to define a range of variability in each typological class. data from both sites were later compared, highlighting for most of them a specific distribution that could be a consequence of different functional uses and/or cultural models. this paper concludes the preliminary results presented at the 2020 imeko tc4 international conference on metrology and archaeology for cultural heritage.

based on material culture [2]-[5], the site is mainly attributed to fiorano, which is present in northern italy during the early neolithic and shows a typical homogeneity in vessel typology. the jug is possibly one of the most distinctive shapes of the fiorano culture (figure 2) and is often imported into contemporary cultures. nevertheless, numerous elements make it possible to underline influences from other contexts such as the vhò group, the adriatic impressed ware and the catignano cultures [2], mainly due to the supply of lessinian flint. the latter, thanks to its high quality, becomes the object of exchange par excellence and a sort of common denominator between the various groups of the early neolithic in northern italy, between the middle of the 6th and the beginning of the 5th millennium bc [3], [6]. around 5000 bc cal, the occupation of the settlement seems to show a temporary interruption with the occurrence of colluvial episodes, while the last neolithic occupation, scantily represented, is attested between 4900 and 4800 bc cal.
during this period, the early geometric-linear style of the square mouthed pottery culture is already widespread in northern italy, and it is attested within the site contemporaneously with later aspects of the fiorano culture [3].

1.2 riparo gaban (tn)

the site of riparo gaban is located at piazzina di martignano, in a small hanging valley that runs parallel to the left side of the adige valley (270 m above sea level), a few kilometres north-west of trento [7]. the site, identified as a rock-shelter, was discovered in 1970 by a group of local amateurs as part of the palaeoethnological activities of the museo tridentino di scienze naturali. the excavations were conducted under the technical direction of bernardino bagolini from 1972 to 1981, and by alberto broglio and stefan k. kozlowski from 1982 to 1985 for the mesolithic phases [8], [9]. the site is characterised by a complex stratigraphic evolution from the mesolithic to the middle bronze age, with a stratigraphic continuity between the castelnovian mesolithic and the local early neolithic deposits, the latter dated between the end of the 6th and the beginning of the 5th millennium bc. the site is one of the main pieces of evidence for the understanding of the earliest neolithic in trentino alto-adige and gives its name to the cultural group present in the adige valley during this period. unlike lugo di grezzana, and more generally the fiorano culture, the main aspect of the gaban group appears to be a strong mesolithic component, especially observed in the lithic and bone industries, with extraordinary examples of mobiliary art that give the site not only the appearance of a simple rock-shelter but probably also a magical-religious connotation [8], [10]. from a typological point of view, the gaban group, despite a markedly autonomous framework, presents several connections with other cultural groups of the early neolithic, in particular with the isolino and vhò groups, and to a lesser extent with fiorano [11].
regarding the material culture identified at riparo gaban (figure 3), the stratigraphic evolution allowed the observation of an older phase characterised by a strong presence of impressed ware and a more recent phase where scratched pottery is more attested [11]. the neolithic occupation is interrupted from 4700 bc cal, documenting a possible phase of abandonment of the rock-shelter up to 2700 bc cal [8].

2. the calculation of volumetric capacity

the volumetric estimate of a vessel can be obtained mainly through three types of volume calculation: direct measurements, two-dimensional geometrical methods (manual calculation), and computer-assisted methods, the latter based on 3d models (automatic calculation). direct measurements are taken from the container and directly yield the volumetric capacity. these methods involve filling the vessel with a suitable material able to adapt to the internal profile. however, they cannot be applied to the entire ceramic record, both because usually only a limited percentage of vessels from archaeological excavations are complete or partially reconstructed, and for conservation issues [12]-[14]. the second method, manual calculation, is based on the decomposition of the vessel volume into basic forms (spheres, cylinders, or truncated cones) whose volumes are calculated through mathematical formulae [15]-[22]. the archaeological drawing represents the starting point of this method and, unlike direct measurements, does not require the availability of the archaeological find. however, the degree of approximation represents a negative aspect, as the complex form of the vessel is transformed into a simplified form, although this depends on the geometric shapes used. the last method, computer-assisted, is based on 3d models. here too, the volumetric capacity is obtained through measurements taken directly on the archaeological drawings, exploiting the principle of symmetry.
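as an illustration of the manual (two-dimensional geometrical) method, the sketch below sums the volumes of the basic solids mentioned above; the profile dimensions are hypothetical, chosen only to show the calculation:

```python
import math

# manual method: decompose the vessel profile into basic solids and sum
# their volumes. all dimensions are in cm (hypothetical example).

def cylinder(r: float, h: float) -> float:
    return math.pi * r**2 * h

def truncated_cone(r1: float, r2: float, h: float) -> float:
    return math.pi * h * (r1**2 + r1 * r2 + r2**2) / 3.0

def hemisphere(r: float) -> float:
    return 2.0 / 3.0 * math.pi * r**3

# e.g. a jug-like vessel: hemispherical base + cylindrical body + narrowing neck
volume_cm3 = hemisphere(4.0) + cylinder(4.0, 6.0) + truncated_cone(4.0, 2.5, 3.0)
print(f"{volume_cm3:.1f} cm3 = {volume_cm3:.1f} ml")
```

the approximation error of this method depends entirely on how closely the chosen basic solids follow the real profile, which is exactly the limitation noted above.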
different software can be used, such as autocad®, rhinoceros™ and blender® [14], [23]. in addition, other suitable programs are available, such as kotyle© [24] and web applications like capacity [12], [25], [26]. in this study, the 3d graphics program of choice is blender® [27], since it is free and open source, which allows users to write extensions in order to improve it. the estimate of the volumetric calculation relied on the 3d-print toolbox extension, although different add-ons are known to be effective as well [23]. regarding the study of the ceramic record, it is worth mentioning the 3d digitisation through photogrammetry of some of the pottery discussed in this paper. this work was carried out at tefalab (laboratorio di tecniche fotografiche avanzate, unit of labaaf, university of trento) under the technical direction of paolo chistè. as for the study of volumetric capacity, only preliminary results are available for the site of lugo di grezzana [28], while for riparo gaban this aspect of the research has not been studied yet. at the present time, a systematic analysis evaluating the metric criteria of the ceramics of the fiorano culture has not yet been carried out [29]. however, for the neolithic of northern italy there is a typological classification of the vessels that distinguishes their morphology in relation to the profile, the diameter/height ratio (ø/h), and the size of the mouth [30]. nevertheless, this classification does not include the volumetric capacity parameter.

figure 2. vessel from the neolithic site of lugo di grezzana (photo p. chistè - labaaf) [2].

figure 3. vessel from the neolithic site of riparo gaban (photo e. turco) [10].

3. material and methods

the methodological protocol was applied to a selection of 48 archaeological drawings, of which 35 are from lugo di grezzana and 13 from riparo gaban (figure 6 and figure 7).
the analysed sample was chosen taking into consideration the typological classification. out of the total, 26 drawings illustrate whole artifacts, with a continuous profile from the rim to the bottom of the vessel (lugo di grezzana: 15 samples; riparo gaban: 11 samples), while the others are only partially preserved. for the latter, as in the previous study [28], it was therefore necessary to hypothesize the profile and the height of the vessel. to carry out this operation, the fragmented samples were integrated through the study of whole ceramic vessels belonging to the same typological class (figure 4). for both groups, and especially the latter, it is essential to keep in mind that the capacity estimate deduced from archaeological drawings has a different degree of accuracy: the manual drawing is based on one radial section and represents a two-dimensional shape of the vessel, not taking into account the possible variations in the three-dimensional shape of the object [31]. the development of the operating methodology is based on three minimum requirements that the ceramics and drawings must satisfy:

• availability of the diameter and the internal profile;
• scale of representation;
• high-resolution drawing (d.p.i.).

the calculation of the volumetric capacity is carried out by importing each drawing into the 3d graphics program (blender version 2.92), providing the exact graphic resolution of the file (d.p.i.). this step is necessary in order to avoid any change in the original dimensions of the imported drawing, which would entail an incorrect estimate of the volume. subsequently, a bézier curve is generated, modified along the x and y axes and divided into several segments, in order to trace the underlying drawing. after obtaining a 2d profile, it is necessary to generate a line (path), which will correspond to the rotation axis of the curve itself and to the midline of the archaeological drawing.
once the rotation axis is fixed, the curve can be rotated by 360 degrees. this procedure requires defining some options, namely: the cartesian axis to which the curve is oriented, the object around which the rotation takes place and, lastly, the number of segments the revolution is divided into (a greater number of segments corresponds to a better graphic resolution and consequently a more accurate estimate of the volume).

figure 4. typological table with some samples used to hypothesize the profile and the height of the vessels: 1-3, 6, 8-11 from lugo di romagna (ra) [32]; 4-5, 7 from vhò di piadena-campo ceresole (cr) [33].

the essential step for obtaining the volume is the closure of the solid at the rim and at the base. once the solid is closed, the calculation of the volumetric capacity is performed automatically using the add-on 3d-print toolbox (available since version 2.67, released in may 2013); the volume is expressed in cm³ (figure 5). the validity of the procedure was established during the formulation of the method, through the graphic reproduction and the volumetric calculation of a cylinder of known dimensions (r = 5 cm; h = 20 cm). this procedure made it possible to calculate the absolute and relative error of the developed method, taking into account the tolerance. the latter arises from different causes, such as the inherent uncertainty regarding the measured object, the conservation status, the operator, the procedure and the measuring instrument used. taking these issues into account, a tolerance of about ± 1 mm was estimated.

absolute error (ea) = (vol_max - vol_min) / 2 = 70.6889 cm³
relative error (re) = ea / vol_avg = 0.0449
percentage error (pe) = re × 100 % = 4.49 %

the methodological approach was subsequently extended, considering two hypothetical types of contents, a liquid and a solid one.
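the 360° revolution and the tolerance analysis on the validation cylinder can be reproduced outside blender. a minimal python sketch (a numeric stand-in, not the 3d-print toolbox itself) that sweeps a (radius, height) polyline into frustum slices and propagates the ± 1 mm tolerance:

```python
import math

def revolve_volume(profile):
    """volume (cm3) of the solid obtained by revolving a (radius, height)
    polyline around the vertical axis; each segment sweeps a frustum slice."""
    vol = 0.0
    for (r1, z1), (r2, z2) in zip(profile, profile[1:]):
        vol += math.pi * (z2 - z1) * (r1**2 + r1 * r2 + r2**2) / 3.0
    return vol

# validation cylinder: r = 5 cm, h = 20 cm, tolerance ±1 mm (0.1 cm)
tol = 0.1
v_max = revolve_volume([(5 + tol, 0.0), (5 + tol, 20 + tol)])
v_min = revolve_volume([(5 - tol, 0.0), (5 - tol, 20 - tol)])
ea = (v_max - v_min) / 2            # absolute error
re = ea / ((v_max + v_min) / 2)     # relative error
print(f"EA = {ea:.4f} cm3, RE = {re:.4f}, PE = {re * 100:.2f} %")
```

the printed values (ea ≈ 70.689 cm³, re ≈ 0.045, pe ≈ 4.5 %) agree with the 70.6889 cm³, 0.0449 and 4.49 % reported above to within rounding.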
as far as the estimate of the capacity is concerned, it was treated by converting the measure from cm³ to ml (1 cm³ = 1 ml). in the case of solid contents, instead, the weight (in grams) of three types of cereals was calculated: whole barley, emmer and naked wheats, selected according to the data collected from the archaeobotanical analysis carried out for the site of lugo di grezzana [34]. the weights were estimated in relation to the bulk density of each kind of cereal (whole barley 0.61 ÷ 0.69 g/ml, emmer 0.47 g/ml and naked wheats 0.54 g/ml) [35], [36] and the volumes of the containers, according to the following formula:

weight = bulk density × volume .

lastly, metrical analyses were carried out through the correlation of the maximum volumetric capacity (cm³), the diameter/height ratio (ø/h), and the typology [30].

figure 5. summary scheme of the operating methodology performed with blender®.

table 1. summary of results from lugo di grezzana (l.g.) and riparo gaban (r.g.). legend: bw. = bowl; cp. = cup; h.v. = handle vessel; jg. = jug; l.b. = large bowl; ld. = ladle; mn. = miniaturistic; n.v. = necked vessel; pt. = pot; t.v. = truncate cone-shaped vessel; * = partially preserved. columns: estimated liquid content in ml; estimated solid content in g for whole barley (min ÷ max), emmer and naked wheats; ø/h ratio.

sample | liquid (ml) | barley min (g) | barley max (g) | emmer (g) | naked wheats (g) | ø/h
l.g. bw. 1* | 545 | 332 | 376 | 256 | 294 | 2.45
l.g. bw. 2* | 1014 | 619 | 700 | 477 | 548 | 3.03
l.g. bw. 3* | 1680 | 1025 | 1159 | 789 | 907 | 2.39
l.g. bw. 4* | 3755 | 2290 | 2591 | 1765 | 2028 | 1.97
l.g. l.b. 1* | 5276 | 3218 | 3640 | 2480 | 2849 | 3.23
l.g. l.b. 2 | 6959 | 4245 | 4802 | 3271 | 3758 | 2.74
l.g. l.b. 3* | 4814 | 2936 | 3322 | 2263 | 2599 | 2.42
l.g. l.b. 4* | 6763 | 4125 | 4666 | 3178 | 3652 | 2.18
l.g. t.v. 1 | 5646 | 3444 | 3895 | 2653 | 3049 | 0.92
l.g. t.v. 2* | 17307 | 10557 | 11942 | 8134 | 9346 | 1.08
l.g. t.v. 3 | 2967 | 1810 | 2047 | 1395 | 1602 | 0.97
l.g. t.v. 4* | 1329 | 811 | 917 | 625 | 718 | 0.77
l.g. t.v. 5* | 5941 | 3624 | 4099 | 2792 | 3208 | 0.82
l.g. t.v. 6 | 944 | 576 | 652 | 444 | 510 | 1.00
l.g. t.v. 7 | 3961 | 2416 | 2733 | 1862 | 2139 | 0.80
l.g. t.v. 8* | 5235 | 3193 | 3612 | 2460 | 2827 | 0.87
l.g. t.v. 9* | 750 | 458 | 518 | 353 | 405 | 1.08
l.g. t.v. 10* | 1521 | 928 | 1049 | 715 | 821 | 0.84
l.g. t.v. 11* | 2112 | 1288 | 1457 | 993 | 1140 | 0.96
l.g. h.v. 1 | 925 | 564 | 638 | 435 | 499 | 1.16
l.g. jg. 1 | 413 | 252 | 285 | 194 | 223 | 0.86
l.g. jg. 2 | 1825 | 1114 | 1260 | 858 | 986 | 0.83
l.g. jg. 3 | 772 | 471 | 533 | 363 | 417 | 0.83
l.g. jg. 4 | 839 | 512 | 579 | 394 | 453 | 0.74
l.g. jg. 5* | 653 | 398 | 451 | 307 | 353 | 0.88
l.g. jg. 6* | 889 | 542 | 614 | 418 | 480 | 0.83
l.g. jg. 7* | 1508 | 920 | 1040 | 709 | 814 | 0.87
l.g. jg. 8* | 2022 | 1234 | 1395 | 951 | 1092 | 0.85
l.g. n.v. 1* | 6054 | 3693 | 4177 | 2845 | 3269 | 0.47
l.g. n.v. 2* | 13378 | 8161 | 9231 | 6288 | 7224 | 0.26
l.g. ld. 1 | 38 | 23 | 27 | 18 | 21 | 1.84
l.g. mn. 1 | 53 | 33 | 37 | 25 | 29 | 0.76
l.g. mn. 2 | 158 | 97 | 109 | 74 | 86 | 2.24
l.g. mn. 3 | 59 | 36 | 41 | 28 | 32 | 1.14
l.g. mn. 4 | 79 | 48 | 54 | 37 | 42 | 2.50
r.g. bw. 1 | 2858 | 1743 | 1972 | 1343 | 1543 | 1.52
r.g. t.v. 1* | 2894 | 1765 | 1997 | 1360 | 1563 | 0.97
r.g. t.v. 2 | 17135 | 10452 | 11823 | 8053 | 9253 | 1.11
r.g. t.v. 3 | 1589 | 969 | 1097 | 747 | 858 | 1.18
r.g. t.v. 4* | 1290 | 787 | 890 | 606 | 697 | 0.82
r.g. cp. 1 | 397 | 242 | 274 | 186 | 214 | 1.19
r.g. cp. 2 | 403 | 246 | 278 | 190 | 218 | 1.10
r.g. cp. 3 | 764 | 466 | 527 | 359 | 413 | 1.38
r.g. cp. 4 | 1406 | 858 | 970 | 661 | 759 | 0.96
r.g. cp. 5 | 2023 | 1234 | 1396 | 951 | 1092 | 1.02
r.g. jg. 1 | 645 | 393 | 445 | 303 | 348 | 0.82
r.g. pt. 1 | 1668 | 1017 | 1151 | 784 | 901 | 0.52
r.g. mn. 1 | 60 | 36 | 41 | 28 | 32 | 1.58

figure 6. typological table of the samples analysed during the study from lugo di grezzana (l.g.) and riparo gaban (r.g.). scale drawing 1:6. legend: bw. = bowl; l.b. = large bowl; t.v. = truncate cone-shaped vessel; * = partially preserved.

figure 7. typological table of the samples analysed during the study from lugo di grezzana (l.g.) and riparo gaban (r.g.). scale drawing 1:6. legend: cp. = cup; h.v. = handle vessel; jg. = jug; mn. = miniaturistic; n.v. = necked vessel; pt. = pot; t.v. = truncate cone-shaped vessel; * = partially preserved.

4.
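the weight formula above can be applied directly to the volumes of table 1. a minimal python sketch using the bulk densities quoted in the text (the results match the table entries up to the rounding applied to the published volumes):

```python
# weight (g) = bulk density (g/ml) x volume (ml), densities from the text
DENSITIES = {
    "whole barley (min)": 0.61,
    "whole barley (max)": 0.69,
    "emmer": 0.47,
    "naked wheats": 0.54,
}

def solid_weights(volume_ml: float) -> dict:
    """estimated weight in grams of each cereal filling the given volume."""
    return {name: round(d * volume_ml) for name, d in DENSITIES.items()}

print(solid_weights(545))  # sample l.g. bw. 1*: 332, 376, 256 and 294 g
```

for sample l.g. bw. 1* (545 ml) this reproduces the 332 ÷ 376 g of whole barley, 256 g of emmer and 294 g of naked wheats reported in table 1.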
results

the methodological approach made it possible to provide an estimate of the capacity (ml) and the weight of different contents (g). at the same time, it was possible to correlate the values determined by the computer-assisted calculations with the ratio between diameter and height (table 1), distinguishing them based on typology¹ (table 2). the data were elaborated through the compilation of a scatter plot, reporting the volumetric capacity on the x axis and the ø/h ratio on the y axis (figure 8). from a volumetric point of view, the same degree of variation is observed in either case, where the maximum limit is about 17,000 cm³ and is represented by two truncate cone-shaped vessels (l.g. t.v. 2*; r.g. t.v. 2). at the same time, however, the distribution of the samples is different. in the case of riparo gaban, almost all samples (12 out of 13) have a volumetric capacity lower than 3000 cm³, while for the site of lugo di grezzana this is found in two-thirds of the samples (23 out of 35), showing a wider volumetric variability (figure 8). a similar distribution is observed for the ø/h ratio, which is wider for lugo di grezzana (0.26 ÷ 3.23) than for riparo gaban (0.52 ÷ 1.58). these dissimilarities are due to the absence of some ceramic forms (large bowls, necked vessels) or to their presence in a smaller percentage (bowls, truncate cone-shaped vessels) in the dataset of riparo gaban.

¹ in this study, the distinction between jugs and mugs follows the typological classification defined in banchieri et al. 1999 [30].

for some of the better represented typological classes, it was possible to make a comparison of the samples between the two investigated sites (table 2).

bowls: represented by 5 samples (4 of which are partially reconstructed). the ceramic samples are characterised by a volume between 545 and 3755 ml, containing between 256 and 2591 g of solid content and a ø/h ratio between 1.52 and 3.03.
although only one sample comes from riparo gaban and most of the volumes are reconstructed, there is a distinction on the basis of the ø/h ratio, estimated between 1.97 and 3.03 for lugo di grezzana, compared to a lower value of 1.52 for riparo gaban.

jugs: represented by 9 samples (4 of which are partially reconstructed). the ceramic samples are characterised by a volume between 413 and 2022 ml, containing between 194 and 1395 g of solid content and a ø/h ratio between 0.74 and 0.88. although most of the samples come from lugo di grezzana, a limited variability is observed both in the volumetric data and, in particular, in the ø/h ratio. this narrower range of the ø/h ratio identifies jugs as the ceramic shape with the highest degree of homogeneity compared to the other typological classes.

truncate cone-shaped vessels: represented by 15 samples (9 of which are partially reconstructed). the ceramic samples are characterised by a volume between 750 and 17307 ml, containing between 353 and 11942 g of solid content and a ø/h ratio between 0.77 and 1.18. both datasets are characterised by homogeneity in volumetric capacity (lugo di grezzana: 750 ÷ 17307 ml, 353 ÷ 11942 g; riparo gaban: 1290 ÷ 17135 ml, 606 ÷ 11823 g) and ø/h ratio (lugo di grezzana: 0.77 ÷ 1.08; riparo gaban: 0.82 ÷ 1.18).

miniaturistic forms: represented by 6 samples. the ceramic samples are characterised by a volume between 53 and 158 ml, containing between 25 and 109 g of solid content and a ø/h ratio between 0.76 and 2.5. the dataset is represented by different ceramic shapes, with a wide variability of the ø/h ratio, associated only by their limited dimensions.

table 2. summary of results organised by typological class. for each class: number of samples (of which partially preserved), estimated liquid content in ml, estimated solid content in g, and ø/h ratio.

bowl - total: 5 (4 partial), 545 ÷ 3755 ml, 256 ÷ 2591 g, ø/h 1.52 ÷ 3.03; lugo di grezzana: 4 (4 partial), 545 ÷ 3755 ml, 256 ÷ 2591 g, ø/h 1.97 ÷ 3.03; riparo gaban: 1, 2858 ml, 1343 ÷ 1972 g, ø/h 1.52
large bowl - total: 4 (3 partial), 4814 ÷ 6959 ml, 2263 ÷ 4802 g, ø/h 2.18 ÷ 3.23; lugo di grezzana: 4 (3 partial), 4814 ÷ 6959 ml, 2263 ÷ 4802 g, ø/h 2.18 ÷ 3.23
truncate cone-shaped vessel - total: 15 (9 partial), 750 ÷ 17307 ml, 353 ÷ 11942 g, ø/h 0.77 ÷ 1.18; lugo di grezzana: 11 (7 partial), 750 ÷ 17307 ml, 353 ÷ 11942 g, ø/h 0.77 ÷ 1.08; riparo gaban: 4 (2 partial), 1290 ÷ 17135 ml, 606 ÷ 11823 g, ø/h 0.82 ÷ 1.18
internal handles vessel - total: 1, 925 ml, 435 ÷ 638 g, ø/h 1.16; lugo di grezzana: 1, 925 ml, 435 ÷ 638 g, ø/h 1.16
mug - total: 5, 397 ÷ 2023 ml, 186 ÷ 1396 g, ø/h 0.96 ÷ 1.38; riparo gaban: 5, 397 ÷ 2023 ml, 186 ÷ 1396 g, ø/h 0.96 ÷ 1.38
jug - total: 9 (4 partial), 413 ÷ 2022 ml, 194 ÷ 1395 g, ø/h 0.74 ÷ 0.88; lugo di grezzana: 8 (4 partial), 413 ÷ 2022 ml, 194 ÷ 1395 g, ø/h 0.74 ÷ 0.88; riparo gaban: 1, 645 ml, 303 ÷ 445 g, ø/h 0.82
pot - total: 1, 1668 ml, 784 ÷ 1151 g, ø/h 0.52; riparo gaban: 1, 1668 ml, 784 ÷ 1151 g, ø/h 0.52
necked vessel - total: 2 (2 partial), 6054 ÷ 13378 ml, 2845 ÷ 9231 g, ø/h 0.26 ÷ 0.47; lugo di grezzana: 2 (2 partial), 6054 ÷ 13378 ml, 2845 ÷ 9231 g, ø/h 0.26 ÷ 0.47
ladle - total: 1, 38 ml, 18 ÷ 27 g, ø/h 1.84; lugo di grezzana: 1, 38 ml, 18 ÷ 27 g, ø/h 1.84
miniaturistic - total: 5, 53 ÷ 158 ml, 25 ÷ 109 g, ø/h 0.76 ÷ 2.5; lugo di grezzana: 4, 53 ÷ 158 ml, 25 ÷ 109 g, ø/h 0.76 ÷ 2.5; riparo gaban: 1, 60 ml, 28 ÷ 41 g, ø/h 1.58

5.
Discussion

The methodological protocol yielded an analysis of the volumetric capacity of a wide selection of samples, correlating the volumetric data with the diameter/height ratio and the typological class of each ceramic vessel. Compared to the previous study [28], the larger number of samples provided further information on the individual typological classes, especially the best-attested ones: bowls, large bowls, truncated cone-shaped vessels, cups, jugs and miniaturistic forms. The different distribution of the ceramic assemblages (Figure 8) highlighted a wider spread of the samples from Lugo di Grezzana, both in the volumetric estimates (mainly within 6000 cm3) and in the ø/h ratio (between 0.26 and 3.23), whereas at Riparo Gaban both the volumetric range (mainly within 3000 cm3) and the ø/h ratio (between 0.56 and 1.58) show a lower variability. This different distribution could have several causes: the poor conservation of some typological classes (e.g. necked vessels, bowls and large bowls), which prevents an estimate of their volume, and a different character of Riparo Gaban compared to the settlement of Lugo di Grezzana, the latter being marked by numerous structural complexes [3], [37], [38] as well as by a larger number of ceramic finds. Concerning the distribution of each typological class in relation to the parameters examined, for most of them it was possible to highlight a specific distribution, according to four trends:
- limited volumetric range and limited ø/h ratio (jugs, mugs);
- limited volumetric range and wide ø/h ratio (miniaturistic forms);
- wide volumetric range and limited ø/h ratio (necked vessels, truncated cone-shaped vessels);
- wide volumetric range and wide ø/h ratio (bowls, large bowls).
For each typological class, the greater or lesser volumetric capacity and ø/h ratio could provide new information for the research.
For instance, in the case of jugs, the limited ø/h ratio revealed a degree of homogeneity higher than in the other typological classes. This could result from several factors, such as the attribution of the ceramic shape to a limited number of functions and/or the evidence of a model widely shared within the Fiorano culture, of which jugs are one of the most distinctive shapes [39]. The case of the truncated cone-shaped vessels, typical of the Vhò group, is similar: their wide volumetric range could be the outcome of a plurality of technological aspects. For example, features like an unrestricted orifice and thick walls and bases, found in particular in large samples to increase stability, could point to dry storage vessels [20], but an evaluation of their functionality is not yet possible. Regarding the functions of each ceramic class, although the volumetric capacity depends on shape and size, it was not possible to establish a direct relationship with function. As noted by Rice [20], pots are multifunctional, with primary or secondary uses before being abandoned.

Figure 8. Scatter plot of volumetric capacity (x axis) versus diameter/height ratio (y axis). Lugo di Grezzana (L.G. = green), Riparo Gaban (R.G. = red). Legend: Bw. = bowl; Cp. = cup; H.V. = handle vessel; Jg. = jug; L.B. = large bowl; Ld. = ladle; Mn. = miniaturistic; N.V. = necked vessel; Pt. = pot; T.V. = truncated cone-shaped vessel.
In other words, the relation between the use and the capacity of a vessel depends on several considerations, such as the amount and kind of contents (liquid or solid), the duration of storage, the number of uses, microenvironmental factors or other necessities [40], [41]. To address this complexity, volumetric and typological aspects must be related to other technological criteria like petrographic analysis, surface treatment processes (smoothing, polishing, slip) [42], use-wear and organic residues [43]-[45]. Ceramic paste and manufacturing analyses are under study and will provide new interpretative insights into the functionality of each typological class.

6. Conclusion

This study aimed to provide new data on the estimation of the volumetric capacity of ceramic vessels from the Neolithic sites of Lugo di Grezzana and Riparo Gaban, with the aid of 3D graphic software. The automatic calculation, based on a reconstruction of the vessel in Blender®, is an efficient method for estimating the volumetric capacity: it allows working directly on the available published documentation, and the calculation takes only a few steps, with results reliable enough to be applied to an archaeological study. The results show a different distribution of the two ceramic datasets, which could depend on several causes, such as the poor conservation of some typological classes, which prevents a volume estimate, or the different character of the two sites. Concerning the distribution of the individual typological classes, for most of them it was possible to highlight a specific distribution of the results, according to four trends, each of which could be conditioned by functional uses and/or cultural models. To date, numerous questions have therefore emerged and remain unresolved, especially regarding jugs and truncated cone-shaped vessels.
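The Blender-based procedure itself is not reproduced here, but its core idea can be sketched numerically: treat the interior of a vessel as a solid of revolution obtained from its profile drawing, integrate the disc areas to get the capacity, and convert the capacity into a solid-content range through bulk densities. All profile dimensions below are illustrative assumptions; the density bounds of 0.47 and 0.69 g/ml are likewise illustrative values inferred from the ratios between the solid-content and capacity ranges reported in Table 2 (the paper cites FAO/INFOODS density data, refs [35], [36], for this step).

```python
import math

def capacity_ml(profile, height_cm, steps=10000):
    """Estimate the capacity (ml = cm^3) of a vessel modelled as a solid of
    revolution: integrate pi*r(z)^2 dz over the height (disc method), where
    profile(z) gives the interior radius in cm at height z."""
    dz = height_cm / steps
    # midpoint rule over horizontal discs
    return sum(math.pi * profile((k + 0.5) * dz) ** 2 * dz for k in range(steps))

def solid_content_g(capacity_ml, rho_min=0.47, rho_max=0.69):
    """Convert a capacity (ml) into an estimated solid-content range (g),
    assuming bulk densities between rho_min and rho_max g/ml (illustrative)."""
    return round(capacity_ml * rho_min), round(capacity_ml * rho_max)

# Illustrative truncated-cone interior: radius grows linearly from 5 cm at
# the base to 10 cm at the rim over a height of 20 cm.
r_base, r_rim, h = 5.0, 10.0, 20.0
cone = lambda z: r_base + (r_rim - r_base) * z / h

v = capacity_ml(cone, h)
# Closed form for a truncated cone: pi*h/3 * (R^2 + R*r + r^2)
v_exact = math.pi * h / 3 * (r_rim ** 2 + r_rim * r_base + r_base ** 2)
print(round(v), round(v_exact))      # the numerical and exact values agree
print(solid_content_g(2858))         # -> (1343, 1972), cf. the bowl row of Table 2
```

The numerical integral matches the closed-form volume to well under 1 ml, and the 2858 ml example reproduces the 1343 ÷ 1972 g range given for the Riparo Gaban bowl, which is what motivated the illustrative density bounds.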
Could the limited volumetric range and ø/h ratio of the jugs be evidence of a restricted number of functions? Likewise, is this homogeneity shared by other sites belonging to the Fiorano culture? How diversified is it compared to other contemporary cultures of northern Italy? Conversely, in the case of the truncated cone-shaped vessels, could a wide volumetric range combined with a limited ø/h ratio represent a plurality of technological aspects compared to the other typological classes? In general terms, vessel capacity is one of the parameters necessary for the functional understanding of the artifacts. However, only through a systematic application of this method to other contemporary sites, and the evaluation of further investigation parameters (currently in progress), will it be possible to obtain more information about the functionality of ceramic vessels.

Acknowledgement

The research project is conducted by the research laboratory LaBAAF (Laboratorio Bagolini Archeologia Archeometria Fotografia), which belongs to CeASUm (Centro di Alti Studi Umanistici) of the University of Trento. Prof.ssa Annaluisa Pedrotti is the scientific manager of the project, and Paolo Chistè is the technical director of TeFaLab (Laboratorio di Tecniche Fotografiche Avanzate, unit of LaBAAF). All the authors contributed equally to research and writing.

References

[1] F. Cavulli, D. E. Angelucci, A. Pedrotti, La successione stratigrafica di Lugo di Grezzana (Verona), Preistoria Alpina 38.8 (2002), pp. 89-107.
[2] A. Pedrotti, P. Salzani, Lugo di Grezzana: un "emporio" di settemila anni fa sui Monti Lessini veronesi, La Lessinia ieri oggi domani 33 (2010), pp. 87-103.
[3] A. Pedrotti, P. Salzani, F. Cavulli, M. Carotta, D. E. Angelucci, L. Salzani, L'insediamento di Lugo di Grezzana (VR) nel quadro del primo Neolitico padano alpino, in: Studi di Preistoria e Protostoria 2, Preistoria e Protostoria in Veneto. G. Leonardi, V. Tinè (editors). IIPP, Firenze, 2015, ISBN 9788860450562, pp. 95-107.
[4] L. Salzani, Grezzana, abitato neolitico in località Campagne di Lugo, in: Quaderni di Archeologia del Veneto IX. Giunta Regionale del Veneto (editor). Canova, Padova, 1993, ISBN 9788886177191, pp. 83-87.
[5] L. Moser, Il sito neolitico di Lugo di Grezzana (Verona). I materiali archeologici della campagna di scavo 1993, in: La neolitizzazione tra Oriente ed Occidente. A. Pessina, G. Muscio (editors). Museo Friulano di Storia Naturale, Udine, 2000, pp. 125-150.
[6] F. Santaniello, V. Delladio, A. Ferrazzi, S. Grimaldi, A. Pedrotti, Nuovi dati sulla tecnologia litica del Neolitico antico dell'area padano alpina: i rimontaggi di Lugo di Grezzana (Verona), Ipotesi di Preistoria 13.1 (2020), pp. 53-66. DOI: 10.6092/issn.1974-7985/11008
[7] A. Pedrotti, Il Riparo Gaban (Trento) e la neolitizzazione della valle dell'Adige, in: Antenate di Venere 27.000 - 4.000 a.C. V. Kruta, L. Kruta Poppi, M. Lička, E. Magni (editors). Catalogo della mostra, Skira, Milano, 2009, ISBN 9788857204765, pp. 39-47.
[8] B. Bagolini, Riparo Gaban: preistoria ed evoluzione dell'ambiente, Museo Tridentino di Scienze Naturali, Edizioni didattiche, 1980.
[9] B. Bagolini, A. Pedrotti, L'Italie septentrionale, in: Atlas du Néolithique européen. J. Guilaine (editor). Université de Liège, Liège, 1998, pp. 233-341.
[10] A. Pedrotti, Il gruppo Gaban e le manifestazioni d'arte del primo Neolitico, in: Settemila anni fa il primo pane. Ambienti e culture delle società neolitiche. A. Pessina, G. Muscio (editors). Catalogo mostra, Museo Friulano di Storia Naturale, Udine, 1998, pp. 125-131.
[11] B. Bagolini, P. Biagi, Le più antiche facies ceramiche dell'ambiente padano, Rivista di Scienze Preistoriche 32 (1977), pp. 219-233.
[12] L. Engels, L. Bavay, A. Tsingarida, Calculating vessel capacities: a new web-based solution, in: Proceedings of the Symposium Shapes and Uses of Greek Vases (7th-4th centuries BC). A. Tsingarida (editor). CReA-Patrimoine, Bruxelles, 2009, ISBN 9789077723852, pp. 129-133.
[13] E. C.
Rodriguez, C. A. Hastorf, Calculating ceramic vessel volume: an assessment of methods, Antiquity 87 (2013), pp. 1182-1190. DOI: 10.1017/s0003598x00049942
[14] C. Velasco Felipe, E. Celdrán Beltrán, Towards an optimal method for estimating vessel capacity in large samples, Journal of Archaeological Science 27 (2019), pp. 1-12. DOI: 10.1016/j.jasrep.2019.101966
[15] A. O. Shepard, Ceramics for the Archaeologist, Carnegie Institution, Washington, 1956, ISBN 9780872796201.
[16] N. Castillo Tejero, J. Litvak, Un sistema de estudio para formas de vasijas, Departamento de Prehistoria, Instituto Nacional de Antropología e Historia, Mexico City, 1968.
[17] J. W. Ericson, E. G. Stickel, A proposed classification system for ceramics, World Archaeology 4.3 (1973), pp. 357-367. DOI: 10.1080/00438243.1973.9979545
[18] G. A. Johnson, Local Exchange and Early State Development in Southwestern Iran, Anthropological Archaeology 51, University of Michigan Press, 1973. DOI: 10.3998/mpub.11396443
[19] P. M. Rice, Pottery Analysis: A Sourcebook, University of Chicago Press, Chicago, 1987, ISBN 9780226711164.
[20] P. M. Rice, Pottery Analysis: A Sourcebook, University of Chicago Press, Chicago, 2015, ISBN 9780226923215.
[21] L. M. Senior, D. P. Birnie, Accurately estimating vessel volume from profile illustrations, American Antiquity 60.2 (1995), pp. 319-334. DOI: 10.2307/282143
[22] J. P. Thalmann, A seldom used parameter in pottery studies: the capacity of pottery vessels, in: The Synchronization of Civilizations in the Eastern Mediterranean in the Second Millennium B.C. III. M. Bietak, E. Czerny (editors). Österreichische Akademie der Wissenschaften, Vienne, 2007, ISBN 9783700135272, pp. 431-438.
[23] Á. Sánchez Climent, M. L.
Cerdeño Serrano, Methodological proposal for the volumetric study of archaeological ceramics through 3D edition free-software programs: the case of the Celtiberian cemeteries of the Meseta, Virtual Archaeology Review 5.11 (2014), pp. 20-33. DOI: 10.4995/var.2014.4173
[24] Kotyle. Software program. Online [accessed 11 March 2022] https://kotyle.readthedocs.io/en/latest/index.html
[25] N. Karasik, P. U. Smilansky, Instructions for users of the module 'Capacity', 2006. Online [accessed 10 June 2012] http://archaeology.huji.ac.il/depart/computerized/files/instructions_capacity_6.pdf
[26] Capacity - Centre de Recherches en Archéologie et Patrimoine, Université Libre de Bruxelles. Web application. Online [accessed 11 March 2022] https://capacity.ulb.ac.be/index.php?langue=en
[27] Blender Foundation, Blender, the free open source 3D content creation suite, available for all major operating systems under the GNU General Public License. Online [accessed 11 March 2022] https://www.blender.org/
[28] A. Tavella, M. Ciela, P. Chistè, A. Pedrotti, Preliminary studies on the volumetric capacity of ceramic from the Neolithic site of Lugo di Grezzana (VR) through 3D graphics software, 2020 IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage, Trento, Italy, 22-24 October 2020, pp. 257-262. Online: https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-049.pdf
[29] V. Becker, Studien zum Altneolithikum in Italien, vol. 3, LIT Verlag, Münster, 2018, ISBN 9783643142207.
[30] D. G. Banchieri, E. Montanari, G. Odetti, A. Pedrotti, Il Neolitico dell'Italia settentrionale, in: Criteri di nomenclatura e di terminologia inerente alla definizione delle forme vascolari del Neolitico/Eneolitico e del Bronzo/Ferro. D. Cocchi Genick (editor). Atti del Congresso di Lido di Camaiore, 26-29 marzo 1998, vol. 1, Octavo, Firenze, 1999, ISBN 9788880301905, pp. 43-62.
[31] A. M. Portillo, C.
Sanz, Fourth order method to compute the volume of archaeological vessels using radial sections: Pintia pottery (Spain) as case study, International Journal of Computer Mathematics 98 (2021), pp. 705-718. DOI: 10.1080/00207160.2020.1777405
[32] N. Dal Santo, G. Steffè, Le industrie: ceramica, pietra scheggiata, altre pietre lavorate, in: Il villaggio neolitico di Lugo di Romagna Fornace Gattelli. Strutture ambiente culture. G. Steffè, N. Degasperi (editors). Origines, Istituto Italiano di Preistoria e Protostoria, Firenze, 2019, ISBN 9788860450746, pp. 393-466.
[33] P. Biagi, E. Starnini, D. Borić, N. Mazzucco, Early Neolithic settlement of the Po plain (northern Italy): Vhò and related sites, Documenta Praehistorica XLVII (2020), pp. 192-221. DOI: 10.4312/dp.47.11
[34] M. Rottoli, F. Cavulli, A. Pedrotti, L'agricoltura di Lugo di Grezzana (Verona): considerazioni preliminari, in: Studi di Preistoria e Protostoria 2, Preistoria e Protostoria in Veneto. G. Leonardi, V. Tinè (editors). IIPP, Firenze, 2015, ISBN 9788860450562, pp. 109-116.
[35] FAO/INFOODS Density Database - Version 2. Online [accessed 11 March 2022] http://www.fao.org/infoods/infoods/tables-and-databases/faoinfoods-databases/en/
[36] Ü. H. Güran, Some physical and nutritional properties of hulled wheat, Journal of Agricultural Sciences 15.1 (2009), pp. 58-64. DOI: 10.1501/tarimbil_0000001073
[37] F. Cavulli, D. E. Angelucci, A. Pedrotti, Nuovi dati sui complessi strutturali in elevato di Lugo di Grezzana (Verona), in: Studi di Preistoria e Protostoria 2, Preistoria e Protostoria in Veneto. G. Leonardi, V. Tinè (editors). IIPP, Firenze, 2015, ISBN 9788860450562, pp. 95-107.
[38] A. Costa, F. Cavulli, A. Pedrotti, I focolari, forni e fosse di combustione di Lugo di Grezzana (VR), Ipotesi di Preistoria 12.1 (2019), pp. 27-48. DOI: 10.6092/issn.1974-7985/10256
[39] A. Pessina, V. Tiné, Archeologia del Neolitico. L'Italia tra VI e IV millennio a.C., Carocci Editore, Roma, 2018, ISBN 9788843092215.
[40] B.
A. Nelson, Ethnoarchaeology and paleodemography: a test of Turner and Lofgren's hypothesis, Journal of Anthropological Research 37.2 (1981), pp. 107-129. DOI: 10.1086/jar.37.2.3629704
[41] D. E. Arnold, Ceramic Theory and Cultural Process, Cambridge University Press, Cambridge, 1985, ISBN 9780521252621.
[42] N. Cuomo di Caprio, Ceramica in archeologia 2, L'Erma di Bretschneider, Roma, 2007, ISBN 9788882653972.
[43] J. Vieugué, Spécialisation fonctionnelle des premières productions céramiques dans les Balkans (6100-5500 av. J.-C.), Bulletin de la Société Préhistorique Française 109.2 (2012), pp. 251-265. DOI: 10.3406/bspf.2012.14106
[44] C. Orton, M. Hughes, Pottery in Archaeology, Cambridge University Press, Cambridge, 2013, ISBN 9780511920066. DOI: 10.1017/cbo9780511920066
[45] J. Vieugué, Y. Garfinkel, O. Barzilai, E. C. M. van den Brink, Pottery function and culinary practices of Yarmukian societies in the late 7th millennium cal. BC: first results, in: Paléorient 42.2, Connections and Disconnections between the Northern and Southern Levant in the Late Prehistory and Protohistory (12th - mid-2nd mill. BC). I. Milevski, F. Bocquentin, M. Molist (editors). CNRS Éditions, Paris, 2016, ISBN 9782271094957, pp. 97-115.
DOI: 10.3406/paleo.2016.5722

Acta IMEKO, December 2013, Volume 2, Number 2, 20 - 27, www.imeko.org

Identification of most influential factors in a virtual reality tracking system using hybrid method

Fabien Ezedine 1, Jean-Marc Linares 2, Wan Mansor Wan Muhamad 1, Jean-Michel Sprauel 2
1 Universiti Kuala Lumpur UniKL MFI, Seksyen 14, Jalan Teras Jernang, 43650 Bandar Baru Bangi, Malaysia
2 Aix-Marseille Université, CNRS, ISM UMR 7287, 13288 Marseille Cedex 09, France

Section: Research Paper

Keywords: Monte Carlo method; design of experiment; Hadamard matrix; uncertainty; virtual reality; tracking system.
Citation: Fabien Ezedine, Jean-Marc Linares, Wan Mansor Wan Muhamad, Jean-Michel Sprauel, Identification of most influential factors in a virtual reality tracking system using hybrid method, Acta IMEKO, vol. 2, no. 2, article 5, December 2013, identifier: IMEKO-ACTA-02 (2013)-02-05

Editor: Paolo Carbone, University of Perugia
Received July 13th, 2013; in final form November 18th, 2013; published December 2013
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: (none reported)
Corresponding author: Fabien Ezedine, email: ezedinefabien@gmail.com

1. Introduction

Carolina Cruz-Neira et al. [1] defined VR in 1992 as a "system which provides real-time viewer-centered head-tracking perspective with a large angle of view, interactive control, and binocular display". The concept, and the relation between the VR system and the user, can be summarized as follows [2]:
- capture of the user's actions;
- computation of these data and creation of a tailored response;
- transmission of the response toward the user.
This concept was greatly improved during the 1980s, thanks to the parallel evolution of computer science and the development of new external devices for haptic interaction and visualisation. Since then, more and more applications have been developed in the manufacturing, research, teaching and therapeutic fields, through multiple devices such as the head-mounted display (HMD), the BOOM or the Cave Automatic Virtual Environment (CAVE). This research work deals with the CAVE.

Figure 1. Model of the locations and orientations of the eight-camera system in the CAVE.

Abstract: This paper studies the factors that influence the accuracy of virtual reality (VR) systems, in particular for applications in a Cave Automatic Virtual Environment (CAVE).
The CAVE can be used to train student surgeons. For this purpose, an application for total knee arthroplasty surgery is investigated. To meet the requirements of high-quality training, the accuracy of the tracking system in the CAVE has to be improved. First, a complete model of the tracking system is created, based on the extrinsic and intrinsic parameters of the eight-camera system. With this model, the uncertainty of the tracking system is determined for one location in the CAVE. Next, a hybrid method, comprising the Monte Carlo method and design of experiment, is used to find the most important factors influencing the tracking accuracy in the CAVE.

Our CAVE consists of a four-projection-side environment (three walls and the ground), a set of cameras (eight in the studied system) and four stereoscopic projectors. Figure 1 models the approximate locations and orientations of the cameras in the CAVE structure. Interactions are created between a numerical model, a projection system, a tracking system and a set of tracked spheres attached to the user. A user immersed in this virtual world wears shutter glasses in order to get a depth perception of the numerical model projected by the projectors via mirrors. The cameras are able to locate the glasses, thus defining the user's field of vision, through the spherical markers tracked by these cameras, shown in Figure 2. This information is then computed and sent back to the user. The technology used by the system strongly influences the quality of the immersion experienced and sensed by the user. These devices provide the information needed to locate the user in the CAVE, using the tracking of optical markers [3].
Applications require different levels of accuracy: for example, a museum visitor travelling through a virtual architecture and a medical student training on knee surgery differ in the precision needed to really feel immersed in the virtual world. Therefore, before loading an application into a VR system, checking the compatibility between the precision required by the application and the accuracy actually delivered is a compulsory step. If this check does not give convincing results, two options are possible: either the application cannot be loaded in the CAVE with its proper features, or a method has to be found to improve the accuracy of the tracking system, which is the purpose of this paper. The method is, first, to model the tracking system in order to define the uncertainties of the captured coordinates, derived from a covariance matrix obtained by Monte Carlo simulations; and second, to identify the most influential factors using a hybrid method involving Monte Carlo simulations, design of experiment, a Hadamard matrix and Bayesian analysis.

2. The pin-hole model

In the tracking system, the ARTtrack cameras are modelled using the principle of the pin-hole camera and projective geometry theory, as shown in Figure 3. Linear algebra is used to provide the Cartesian coordinates m(ui, vi), in the CCD frame of camera i, of the image of a tracked marker. These coordinates are derived from the position s(xi, yi, zi) of the marker in the CAVE global coordinate system, and are given as a function of the extrinsic and intrinsic factors of the camera [4]. Extrinsic factors define the position and the orientation of each camera: the position is characterised by the coordinates of the lens centre oi, and the orientation is defined by azimuth and inclination angles.
Intrinsic factors define the proper characteristics of the camera, such as the focus distance f; the skew coefficient α, defining the angular error between the horizontal and vertical directions of the pixels; and the coordinates (u0i, v0i) of the intercept ci (Figure 3) between the focus axis and the image plane of camera i.

The reference frames used in the model are (i refers to the index of a camera, 1 ≤ i ≤ 8):
- \(R_w(o, \vec{x}, \vec{y}, \vec{z})\): bound to the CAVE, the global coordinate system;
- \(R_{cai}(o_i, \vec{x}_{cai}, \vec{y}_{cai}, \vec{z}_{cai})\): bound to camera i;
- \(R_{im_i}(a_i, \vec{u}_i, \vec{v}_i)\): bound to the image plane of camera i.
The characteristic points are defined as follows:
- \(s(x, y, z)\): coordinates of s in \(R_w\);
- \(s(x_i, y_i, z_i)\): coordinates of s in \(R_{cai}\);
- \(m_i(u_i, v_i)\): coordinates of \(m_i\) in \(R_{im_i}\);
- \(c_i(u_{0i}, v_{0i})\): coordinates of \(c_i\) in \(R_{im_i}\).

The pin-hole model is used to provide a realistic description of a camera. In this approach, the real image plane is shifted to the location \(z_{ci} = f = o_i c_i\) and reversed, making the analysis easier [5].

Figure 2. ARTtrack tracking cameras.
Figure 3. Pin-hole model with f, the focus distance.

The pin-hole model allows defining the image \(m_i\) of any tracked point s in the local reference frame of the camera. The coordinates \((u_i, v_i)\) of \(m_i\) are given as a function of the focus distance f, the skew coefficient α and the scale factor h (equation 1):

$$\begin{bmatrix} h\,u_i \\ h\,v_i \\ h \end{bmatrix} = \begin{bmatrix} f & \alpha & u_{0i} & 0 \\ 0 & f & v_{0i} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} \tag{1}$$

Classical matrix transformations are used to derive the local coordinates of s from its fixed coordinates in the reference frame \(R_w\) of the CAVE:

$$\begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & t_{xi} \\ 0 & 1 & 0 & t_{yi} \\ 0 & 0 & 1 & t_{zi} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & 0 \\ r_{21} & r_{22} & r_{23} & 0 \\ r_{31} & r_{32} & r_{33} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{2}$$

with \(R_i = [r_{jk}] = R_{yi}\,R_{xi}\), considering, in Cartesian coordinates:
- \(t_i\): the translation vector \(\vec{o\,o_i}\);
- \(R_{xi}\): the orientation of the camera around the x-axis:

$$R_{xi} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(r_{xi}) & -\sin(r_{xi}) \\ 0 & \sin(r_{xi}) & \cos(r_{xi}) \end{bmatrix} \tag{3}$$

- \(R_{yi}\): the orientation of the camera around the y-axis, expressed as an axial rotation (quaternion theory). Considering the rotation axis \(\vec{n}_i(n_{xi}, n_{yi}, n_{zi})\), the transformation matrix is:

$$R_{yi} = \cos(r_{yi})\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + (1-\cos(r_{yi}))\begin{bmatrix} n_{xi}^2 & n_{xi}n_{yi} & n_{xi}n_{zi} \\ n_{xi}n_{yi} & n_{yi}^2 & n_{yi}n_{zi} \\ n_{xi}n_{zi} & n_{yi}n_{zi} & n_{zi}^2 \end{bmatrix} + \sin(r_{yi})\begin{bmatrix} 0 & -n_{zi} & n_{yi} \\ n_{zi} & 0 & -n_{xi} \\ -n_{yi} & n_{xi} & 0 \end{bmatrix} \tag{4}$$

Some distortions caused by the lens have to be considered in order to extend the pin-hole model. The distortion vector is \((\delta_{xi}, \delta_{yi})^T\); the distortions can be radial, image-decentring and prismatic. Equation (5) describes the detailed model R3D1P1, a non-linear polynomial distortion model for cameras [6]:

$$\delta_{xi} = x_{mi}\sum_{n=1}^{3} r_{ni}\,\rho_{mi}^{2n} + d_{1i}\,(\rho_{mi}^2 + 2x_{mi}^2) + 2\,d_{2i}\,x_{mi}\,y_{mi} + p_{1i}\,\rho_{mi}^2$$
$$\delta_{yi} = y_{mi}\sum_{n=1}^{3} r_{ni}\,\rho_{mi}^{2n} + d_{2i}\,(\rho_{mi}^2 + 2y_{mi}^2) + 2\,d_{1i}\,x_{mi}\,y_{mi} + p_{2i}\,\rho_{mi}^2 \tag{5}$$

where \(\rho_{mi}^2 = x_{mi}^2 + y_{mi}^2\) is the squared distance between \(c_i\) and \(m_i\); \(r_{ni}\) is the radial distortion factor (1 ≤ n ≤ 3); \(d_{mi}\) is the image-decentring distortion factor (1 ≤ m ≤ 2), relative to the \(x_{cai}\)-axis for m = 1 and to the \(y_{cai}\)-axis for m = 2; \(p_{mi}\) is the prismatic distortion factor (1 ≤ m ≤ 2), relative to the \(x_{cai}\)-axis for m = 1 and to the \(y_{cai}\)-axis for m = 2; and \((x_{mi}, y_{mi})\) are the coordinates of the vector \(\vec{c_i m_i}\).
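As a concrete illustration, the projection chain of equations (1)-(3) can be sketched as follows. This is a minimal sketch with made-up camera parameters, not the calibration of the actual ARTtrack system; for brevity the rotation is limited to the x-axis term of equation (3), and skew and lens distortion are set to zero.

```python
import math

def project(s, t, rx, f, u0, v0):
    """Project a 3D point s (CAVE frame) to the pixel coordinates (u, v) of a
    pin-hole camera with translation t, rotation rx about the x-axis, focus
    distance f and principal point (u0, v0). Illustrative parameters only."""
    x, y, z = s
    # Rotation about the x-axis (equation 3)
    yc = math.cos(rx) * y - math.sin(rx) * z
    zc = math.sin(rx) * y + math.cos(rx) * z
    # Translation into the camera frame (equation 2)
    xi, yi, zi = x + t[0], yc + t[1], zc + t[2]
    # Perspective division: the scale factor h of equation (1) equals zi
    return f * xi / zi + u0, f * yi / zi + v0

# A marker on the optical axis images exactly onto the principal point:
u, v = project((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0, f=800.0, u0=512.0, v0=512.0)
print(u, v)  # -> 512.0 512.0
```

Perturbing f, t or rx in this sketch shifts (u, v), which is precisely the mechanism the next section exploits to propagate calibration uncertainty through the tracking system.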
Equation (6) then describes the full model providing the coordinates of the image of a tracked marker, inside the CCD frame of camera i, as a function of s(x, y, z):

$$u_i = f\,\frac{r_{11i}\,x + r_{12i}\,y + r_{13i}\,z + t_{xi}}{r_{31i}\,x + r_{32i}\,y + r_{33i}\,z + t_{zi}} + \alpha\,\frac{r_{21i}\,x + r_{22i}\,y + r_{23i}\,z + t_{yi}}{r_{31i}\,x + r_{32i}\,y + r_{33i}\,z + t_{zi}} + u_{0i}$$
$$v_i = f\,\frac{r_{21i}\,x + r_{22i}\,y + r_{23i}\,z + t_{yi}}{r_{31i}\,x + r_{32i}\,y + r_{33i}\,z + t_{zi}} + v_{0i} \tag{6}$$

3. Tracking uncertainty computation

In the pin-hole model the intrinsic and extrinsic parameters are assumed to be perfectly defined, but this is not really the case. In fact, these factors are calibrated by preliminary experiments and are therefore only known within a given uncertainty interval. For this reason, the real images of a marker captured by the camera do not fit the positions calculated from the nominal parameters but are subject to some deviations; consequently, the approximate 3D coordinates of the tracked point computed by the multiple-camera system do not correspond to the real ones. Our aim is therefore to appraise the uncertainties of the 3D coordinates evaluated by the tracking system. For that purpose, the intrinsic and extrinsic parameters are randomly perturbed, assuming a uniform distribution in their uncertainty intervals, in order to consider the most critical case [7]. Each simulation is assumed to correspond to a real configuration of the tracking system. To this end, the coordinates of the images mi, as they would be captured by the eight cameras, are computed. The theoretical positions of mi are also derived from the nominal mean parameters; the difference between the two points is called the projection error (ei). As in a real experimental configuration, the coordinates of the tracked point s are supposed unknown.
After computing the projection error for each camera using the full model, the least squares method is then used to evaluate the location of s in Rw. The deviation dm(dx, dy, dz) between the real and evaluated locations of s is also computed. This calculation procedure is repeated 30000 times in a Monte Carlo simulation approach [8], which provides the variance-covariance matrix Vm of the deviations dm. The error zone, an ellipsoidal shape, is also created: from the eigenvalues and eigenvectors of the variance-covariance matrix, the dimensions and orientation of the error zone can be computed. A CAVE with eight tracking cameras counts up to one hundred and twenty-eight significant factors, sixteen per camera [5]. In order to improve the accuracy of the tracking system, and consequently the quality of the user's immersion in the CAVE, it is important to target adjustable factors which have a significant influence.

3.1. The hybrid method

As shown in Figure 4, the flowchart explains the first step of this hybrid method, which is to detect adjustable factors. It is assumed that the intrinsic factors cannot be modified and/or adjusted by the user: the cameras are off-the-shelf ones from ARTtrack, and the only factors we can work on are the extrinsic ones. Moreover, preliminary calculations have shown that the distortion factors of the cameras do not have a significant effect on the components of the variance-covariance matrix Vm. Therefore, only the effects of the positions and orientations of the cameras are studied. The cameras are fixed on the framework of the CAVE. The position of camera i is defined as the location of the lens optic centre, using Cartesian coordinates in Rw (camitx, camity, camitz); the orientation of the optic axis is defined by camirx and camiry. Depending on the location of the cameras inside the framework, not all parameters can be adjusted.
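The Monte Carlo loop described above can be sketched in miniature. The sensitivity model below is a stand-in (a fixed, hypothetical linear map from parameter perturbations to the deviation dm), since reproducing the full eight-camera least-squares triangulation would require the calibration data; only the statistical machinery (uniform perturbations, accumulation of deviations, variance-covariance matrix) follows the text.

```python
import random

def monte_carlo_covariance(n_runs=30000, seed=1):
    """Uniformly perturb three stand-in parameters, map the perturbations to
    a 3D deviation dm through a fixed (hypothetical) sensitivity matrix, and
    return the 3x3 variance-covariance matrix Vm of the deviations."""
    rng = random.Random(seed)
    sens = [[0.8, 0.1, 0.0],   # hypothetical sensitivities of dx, dy, dz
            [0.0, 0.6, 0.2],
            [0.1, 0.0, 0.9]]
    samples = []
    for _ in range(n_runs):
        p = [rng.uniform(-1, 1) for _ in range(3)]  # uniform law: worst case
        dm = [sum(sens[r][c] * p[c] for c in range(3)) for r in range(3)]
        samples.append(dm)
    mean = [sum(s[r] for s in samples) / n_runs for r in range(3)]
    return [[sum((s[r] - mean[r]) * (s[c] - mean[c]) for s in samples) / (n_runs - 1)
             for c in range(3)] for r in range(3)]

vm = monte_carlo_covariance()
print([[round(v, 3) for v in row] for row in vm])
```

The eigen-decomposition of the returned matrix would then give the dimensions and orientation of the ellipsoidal error zone, exactly as described for the real Vm.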
then, only twenty-six extrinsic factors are adjustable: ten translations (cam1ty, cam2tz, cam3tx, cam3tz, cam4tx, cam5tx, cam6tx, cam6tz, cam7tz, cam8ty) and the rotations camirx, camiry for 1 ≤ i ≤ 8. these position and orientation factors of the cameras are detailed in figure 5, where ci refers to the location of camera i. the processing for adjustable and non-adjustable factors is different. in the real configuration of the cave, the non-adjustable factors xi are fixed and calibrated during manufacturing. however, in order to compare the effects of the adjustable factors and the unknown influences of the internal parameters of the cameras, the latter values were generated randomly in the monte carlo simulation, leading to random perturbations of the calculated results. this approach permitted the application of statistical tests to discriminate the significant adjustable factors. the effects of the adjustable factors xi are studied by a doe methodology [10]. for this purpose, only the lower and higher values of the factors were considered and normalized to (-1, 1). a classical approach would lead to 2^26 simulations; in order to decrease the number of random generations, an optimized strategy is used instead, based on a hadamard design matrix of dimension 28 × 26 [11]. the doe analysis then requires only twenty-eight monte carlo simulations. two responses (8), functions of the factors xi, are provided by the covariance matrix after the monte carlo simulations: the average dimension of the error zone, y1, and the distortion of the error zone, y2, such that:

$$y_1 = m = \frac{1}{3}\,\operatorname{tr}(V_m), \qquad y_2 = \frac{2}{3}\sum_{i,j} d_{ij}^{\,2}, \quad \text{where } V_d = V_m - m\,I \tag{8}$$

with d_ij the components of vd.

figure 4. flowchart of the hybrid method (mcm, doe).
figure 5. translation and rotation adjustments available for each camera.

a screening process is used to detect the influential factors. the screening models are linear systems which link y1 and y2, the response vectors, to x, the experimental matrix.
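the screening design and the least-squares fit that follows it can be sketched as below. since an order-28 hadamard matrix requires a paley-type construction, this sketch instead cuts the 26 factor columns from an order-32 sylvester matrix, giving a 32-run two-level design rather than the paper's 28-run one:

```python
import numpy as np

def sylvester_hadamard(n):
    """hadamard matrix of order n (n a power of two), sylvester's construction."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(32)
X = H[:, 1:27].astype(float)   # drop the all-ones column, keep 26 factor columns (±1)

def screening_fit(X, y):
    """least-squares screening estimate b_hat = (X^T X)^-1 X^T y together with
    the coefficient variances var(b_hat) = diag((X^T X)^-1) * sigma_e^2,
    computed from the residuals e = y - X b_hat."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y
    e = y - X @ b_hat
    dof = X.shape[0] - X.shape[1]       # runs minus estimated coefficients
    var_b = np.diag(XtX_inv) * (e @ e / max(dof, 1))
    return b_hat, var_b
```

because the factor columns of a hadamard design are mutually orthogonal, each coefficient is estimated independently of the others, which is what makes so few runs sufficient for screening.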
vectors b1, b2, of components bi1, bi2, define the coefficients of the models, and e1, e2 are the error vectors:

$$y_1 = X\,b_1 + e_1, \qquad y_2 = X\,b_2 + e_2 \tag{9}$$

the error vectors e1, e2 describe two kinds of deviations: the best-fit approximation error and the random perturbations introduced by the non-adjustable factors. due to the large number of intrinsic and extrinsic parameters of the monte carlo simulation, the distribution of these errors becomes practically gaussian, even though the random generations used a uniform distribution. these systems have more equations than unknowns; the least squares method is therefore used to solve them. the estimates b̂1, b̂2 of the vectors b1, b2 are obtained as:

$$\hat{b}_1 = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y_1, \qquad \hat{b}_2 = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y_2 \tag{10}$$

then, statistical tools are used to create a statement of the factors concerning their influence on the responses. figure 4 summarizes the whole hybrid method used to highlight the most influential factors of the tracking system in the cave.

3.2. statistical tools

two tools are used to analyse and state the influence of the factors: the graph of effects and the bayesian analysis. regarding the graph of effects, it is assumed that the bi1, bi2 coefficients follow a student distribution. the degree of freedom is one, because twenty-seven coefficients bi and twenty-eight different equations, given by the hadamard matrix, are considered. with a 5 % risk, the student table gives t0.025 = 12.7, and 95 % of the bij values belong to the confidence range:

$$\left[-t_{0.025}\sqrt{\operatorname{var}(\hat{b}_{ij})}\,;\; +t_{0.025}\sqrt{\operatorname{var}(\hat{b}_{ij})}\right] \tag{11}$$

then, in order to obtain the confidence range of the bi1, bi2 coefficients, the error matrix e = (e1, e2) of the two responses y1 and y2 is computed as follows:

$$e_1 = y_1 - X\hat{b}_1, \qquad e_2 = y_2 - X\hat{b}_2 \tag{12}$$

var(b̂1) and var(b̂2) are then computed from the residual variances.
$$\operatorname{var}(\hat{b}_1) = (X^{\mathrm T} X)^{-1}\,\sigma_{e_1}^{2}, \qquad \operatorname{var}(\hat{b}_2) = (X^{\mathrm T} X)^{-1}\,\sigma_{e_2}^{2} \tag{13}$$

regarding the bayesian analysis, the principle of the test is to compute, a posteriori, the probability that each factor is active [12][13]. two parameters are considered: the a priori probability that a factor is active (p), and the ratio between the variances of the active factors and the variances of the non-active factors (q). the aim is then to calculate the a posteriori probabilities for any combination (p, q) in the ranges 0.1 ≤ p ≤ 0.4 and 5 ≤ q ≤ 20.

table 1. domain of the factors concerning the position of the cameras, given in millimetres.
table 2. domain of the factors concerning the orientation of the cameras, 1 ≤ i ≤ 8, i integer, given in degrees.
figure 6. bayesian analysis of y1.

3.3. adjustable factor settings

the location of the marker studied in the monte carlo simulations is (0, 1200, 0); all coordinates are given in millimetres. the model was programmed in visual basic and, depending on the characteristics of the computers used, a simulation may take four to six hours. the domain of every factor has to be chosen. as said previously, the location of the cameras influences the domains of the position factors, whereas the domains of the orientation factors remain the same. the domains per adjustable factor are shown in table 1 and table 2; the unit used for position is the millimetre and the one used for orientation is the degree. these variations are given as a function of the theoretical location and orientation of the cameras inside the reference frame of the cave, as shown in figure 5.

4. result analysis

4.1. analysis for y1

the two statistical analyses are strongly similar and lead to the same findings: the positions of the cameras inside the cave do not influence the dimension of the error zone; only the orientations of the cameras are adjustable factors influencing the dimension of the error zone.
the symmetry of the cave is respected; the influential factors over y1 are the rotations cam1ry, cam2rx, cam3ry, cam4rx, cam5rx, cam6ry, cam7rx and cam8ry. in the case of cam4ry and cam5ry, a reflection has to be made, as the geometrical symmetry of the cave is not respected. as per figure 6 and figure 7, the statistical analysis shows that the orientations of each camera might have an influence on the dimension of the error zone.

figure 7. graph of effects of y1.
figure 8. bayesian analysis of y2.

4.2. analysis for y2

as for the response above, the two analyses lead to the same findings: the positions of the cameras inside the cave do not influence the distortion of the error zone; only the orientations of the cameras are adjustable factors influencing the distortion of the error zone. the symmetry of the cave is respected; the influential factors over y2 are the rotations cam1rx and cam8rx. as per figure 8 and figure 9, the statistical analysis shows that the orientations of cameras 2, 5, 6 and 7 might have a low influence over the distortion of the error zone. cameras 1 and 8 strongly influence the distortion of the error zone in comparison to the others.

5. conclusion

this study focused on the tracking system of a cave automatic virtual environment. in order to improve the accuracy of any position captured by the system, a complete model of the cameras has been developed. this approach permitted the simulation of the tracking device of the cave. using a hybrid method based on a doe and monte carlo simulations, the effects of the positions and the orientations of the cameras over the distortion and average dimension of the error zone were studied.
the final result revealed that the only adjustable factors which have to be considered for a strong influence on the dimension and the distortion of the error zone are the orientations of the cameras. the next step of this research will be to analyse the possible interactions between the key factors and to create the response surfaces.

acknowledgement

this work was realized in collaboration with the centre de réalité virtuelle de marseille (crvm) in france. we thank all members of the crvm team, especially its director prof. daniel mestre and the engineers pierre mallet, vincent perrot and jean-marie pergandi, for their help and great support. this research work was also supported by an strg grant from the universiti kuala lumpur, unikl.

references

[1] carolina cruz-neira, daniel j. sandin, thomas a. defanti, "surround-screen projection-based virtual reality: the design and implementation of the cave", proceedings of the 20th annual conference on computer graphics and interactive techniques, acm, 1993.
[2] sylvain jubertie, "modèles et outils pour le déploiement d'applications de réalité virtuelle sur les architectures distribuées hétérogènes", thesis, université d'orléans, france, december 2007.
[3] thomas a. defanti, gregory dawe, daniel j. sandin, jurgen p. schulze, peter otto, javier girado, falko kuester, larry smarr, ramesh rao, "the starcave, a third-generation cave and virtual reality optiportal", future generation computer systems, volume 25, issue 2, february 2009, pp. 169-178.
[4] f. ezedine, w. m. wan muhamad, j.m. linares, "uncertainty calculation of a multicamera tracking system in a cave", advanced mathematical and computational tools in metrology and testing, vol. 9 (f. pavese, m. bär, j-r. filtz, a. b. forbes, l. pendrill, h. shirono, eds.), series on advances in mathematics for applied sciences vol. 84, world scientific, singapore, pp. 151-158, 2012.
[5] a. g.
buaes, "a low-cost one-camera tracking system for indoor wide-area augmented and virtual reality environments", post-graduation program in electrical engineering, 2006.
[6] c. ricolfe-viala, a.j. sanchez-salmeron, "robust metric calibration of non-linear camera lens distortion", pattern recognition 43, 2010, pp. 1688-1699.
[7] m. matsumoto, t. nishimura, "mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator", acm transactions on modeling and computer simulation, vol. 8, no. 1, january 1998, pp. 3-30.
[8] m. douilly, n. anwer, p. bourdet, n. chevassus, p. le vacon, "uncertainty evaluation for pose estimation by multiple camera measurement system", advanced mathematical and computational tools in metrology and testing, series on advances in mathematics for applied sciences, vol. 78, pp. 73-79.

figure 9. graph of effects of y2.

[9] j.m. linares, j.m. sprauel, p. bourdet, "uncertainty of reference frames characterized by real time optical measurements: application to computer assisted orthopaedic surgery", cirp annals - manufacturing technology, volume 58, issue 1, 2009, pp. 447-450.
[10] j. chaves-jacob, j.m. linares, j.m. sprauel, "using statistical confidence boundary of a d.o.e. response surface to estimate optimal factors", advanced mathematical and computational tools in metrology and testing, vol. 9 (f. pavese, m. bär, j-r. filtz, a. b. forbes, l. pendrill, h. shirono, eds.), series on advances in mathematics for applied sciences vol. 84, world scientific, singapore, pp. 74-81, 2012.
[11] c. koukouvinos, s. stylianou, "on skew-hadamard matrices", discrete mathematics, volume 308, issue 13, july 2008, pp. 2723-2731.
[12] r. kacker, r. kessel, k.-d.
sommer, "only non-informative bayesian prior distributions agree with the gum type a evaluations of input quantities", advanced mathematical and computational tools in metrology and testing, vol. 9 (f. pavese, m. bär, j-r. filtz, a. b. forbes, l. pendrill, h. shirono, eds.), series on advances in mathematics for applied sciences vol. 84, world scientific, singapore, pp. 216-223, 2012.
[13] g.a. kyriazis, "bayesian inference in waveform metrology", advanced mathematical and computational tools in metrology and testing, vol. 9 (f. pavese, m. bär, j-r. filtz, a. b. forbes, l. pendrill, h. shirono, eds.), series on advances in mathematics for applied sciences vol. 84, world scientific, singapore, pp. 232-243, 2012.

a principal component analysis to detect cancer cell line aggressiveness

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-7
acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1

livio d'alvia1, serena carraro1, barbara peruzzi2, enrica urciuoli2, ludovica apa1, emanuele rizzuto1
1 department of mechanical and aerospace engineering, sapienza university of rome, 00184 rome, italy
2 bone physiopathology research unit, bambino gesù children's hospital, irccs, 00146 rome, italy

section: research paper

keywords: measurement of dielectric properties; biosensor; non-invasive measurements; cancer cell lines; cancer aggressiveness; osteosarcoma; breast cancer; pca; principal component

citation: livio d'alvia, serena carraro, barbara peruzzi, enrica urciuoli, ludovica apa, emanuele rizzuto, a principal component analysis to detect cancer cell line aggressiveness, acta imeko, vol. 12, no.
2, article 22, june 2023, identifier: imeko-acta-12 (2023)-02-22

section editors: alfredo cigada, politecnico di milano, italy; andrea scorza, università degli studi roma tre, italy

received october 12, 2022; in final form february 27, 2023; published june 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by "progetti di ricerca medi 2021", sapienza university of rome

corresponding author: livio d'alvia, e-mail: livio.dalvia@uniroma1.it

1. introduction

one defining feature of malignant tumors is the quick creation of abnormal cells that grow beyond their usual boundaries. moreover, these cells have an uncontrollable reproduction and division rate, up to constituting cancerous tissues, since they do not respond to the standard signaling system of the body [1]-[3]. as stated by the world health organization, in 2020 cancer was the primary cause of approximately 10 million deaths worldwide: by way of example, 2.26 million cases and 685 thousand deaths from breast cancer, and 2.21 million cases and 1.8 million deaths from lung cancer, without forgetting the hundreds of thousands of children who develop malignant tumors each year [4]. early diagnosis and screening can therefore contribute to reducing mortality and aid in more effective treatment. however, there is a lack of research on cancer behavior due to the various and complex molecular pathways involved in the genesis of tumors [5]. in most cases, the tumor degree, established by the cancer cells' characteristics throughout the growth of the tumor lesions, is used to make the diagnosis. a series of cancer screening methods, such as biopsy, computed axial tomography (cat), or scintigraphy, do exist, but they are costly and intrusive.
biosensors in the microwave field may serve as a complementary or replacement method for early-stage non-invasive prognosis of a variety of illnesses, including malignancies. in this context, the measurement of the dielectric properties of biological tissues has achieved significant benefits in biomedicine and healthcare due to its high sensitivity, versatility, and reduced invasiveness [6]-[9]. indeed, this technology has consolidated its use in various fields. for example, gugliandolo et al. [10] developed a microwave microstrip resonator to measure water vapor for industrial pipeline applications. likewise, majcher et al. [11] investigated the possibility of using a dagger-shaped probe to measure soil moisture in agrifood applications. ultimately, d'alvia et al. [12]-[14] and cataldo et al. [15] proposed several applications in the cultural heritage field.

abstract

in this paper, we propose the use of principal component analysis (pca) as a new post-processing method for the detection of breast and bone cancer cell lines cultured in vitro using a microwave biosensor. mda-mb-231 and mcf-7 breast cancer cell lines and saos-2 and 143b osteosarcoma cell lines were characterized using a circular patch resonator in the 1 mhz - 3 ghz frequency range. the return loss of each cancer cell line was analyzed, and the differences among them were determined through principal component analysis, according to a protocol previously proposed mainly for electrocardiogram processing and x-ray photoelectron spectroscopy. our results showed that the four cancer cell lines analyzed exhibited peculiar dielectric properties when compared to each other and to the growth medium, confirming that pca could be employed as an alternative methodology to analyze the microwave characterization of cancer cell lines which, in turn, may be deeply exploited as a tool for the detection of cancer cells in healthy tissues.
on these bases, microwave-based sensors are now gaining more and more interest in the biomedical field. as highlighted in the literature [16], microwave probes offer the possibility of analyzing living tissue properties through a non-invasive measurement of scattering parameters or complex permittivity [17]-[19], and of identifying eventual pathological conditions as a variation in the dielectric properties. concerning cancer cell and tissue characterization, maenhout et al. [20] evaluated the dielectric properties (dielectric loss, dielectric constant, and conductivity) of a healthy non-tumorigenic cell line, namely mcf-10a, and four breast cancer cell lines (hs578t, mda-mb-231, mcf7, and t47d) using an open-ended coaxial probe in the 200 mhz to 13.6 ghz range. again, zhang et al. [21] proposed a microwave biosensor capable of identifying the grade of colon cancer cell aggressiveness in the 4-12 ghz range. finally, in previous work [22], we proposed a circular patch resonator for the measurement of cancer cell line aggressiveness (saos-2, 143b, mcf7, and mda-mb-231) through the use of a lorentzian fit model for the return loss signal processing and a weighted manova (multivariate analysis of variance) to investigate the differences in the three main parameters of interest, namely return loss, resonance frequency and full width at half maximum (fwhm). this paper proposes a novel methodology to analyze a microwave sensor's return loss, based on an optimized savitzky-golay filter, generally adopted for electrocardiogram processing or x-ray photoelectron spectroscopy [23], [24], and on principal component analysis (pca), to extract meaningful information from the data and present a final classification based on possible similarities between the analyzed materials.

2. materials and methods

2.1.
cell culture and experimental procedure

as previously described [22], we had the opportunity to test two pediatric human osteosarcoma cell lines, saos-2 and 143b [25]-[28], and two human breast adenocarcinoma cell lines, mcf7 and mda-mb-231 [29], [30], for their dielectric response. in particular, saos-2 and mcf7 are a low-aggressive osteoblast-like osteosarcoma and a low-aggressive breast cancer cell line, while 143b and mda-mb-231 are a high-aggressive lung-tropic metastatic osteosarcoma and a high-aggressive bone-tropic breast cancer cell line, respectively. cells were seeded in a standard 60 mm petri dish at an average density of 8 × 10^5 cells/plate and placed in an incubator at 37 °c with 5 % co2 for 24 hours to allow the cells to form a homogeneous confluent monolayer. during the measurements, all cell types were maintained in 1.5 ml of dulbecco's modified eagle medium (dmem) culture medium [31], and eight different dishes were prepared for each cell line. moreover, eight samples of 1.5 ml of pure dmem were prepared as controls. a circular patch resonator with a radius of 20.00 mm [22] and a subminiature version a (sma) connector placed on the conductive edge was employed to determine the dielectric properties of the cell line samples. the key component of the measuring setup is the low-cost portable vector network analyzer minivna-tiny [32], used for measuring the return loss |s11(f)| in the operating frequency range of 1.9 - 2.6 ghz. the 700 mhz frequency span was previously evaluated to maximize the resolution of the acquired data (0.5 mhz) [22]. as a result, the return loss |s11(f)| was acquired for the eight samples of the five different "materials under test", i.e. the different media and cell lines.

2.2. data elaboration process

principal component analysis (pca) is a multivariate analysis that permits identifying and extracting meaningful information from the data and presenting a final classification based on a multiparametric similarity test and variable reduction [33].
figure 1 shows a scheme of the applied pre-processing algorithm. all data processing was performed with the originlab 2017 software. in particular, pca is a useful tool to reduce the dimension of a dataset, maintaining only those variables with the highest variance. as a result, all the vectors used to represent the acquired return loss are transposed into a new space, with a dimension equal to the number of significant components determined by the pca, and the acquired data may be represented as:

$$X_{(i,j)} = S_{(i,k)}\, L_{(k,j)}^{\mathrm T} + E_{(i,j)} = \hat{X}_{(i,j)} + E_{(i,j)} \tag{1}$$

where x is the original data matrix containing the return loss data, l is the loading matrix, s is the score matrix based on the eigenvalues derived from the decomposition of the x matrix, and e is the error matrix, which contains the variance not explained by the pca model. the matrix dimension i is the number of acquired samples, j is the signal length, and k is the number of significant components.

figure 1. data processing workflow involved in the pca.

before performing the pca on the acquired return loss data, we applied a pre-processing algorithm, as proposed by es sebar et al. [34] for raman spectroscopy applications:
1) baseline removal through an interactive endpoint weighted (epw) algorithm for each column vector of x;
2) application of a savitzky-golay filter (sgf) using a window length of 14 points and a second-order polynomial fit, since the sgf flattens peaks less than a moving-average smoothing with the same window width [35];
3) data normalization by subtracting from each column of x its average value and scaling by the standard deviation [36]. for the i-th column of x, equation (2) holds:

$$X_i^{*} = \frac{X_{(i,\mathrm{raw})} - \overline{X}_{(i,\mathrm{raw})}}{\sigma\!\left(X_{(i,\mathrm{raw})} - \overline{X}_{(i,\mathrm{raw})}\right)} = \frac{X_{(i,c)}}{\sigma\!\left(X_{(i,c)}\right)} \tag{2}$$

with x* the normalized matrix, x(i,c) the i-th centered vector, and σ the standard deviation of the x(i,c) vector. this normalization is also known as the standard normal variate (snv) transformation.
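the pre-processing chain (baseline removal, savitzky-golay smoothing, snv) and the pca decomposition can be sketched as follows. this is an illustrative sketch, not the paper's implementation: the interactive epw baseline is replaced by a simple endpoint straight line, and an odd 15-point window stands in for the 14-point one so the sliding window has a centre sample:

```python
import numpy as np

def endpoint_baseline(x):
    """simplified endpoint baseline removal: subtract the straight line joining
    the two signal tails so that they lie at zero (stand-in for the epw step)."""
    n = len(x)
    line = x[0] + (x[-1] - x[0]) * np.arange(n) / (n - 1)
    return x - line

def savgol(x, window=15, order=2):
    """savitzky-golay smoothing: fit an order-2 polynomial in a sliding window
    and keep its centre value (edges handled by constant padding)."""
    half = window // 2
    xp = np.pad(x, half, mode='edge')
    t = np.arange(window) - half
    return np.array([np.polyval(np.polyfit(t, xp[i:i + window], order), 0)
                     for i in range(len(x))])

def snv(x):
    """standard normal variate: centre the signal and scale it by its standard
    deviation."""
    return (x - x.mean()) / x.std()

def pca(X, k):
    """decomposition X = S L^T + E via svd, keeping the k components of largest
    variance; here each row of X is one pre-processed return-loss signal."""
    Xc = X - X.mean(axis=0)
    U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
    S = U[:, :k] * sv[:k]                 # score matrix (i x k)
    L = Vt[:k].T                          # loading matrix (j x k)
    E = Xc - S @ L.T                      # unexplained error matrix
    explained = sv[:k] ** 2 / np.sum(sv ** 2)
    return S, L, E, explained
```

note that a second-order savitzky-golay filter reproduces any locally quadratic signal exactly, which is why it preserves peak heights better than a moving average of the same width.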
the pca itself was then performed by applying equation (2) in equation (1):

$$X^{*}_{(i,j)} = \hat{X}^{*}_{(i,j)} + E_{(i,j)} \tag{3}$$

the discriminant analysis based on a cross-validation test was performed as the final analysis, using as many k components as those with an eigenvalue greater than or equal to 3 [37].

3. results and discussions

figure 2 presents an example of the data processing for dmem, reporting the initial raw data (figure 2a) and the three signal-processing steps (figure 2b, c and d): baseline removal, filtering, and normalization, respectively. in detail, the epw algorithm translates and tilts the signal so that the tails lie at zero, while the sg filter evaluates a polynomial regression around each point, creating a new smoothed value for each data point. finally, the snv transformation permits centering and scaling the data without altering their overall interpretation: indeed, if two variables were equally correlated before pre-processing, they would still be strongly correlated after it. therefore, for each of the forty acquired signals, the background is removed, the spectrum is filtered to improve the signal-to-noise ratio, the normalization is completed, and the pca is performed. the cumulative variance trend is shown in figure 3. it is possible to observe that the first three components represent an overall variance of about 92.3 %, given by contributions equal to 76.6 %, 11.9 %, and 3.8 % for components 1, 2, and 3, respectively. according to the literature [37], this can be considered a satisfactory value, as a balance between cumulative variance and complexity of the system to be analyzed in the subsequent analysis, also taking into account that the fourth component contributes only 2.5 % of the total variance, while the remaining thirty-six components account for 5.2 % of the whole.

figure 2.
a) example of raw return loss for the dmem and the computed baseline; b) return loss for the dmem after baseline removal through the epw algorithm; c) return loss for the dmem filtered with sg; d) return loss for the dmem normalized with the snv transformation, i.e. the final output.

figure 4 shows the result of the pca, reporting the scores of the three principal components as pairwise combinations. as can be seen in figures 4a and 4b, the measurements group together into two macro-clusters: one containing the pure medium, highlighted by a dotted rectangle, and a second containing all the tested cell types, highlighted by a dashed rectangle. nonetheless, in both figures it is also possible to distinguish five sub-clusters, highlighted by the ellipses enclosing similar spectra with a 95 % confidence level. interesting results can be obtained by focusing on the inclination of these clusters. indeed, the pure medium revealed a different inclination from those obtained when testing all the cell lines. on the other hand, the two less aggressive cell lines (saos-2 and mcf7) have the same inclination as each other, as do the two aggressive cell lines (mda-mb-231 and 143b). more in detail, the pure medium cluster and the cluster representing the highly aggressive cell lines (143b and mda-mb-231) have the same inclinations (100° and 90°, respectively) both when focusing on pc2 vs. pc1 and on pc3 vs. pc1, while the inclination of the cluster representing the low-aggressive cell lines (saos-2 and mcf7) is 105° when representing pc2 vs. pc1 and 84° when computing pc3 vs. pc1. as a matter of fact, the inclination of the 95 % confidence interval cluster may be a parameter that can give helpful information on tumor aggressiveness. figure 4c shows that the cell lines and the pure dmem did not show proper clusterization. however, the absence of clusters in the pc2 vs.
pc3 plot can be explained by considering the low variance captured by the third component (3.9 %). indeed, this component plays a crucial role in the model in linear combination with the two main components, allowing for a cumulative variance higher than 90.0 % (as discussed above) and thus improving the fitting of the essential peaks found in the two main components.

figure 3. cumulative percentage variance for the first seven components, as obtained from the pca, with the third component highlighted in green.
figure 4. cumulative score plots of the first three components: a) pc1-pc2, b) pc1-pc3, and c) pc2-pc3. the percent variance obtained for each component is in the axis legend. the colored ellipses highlight the five clusters representing the 95 % confidence interval. the dotted and dashed rectangles highlight the medium and "cell" clusters.

subsequently, we evaluated the cross-validation of the pca loadings concerning the first three components; the results are reported in table 1. this test highlighted that pure dmem was detected with a prediction accuracy of 100.0 %, saos-2 and mcf7 with a prediction accuracy of 87.5 %, and mda-mb-231 with a prediction accuracy of 75.0 %. finally, the 143b cells have a prediction accuracy of 50.0 %. it is worth noting that these discretized prediction-accuracy results are strictly related to the number of tested samples: indeed, when testing 8 samples, every prediction accounts for 12.5 % of accuracy. figure 5 reports the cumulative results, allowing a better interpretation of the pca predictions. the figure clearly shows that all 8 of the tested dmem samples have been appropriately predicted, while, for example, among the 8 tested saos-2 samples (the red bar), seven have been appropriately recognized, while 1 was interpreted as mda-mb-231 cells.
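the per-class accuracies and the overall error quoted in this section follow directly from the cross-validation confusion matrix of table 1:

```python
import numpy as np

# confusion matrix from table 1 (rows: true group, columns: predicted group),
# group order: dmem, saos-2, mda-mb-231, mcf-7, 143b
C = np.array([[8, 0, 0, 0, 0],
              [0, 7, 1, 0, 0],
              [0, 0, 6, 0, 2],
              [0, 0, 0, 7, 1],
              [0, 0, 2, 2, 4]])

per_class = np.diag(C) / C.sum(axis=1)    # 1.0, 0.875, 0.75, 0.875, 0.5
error_rate = 1.0 - np.trace(C) / C.sum()  # overall prediction error: 20.0 %
```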
similarly, among the 8 tested 143b samples, 4 have been interpreted as 143b, 2 as mcf7, and 2 as mda-mb-231; of the 8 tested mcf-7 samples, 7 have been appropriately recognized and 1 as 143b; and among the 8 tested mda-mb-231 samples, 6 have been appropriately interpreted and 2 as 143b cells. interestingly, the final average prediction error for the entire dataset is 20.0 %. moreover, these results are in high agreement with those reported in [22], in which the different cell lines were studied with reference to the main lorentzian fit parameters (return loss, resonance frequency, and fwhm) through a manova test. in particular, in [22] we reported a statistically significant difference between dmem and all tested cell lines (p < 0.0001), and in this work we obtained a cross-validation accuracy of 100.0 %. similarly, the manova reported a significant (p < 0.05) difference between 143b and mcf7 and no significant difference between 143b and mda-mb-231, and the pca prediction accuracy was 12.5 % and 25.0 %, respectively. therefore, the procedure reported in this work represents an alternative methodology to distinguish tumor aggressiveness without using any fitting procedure, hence based only on the raw data, whose limit at present consists of the limited number of measurements for each group.

4. conclusions

this paper proposes an alternative methodology to analyze the return loss of tumor cell lines. the method allows discriminating between groups of different tumor cells by analyzing the appropriately filtered and normalized acquired signal, leading to results in agreement with those obtained with traditional methods, such as the lorentzian fit. this methodology is based on a pre-processing algorithm (background removal associated with a savitzky-golay filter and a normalization procedure concerning the signal variation) and a subsequent principal component analysis.
results showed a good average accuracy of the prediction methodology, confirming the feasibility of pca also for this kind of signal, whereas it has consolidated applications for processing more complex and multi-peak signals. as a future development, we expect to realize a "split-ring resonator" sensor, inducing more peaks in the instrument's uniformity band, to better evaluate the reliability of the methodology proposed here.

figure 5. prediction rate for each group.

table 1. cross-validation summary for training data and error rate.

predicted group    dmem     saos-2   mda-mb-231   mcf-7    143b     total
dmem                  8        0         0           0        0         8
                 100.0 %    0.0 %     0.0 %       0.0 %    0.0 %   100.0 %
saos-2                0        7         1           0        0         8
                   0.0 %   87.5 %    12.5 %       0.0 %    0.0 %   100.0 %
mda-mb-231            0        0         6           0        2         8
                   0.0 %    0.0 %    75.0 %       0.0 %   25.0 %   100.0 %
mcf-7                 0        0         0           7        1         8
                   0.0 %    0.0 %     0.0 %      87.5 %   12.5 %   100.0 %
143b                  0        0         2           2        4         8
                   0.0 %    0.0 %    25.0 %      25.0 %   50.0 %   100.0 %
total                 8        7         9           9        7        40
                  20.0 %   17.5 %    22.5 %      22.5 %   17.5 %   100.0 %

error rate         dmem     saos-2   mda-mb-231   mcf-7    143b     total
prior               0.2      0.2       0.2         0.2      0.2
rate              0.0 %   12.5 %     25.0 %      12.5 %   50.0 %    20.0 %

references

[1] r. reilly, breast cancer, xpharm: the comprehensive pharmacology reference (2007), pp. 1-9. doi: 10.1016/b978-008055232-3.60809-8
[2] a. n. bishop, r. jewell, skin cancer, reference module in biomedical sciences, 2014. doi: 10.1016/b978-0-12-801238-3.05621-x
[3] e. f. mccarthy, bone tumors, rheumatology: sixth edition, vol. 2-2, 2015, pp. 1734-1743. doi: 10.1016/b978-0-323-09138-1.00212-6
[4] f. bray, j. ferlay, i. soerjomataram, r. l. siegel, l. a. torre, a. jemal, global cancer statistics 2018: globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries, ca: a cancer journal for clinicians, nov. 2018.
[5] hongdan he, xiaoni shao, yanan li, ribu gihu, haochen xie, junfu zhou, hengxiu yan, targeting signaling pathway networks in several malignant tumors: progresses and challenges, front pharmacol 12 (2021), art. no.
1373. doi: 10.3389/fphar.2021.675675
[6] S. R. Mohd Shah, N. B. Asan, J. Velander, J. Ebrahimizadeh, M. D. Perez, V. Mattsson, T. Blokhuis, R. Augustine, Analysis of thickness variation in biological tissues using microwave sensors for health monitoring applications, IEEE Access 7 (2019), pp. 156033–156043. doi: 10.1109/access.2019.2949179
[7] F. Deshours, G. Alquié, H. Kokabi, K. Rachedi, M. Tlili, S. Hardinata, F. Koskas, Improved microwave biosensor for noninvasive dielectric characterization of biological tissues, Microelectronics J 88 (2019), pp. 137–144. doi: 10.1016/j.mejo.2018.01.027
[8] G. Gugliandolo, G. Vermiglio, G. Cutroneo, G. Campobello, G. Crupi, N. Donato, Inkjet-printed capacitive coupled ring resonators aimed at the characterization of cell cultures, 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Messina, Italy, 22-24 June 2022, pp. 1–5. doi: 10.1109/memea54994.2022.9856582
[9] A. Martellosio, M. Pasian, M. Bozzi, L. Perregrini, A. Mazzanti, F. Svelto, P. E. Summers, G. Renne, M. Bellomi, 0.5–50 GHz dielectric characterisation of breast cancer tissues, Electron Lett 51(13) (2015), pp. 974–975. doi: 10.1049/el.2015.1199
[10] G. Gugliandolo, D. Aloisio, G. Campobello, G. Crupi, N. Donato, On the design and characterisation of a microwave microstrip resonator for gas sensing applications, Acta IMEKO 10(2) (2021), pp. 54–61. doi: 10.21014/acta_imeko.v10i2.1039
[11] J. Majcher, M. Kafarski, A. Wilczek, A. Szypłowska, A. Lewandowski, A. Woszczyk, W. Skierucha, Application of a dagger probe for soil dielectric permittivity measurement by TDR, Measurement 178 (2021), art. no. 109368. doi: 10.1016/j.measurement.2021.109368
[12] L. D'Alvia, E. Piuzzi, A. Cataldo, Z. Del Prete, Permittivity-based water content calibration measurement in wood-based cultural heritage: a preliminary study, Sensors 22(6) (2022), art. no. 2148. doi: 10.3390/s22062148
[13] L. D'Alvia, E. Palermo, Z. Del Prete, E. Pittella, S. Pisa, E.
Piuzzi, A comparative evaluation of patch resonators layouts for moisture measurement in historic masonry units, in 2019 IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage, 2019. Online [Accessed 22 April 2023] https://www.imeko.org/publications/tc4-archaeo2019/imeko-tc4-metroarchaeo-2019-28.pdf
[14] L. D'Alvia, E. Pittella, E. Rizzuto, E. Piuzzi, Z. Del Prete, A portable low-cost reflectometric setup for moisture measurement in cultural heritage masonry unit, Measurement 189 (2022), art. no. 110438. doi: 10.1016/j.measurement.2021.110438
[15] A. Cataldo, E. De Benedetto, R. Schiavoni, A. Tedesco, A. Masciullo, G. Cannazza, Microwave reflectometric systems and monitoring apparatus for diffused-sensing applications, Acta IMEKO 10(3) (2021), pp. 202–208. doi: 10.21014/acta_imeko.v10i3.1143
[16] M. Hussein, F. Awwad, D. Jithin, H. El Hasasna, K. Athamneh, R. Iratni, Breast cancer cells exhibits specific dielectric signature in vitro using the open-ended coaxial probe technique from 200 MHz to 13.6 GHz, Sci Rep 9(1) (2019), 8 pp. doi: 10.1038/s41598-019-41124-1
[17] C. Gabriel, S. Gabriel, E. Corthout, The dielectric properties of biological tissues: I. Literature survey, Phys Med Biol 41(11) (1996), p. 2231. doi: 10.1088/0031-9155/41/11/001
[18] S. Gabriel, R. W. Lau, C. Gabriel, The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz, Phys Med Biol 41(11) (1996), pp. 2251–2269. doi: 10.1088/0031-9155/41/11/002
[19] S. Gabriel, R. W. Lau, C. Gabriel, The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues, Phys Med Biol 41(11) (1996), art. no. 2271. doi: 10.1088/0031-9155/41/11/003
[20] G. Maenhout, T. Markovic, B. Nauwelaers, Flexible, segmented tubular design with embedded complementary split-ring resonators for tissue identification, IEEE Sens J 21(14) (2021), pp. 16024–16032.
doi: 10.1109/jsen.2021.3075570
[21] Ling Yan Zhang, C. B. M. Du Puch, A. Lacroix, C. Dalmay, A. Pothier, C. Lautrette, S. Battu, F. Lalloué, M.-O. Jauberteau, P. Blondy, Microwave biosensors for identifying cancer cell aggressiveness grade, in IEEE MTT-S International Microwave Symposium Digest, 2012. doi: 10.1109/mwsym.2012.6259539
[22] L. D'Alvia, S. Carraro, B. Peruzzi, E. Urciuoli, L. Palla, Z. Del Prete, E. Rizzuto, A novel microwave resonant sensor for measuring cancer cell line aggressiveness, Sensors 22(12) (2022), art. no. 4383. doi: 10.3390/s22124383
[23] L. D. Sharma, R. K. Sunkaria, A robust QRS detection using novel pre-processing techniques and kurtosis based enhanced efficiency, Measurement 87 (2016), pp. 194–204. doi: 10.1016/j.measurement.2016.03.015
[24] B. Moeini, M. R. Linford, N. Fairley, A. Barlow, P. Cumpson, D. Morgan, V. Fernandez, J. Baltrusaitis, Definition of a new (Doniach-Sunjic-Shirley) peak shape for fitting asymmetric signals applied to reduced graphene oxide/graphene oxide XPS spectra, Surface and Interface Analysis 54(1) (2022), pp. 67–77. doi: 10.1002/sia.7021
[25] M. Longo, B. Peruzzi, D. Fortunati, V. De Luca, S. Denger, G. Caselli, S. Migliaccio, A. Teti, Modulation of human estrogen receptor alpha F promoter by a protein kinase C/c-Src-dependent mechanism in osteoblast-like cells, J Mol Endocrinol 37(3) (2006), pp. 489–502. doi: 10.1677/jme.1.02055
[26] Ling Ren, A. Mendoza, J. Zhu, J. W. Briggs, Ch. Halsey, E. S. Hong, S. S. Burkett, J. J. Morrow, M. M. Lizardo, T. Osborne, S. Q. Li, H. H. Luu, P. Meltzer, Ch. Khanna, Characterization of the metastatic phenotype of a panel of established osteosarcoma cells, Oncotarget 6(30) (2015), pp. 29469–29481. doi: 10.18632/oncotarget.5177
[27] E. Urciuoli, S. Petrini, V. D'Oria, M. Leopizzi, C. Della Rocca, B. Peruzzi, Nuclear lamins and emerin are differentially expressed in osteosarcoma cells and scale with tumor aggressiveness, Cancers (Basel) 12(2) (2020).
doi: 10.3390/cancers12020443
[28] E. Urciuoli, V. D'Oria, S. Petrini, B. Peruzzi, Lamin A/C mechanosensor drives tumor cell aggressiveness and adhesion on substrates with tissue-specific elasticity, Front Cell Dev Biol 9 (2021). doi: 10.3389/fcell.2021.712377
[29] D. Trivanović, S. Nikolić, J. Krstić, A. Jauković, S. Mojsilović, V. Ilić, I. Okić-Djordjević, J. F. Santibanez, G. Jovčić, D. Bugarski, Characteristics of human adipose mesenchymal stem cells isolated from healthy and cancer affected people and their interactions with human breast cancer cell line MCF-7 in vitro, Cell Biol Int 38(2) (2014), pp. 254–265.
doi: 10.1002/cbin.10198
[30] A. J. Minn, Y. Kang, I. Serganova, G. P. Gupta, D. D. Giri, M. Doubrovin, V. Ponomarev, W. L. Gerald, R. Blasberg, J. Massagué, Distinct organ-specific metastatic potential of individual breast cancer cells and primary tumors, J Clin Invest 115(1) (2005), pp. 44–55. doi: 10.1172/jci22320
[31] ThermoFisher, DMEM description. Online [Accessed 22 April 2023] https://www.thermofisher.com/it/en/home/life-science/cell-culture/mammalian-cell-culture/classical-media/dmem.html?sid=fr-dmem-main
[32] Hardware manual for miniVNA Tiny. Online [Accessed 22 April 2023] https://www.wimo.com/media/manuals/mrs/minivna_tiny_antennenanalysator_antenna-analyzer_hardware-manual_en.pdf
[33] C. Syms, Principal components analysis, Encyclopedia of Ecology, Five-Volume Set, Jan. 2008, pp. 2940–2949. doi: 10.1016/b978-008045405-4.00538-3
[34] L. E. Sebar, L. Iannucci, Y. Goren, P. Fabian, E. Angelini, S. Grassini, Raman investigation of corrosion products on Roman copper-based artefacts, Acta IMEKO 10(1) (2021), pp. 129–135. doi: 10.21014/acta_imeko.v10i1.858
[35] A. Savitzky, M. J. E. Golay, Smoothing and differentiation of data by simplified least squares procedures, Anal Chem 36(8) (1964), pp. 1627–1639. doi: 10.1021/ac60214a047
[36] M. Zeaiter, D. Rutledge, Preprocessing methods, Comprehensive Chemometrics 3 (2009), pp. 121–231. doi: 10.1016/b978-044452701-1.00074-0
[37] S. Wold, Cross-validatory estimation of the number of components in factor and principal components models, Technometrics 20(4) (1978), pp. 397–405.
doi: 10.2307/1267639

A 3D head pointer: a manipulation method that enables the spatial position and posture of supernumerary robotic limbs

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 81–90

Joi Oh 1, Fumihiro Kato 2, Yukiko Iwasaki 1, Hiroyasu Iwata 3
1 Waseda University, Graduate School of Creative Science and Engineering, Tokyo, Japan
2 Waseda University, Global Robot Academic Institute, Tokyo, Japan
3 Waseda University, Faculty of Science and Engineering, Tokyo, Japan

Section: Research paper
Keywords: VR/AR; hands-free interface; polar coordinate system; teleoperation; SRL
Citation: Joi Oh, Fumihiro Kato, Yukiko Iwasaki, Hiroyasu Iwata, A 3D head pointer: a manipulation method that enables the spatial position and posture for supernumerary robotic limbs, Acta IMEKO,
vol. 10, no. 3, article 13, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-13
Editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary
Received March 31, 2021; in final form September 6, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Joi Oh, e-mail: joy-oh0924@akane.waseda.jp

1. Introduction

In recent years, there has been a considerable amount of research and development on the use of supernumerary robotic limbs (SRLs) for 'body augmentation'. In previous studies, robotic technology, especially wearable robots, has been developed for use as prostheses for rehabilitation purposes. An SRL aims to provide its users with additional capabilities, enabling them to accomplish tasks that they would otherwise be incapable of performing. In this respect, an SRL differs from other types of existing wearable robots; the lightweight, high-torque and highly manoeuvrable SRL developed by Vernonia et al. [1] is a classic example. These robots can be used in any context, from helping individuals to perform household chores to improving industrial productivity. To effectively assist in routine tasks (e.g., opening an umbrella or stirring a pot), users require an interface that indicates the target point location to the end effector of the SRL without requiring them to interrupt their own actions. However, such a method has not yet been established. Parietti et al. [2], [3] developed a manipulation technique in which the operator's movements were monitored by a robot, following which the robotic arm performed the corresponding movements. Iwasaki et al. [4] proposed an interface that allowed the operator to actively control the SRL by using the orientation of the face, while Sasaki et al.
[5] developed a manipulation method that enabled more complicated operations of the robotic arm with the user's feet as the controllers. Previous studies have overlooked the balance between ensuring that the operator's limbs move freely and providing detailed instructions to the SRL, and there are further challenges with respect to multitasking in the context of daily life. Therefore, in this study, a method for manipulating SRLs so that two parallel tasks do not interfere with each other is proposed and then evaluated for its usefulness. In the present study, a two-stage experiment was conducted. This section describes the hypothesis behind the method, and Section 2 presents the method for position instruction along with the experimental results. In Section 3, a manipulation method that includes posture instructions is proposed, the experimental results are presented, and the two experiments are then discussed. Section 4 presents comparisons with other similar methods and discusses the limitations, and finally, Section 5 presents the conclusions.

Abstract: This paper introduces a novel interface, the '3D head pointer', for the operation of a wearable robotic arm in 3D space. The developed system is intended to assist its user in the execution of routine tasks while operating a robotic arm. Previous studies have demonstrated the difficulty a user faces in simultaneously controlling a robotic arm and their own hands. The proposed method combines a head-based pointing device and voice recognition to manipulate the position and orientation as well as to switch between these two modes. In a virtual reality environment, the position instructions of the proposed system and its usefulness were evaluated by measuring the accuracy of the instructions and the time required, using a fully immersive head-mounted display (HMD). In addition, the entire system, including posture instructions with two switching methods (voice recognition and head gestures), was evaluated using an optical transparent HMD. The obtained results displayed an accuracy of 1.25 cm and 3.56° with the 20-s time span necessary for communicating an instruction. These results demonstrate that voice recognition is a more effective switching method than head gestures.

The following two elements are considered essential for achieving daily support for parallel tasks: 1) undisturbed movement of the operator's limbs; 2) an indication of spatial position and posture. To date, several hands-free interfaces have been proposed to satisfy requirement 1, with some operated by the tongue [6], eye movement [7] or voice [8] and used for either screen control or robot manipulation (or both). Methods to control robotic limbs with brain waves [9] are also being investigated. However, this study focuses on requirement 2 and the construction of a more intuitive instructional method. When the operator provides directions related to a location in 3D space, they must accurately indicate the target point. The field of view within which a person can perceive the shape and position of an object is as narrow as 15° from the gazing point [10]; hence, to compensate, it is necessary to direct the face and gaze towards the instructional space when providing spatial position instructions. The interface proposed in this study takes advantage of this compensatory action and uses it as an instruction method. Methods for using the head as a joystick have already been proposed. One method involves the manipulation of the head for instruction in a 2D plane, such as on-screen operations [11]. Another method involves switching between the vertical and horizontal planes by nodding towards the plane to be manipulated, supplementing the plane manipulation by the head so that only the head is used to manage the 3D space [12].
However, these methods do not use the compensatory head motion as a manipulation technique.

2. Proposal for a positioning method using head bobbing

Turning one's head can be used to indicate the radial direction of the target point in polar coordinates. In this section, we propose a pointing interface that combines head bobbing with head orientation in a polar coordinate system. Head bobbing is a small back-and-forth motion of the head that does not interfere with the operator's movements. This research was based on the standard morphology of a Japanese man, as recorded by Kouchi et al. [13]. According to these data, the head-bobbing range was determined to be approximately 9.29 cm, which allows the operator to keep the zero-moment point in the torso of the body and operate a robotic arm without losing balance. A doughnut-shaped area with an innermost and outermost radius of 30 and 100 cm, respectively, around the operator was defined as an example of an SRL operating range [14]. The head-bobbing depth-change factor was therefore 70 / 9.29 = 7.53 or more. The range of motion that can be covered using head bobbing is considerably smaller than that of the arms. Preliminary experiments demonstrated that, at high magnification, the instructional accuracy of head bobbing was lower than that of other comparable methods, and the time required for an instruction was longer. An increase/decrease factor (IDF), which gradually changes the depth gain of the head-bobbing input based on head velocity, was therefore introduced. The IDF allows precise instructions while maintaining a high magnification. In this study, the IDF was constructed using the mouse-cursor change factor shown in Figure 1, as set by Microsoft Windows [15].

2.1. Evaluation test with a fully immersive head-mounted display

This section examines the usefulness of the IDF and of the 3D head pointer as a whole. This study was conducted based on the previously developed robotic arm proposed by Nakabayashi et al.
[14] and Amano et al. [16], as shown in Figure 2. The arm has a reach of up to 1 m, and its jamming hand, shown in Figure 3, can be used as an end effector to grasp an object, with an error of up to 3 cm [16]. Therefore, the allowable indication error of the interface in this experiment was set to 3 cm.

Figure 1. Microsoft's mouse-cursor speed-change settings [15].
Figure 2. External view of the robotic arm proposed by Nakabayashi et al. [14] and Amano et al. [16].
Figure 3. External view of the jamming hand.

In this study, the validation was performed in a virtual reality (VR) environment. The indication of radial direction by head orientation was measured from the front of the head-mounted display (HMD). The depth indicator was implemented by setting up a sphere with the operator at the centre, as shown in Figure 4, and by changing the radius of the sphere, controlled by head bobbing. An HMD is used in the proposed method (HTC Vive [17]). The experimental procedure is as follows:
1) The participant wears the Vive headset and grasps a Vive controller in each hand, holding them up in front of their chest, as shown on the right in Figure 5. This is defined as the 'rest position'. The subject's avatar is displayed in the VR space, as shown on the left in Figure 5.
2) The 3D head pointer's control cursor (the red ball in the centre of Figure 6) appears 65 cm in front of the eyes. Simultaneously, the target sphere with a 10-cm diameter (the blue transparent sphere in the upper-right corner of Figure 6) appears at any of the eight locations at a ±30-cm height, ±20-cm width and ±20-cm depth, and it is positioned ±20 cm from the cursor.
3) The participant aligns the cursor with the centre of the target sphere by using the 3D head pointer.
4) When the participant perceives that they have reached the centre of the target sphere, they verbalise the completion of the instruction.
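As a concrete reading of this pointing scheme, the sketch below maps head orientation to a radial direction and head bobbing, scaled by a velocity-dependent IDF, to the cursor depth. The depth range (30–100 cm) and the base factor 7.53 come from the text above, but the speed thresholds and the step-shaped gain profile are illustrative assumptions loosely modelled on a pointer-acceleration curve, not the authors' parameters.

```python
import math

DEPTH_MIN, DEPTH_MAX = 30.0, 100.0   # cm: example SRL operating range [14]

def idf_gain(head_speed, base_gain=7.53):
    """Velocity-dependent increase/decrease factor: slow head bobbing
    yields fine depth changes, fast bobbing coarse ones. The thresholds
    (cm/s) and multipliers are illustrative assumptions."""
    if head_speed < 1.0:
        return base_gain * 0.25
    if head_speed < 5.0:
        return base_gain
    return base_gain * 2.0

def update_cursor(depth, head_dx, dt, yaw_deg, pitch_deg):
    """One control tick: head-bobbing displacement head_dx (cm) over dt (s)
    scales the depth; head orientation gives the radial direction."""
    gain = idf_gain(abs(head_dx) / dt)
    depth = min(DEPTH_MAX, max(DEPTH_MIN, depth + gain * head_dx))
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    position = (depth * math.cos(pitch) * math.sin(yaw),   # right
                depth * math.sin(pitch),                   # up
                depth * math.cos(pitch) * math.cos(yaw))   # forward
    return depth, position

# Starting from the 65-cm default, a 0.5-cm bob in 0.1 s moves the
# cursor coarsely along the current gaze direction.
depth, pos = update_cursor(65.0, head_dx=0.5, dt=0.1, yaw_deg=0.0, pitch_deg=0.0)
print(round(depth, 2))   # 72.53
```

Clamping to the 30–100 cm ring keeps the cursor inside the assumed SRL workspace regardless of how fast the head moves.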
As shown in Figure 7, the target sphere has a reference frame with its origin at the centre of the sphere, and the participant adjusts the position of the cursor accordingly.
5) Steps 1)–4) are performed for all eight target sphere positions.

In the present study, the above-mentioned procedure was performed by two groups of six participants each. The experiments were performed once under different conditions for each group. Table 1 shows the experimental conditions and the group distribution. Group 1 was asked to perform the tasks described above but with a predefined time limit for instruction execution, while Group 2 was asked to perform the experiment either with or without the IDF. Figure 8 shows the relationship between head-bobbing speed and magnification. 'IDF not available' is a condition in which the rate of change in depth due to head bobbing is fixed at 10 times, without using the IDF. Based on the aforementioned experiments, the usefulness of the 3D head pointer was evaluated using the average indication error under condition (a) shown in Table 1, the relationship between the indication error and operation time under conditions (a)–(f), and the maximum arm sway of the subject, measured by the Vive controllers under condition (a). At the same time, the usefulness of the IDF was tested by comparing the instructional error between conditions (a) and (g).

2.2. Results and discussion on the fully immersive HMD

In this study, the Wilcoxon signed-rank test was used to verify the significant differences between two conditions. This is a nonparametric test used when the population does not follow a normal distribution. The difference Z_i = Y_i − X_i between the experimental values of two conditions X_i and Y_i for the i-th participant was obtained. Next, the Z_i were arranged by absolute value, and the rank R_i was assigned starting from the smallest |Z_i|. The Wilcoxon signed-rank test statistic W was then calculated as follows:

W = ∑_{i=1}^{n} φ_i R_i .
(1)

However, in this case, φ_i was calculated as

φ_i = { 1 (Z_i > 0); 0 (Z_i < 0) } .   (2)

Significant differences were calculated by comparing the test statistic W with the Wilcoxon signed-rank table [18]. In this experiment, instead of the table, the Excel statistics functions (Microsoft Inc.) were used to calculate significant differences.

Figure 4. 3D image of the head pointer operation.
Figure 5. The experimental interface operation. Left: instructional target spheres and participants within the VR; right: participant wearing the HMD and holding the controllers.
Figure 6. Subjective view of the user's experience.
Figure 7. Target sphere and cursor visibility.

Table 1. The experimental conditions and group distribution.

Condition  Requirement                                                     Group
(a)        No requirements                                                 1, 2
(b)        2-s time limit for instruction                                  1
(c)        3-s time limit for instruction                                  1
(d)        4-s time limit for instruction                                  1
(e)        6-s time limit for instruction                                  1
(f)        8-s time limit for instruction                                  1
(g)        Rate of change in depth due to head bobbing fixed at 10 times   2

2.2.1. Indication error

The instructional error, i.e. the distance from the centre of the target sphere to the control cursor, was measured upon completion of the instruction. This was done in VR by using the IDF-based 3D head pointer for 12 people, divided equally into two groups (1 and 2). The results are presented in Table 2. In this study, a jamming hand [16] capable of grasping an object with an error of up to 3 cm in target point indication was used as a reference-index end effector. The average error of the instructions in this experiment was approximately 1.32 cm, with the highest instructional error being 2.5 cm. These results suggest that the indication error of the 3D head pointer is within the range of error that can be absorbed when grasping and manipulating an object with the specific end effector.
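The signed-rank statistic of equations (1) and (2) can be sketched as follows. The paired data are invented for illustration (six participants, as in the experiments), and SciPy's `wilcoxon` is shown only as a cross-check of the hand-rolled statistic; note that SciPy reports the smaller of the two rank sums, which coincides with W here.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_statistic(x, y):
    """W = sum over i of phi_i * R_i (equation (1)), where R_i ranks |Z_i|
    (smallest first, ties averaged) and phi_i = 1 when Z_i = Y_i - X_i > 0,
    0 otherwise (equation (2)). Zero differences are discarded."""
    z = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    z = z[z != 0.0]
    ranks = rankdata(np.abs(z))
    return float(ranks[z > 0.0].sum())

# Invented paired errors for six participants (e.g. with/without the IDF).
x = [1.9, 2.4, 1.1, 3.0, 2.2, 1.7]
y = [1.2, 1.5, 0.9, 1.8, 2.5, 1.0]
W = signed_rank_statistic(x, y)
stat, p = wilcoxon(x, y)
print(W, stat)   # 2.0 2.0
```

With n = 6 pairs, the resulting p-value would then be read against the signed-rank table [18], as the authors did with Excel.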
The standard deviation of the indication error was 0.65 cm, and the error varied widely from person to person. This result may be related to each individual's level of familiarity with the use of a VR space, and the results were validated taking VR experience into consideration.

2.2.2. Change in indication error at each indication time

The experiment was conducted under conditions (a)–(f) for the six members of Group 1. The relationship between the instruction error and instruction time is shown in Figure 9. The average operation time under condition (a), with no time limit, was 6.2 s. When the operation time was limited, the indication error decreased rapidly as the time limit increased from 2 to 3 s. When the time limit was greater than 4 s, the error remained almost constant regardless of the time taken. This suggests that the operation with the 3D head pointer itself had already been completed by 4 s.

2.2.3. Maximum arm sway

The maximum arm sway of the six participants in Group 1 was measured from the movement of the Vive controllers while standing upright and compared to the maximum arm sway when the 3D head pointer was operated under condition (a). The results are presented in Figure 10. The comparison demonstrated that the maximum arm sway was greater with the 3D head pointer. However, the Wilcoxon signed-rank test did not show any significant difference between the two conditions (n = 6, p < 0.1), suggesting that the proposed method allows a user to continue performing regular arm movements while giving instructions. Because the proposed method requires visibility of the target space for performing tasks with the SRL, multitasking is sometimes impossible, and interruption of the task being performed by the user is unavoidable. However, if the operator's hand position can be maintained while using the 3D head pointer, the interrupted task can be resumed quickly after instructing the SRL; this is significantly more efficient than performing the two tasks separately.

Table 2. Average instruction error.

Subject   Instructional error (cm)
1         1.20
2         2.50
3         1.54
4         2.19
5         2.41
6         1.06
7         1.06
8         0.882
9         0.757
10        0.905
11        0.695
12        0.668
Average   1.32

Figure 8. Change in head-bobbing magnification with and without the IDF.
Figure 9. Instruction error per operating time in the evaluation test.
Figure 10. Maximum arm sway when standing upright and when operating the 3D head pointer.

2.2.4. Differences in indication error with and without the IDF

We conducted the experiment under conditions (a) and (g) for the six members of Group 2 and measured the instruction errors of the 3D head pointer as a whole and the depth-only instruction errors for head bobbing. The results are shown in Figure 11 and Figure 12, respectively. The use of the IDF reduced the average instruction error by approximately 77.6 % for the depth instruction by head bobbing and by approximately 67.0 % for the total error across the three axes (x, y, z). Additionally, a significant difference was observed between the conditions with and without the IDF in the Wilcoxon signed-rank test (n = 6, p < 0.05). It was therefore confirmed that the introduction of the IDF greatly improved the accuracy, demonstrating its usefulness. Nevertheless, it is still necessary to verify whether the accuracy can be further improved by additional fine-tuning of the parameters related to the magnification change ratio.

3. Proposal for combining the position and posture indication methods

The previous section showed the effectiveness of the position indications for the SRL. However, without posture instructions at the interface, the SRL cannot perform complex routine tasks (e.g., holding an umbrella at an angle against strong winds, or pouring the contents of a bottle into a cup). Some objects can only be grabbed from certain directions.
In this study, a method is proposed that uses the head to provide posture indications for the SRL. Because it is difficult to provide stereotactic and posture instructions simultaneously with the head, a 'switching indication' function was also proposed, which switches between position and posture indications.

3.1. Proposal for a posture-indication method using isometric input

Figure 13 shows that the human head can rotate about three axes, using Unity-chan (a humanoid model created by Unity Technologies Japan [19]) as the model. The use of the head-rotation axes (yaw, pitch and roll) for SRL posture indication facilitates intuitive instructions. However, the head has limited ranges of yaw, pitch and roll, from −60° to +60°, −50° to +60° and −50° to +50°, respectively [20]. If the displacement of the head were used as an input device, the SRL could not be instructed to assume a posture at an angle beyond the limits of the angle of the head. In addition, according to requirement 2) in Section 1, if the head moves more than 15°, the operation target will be outside the operator's effective field of view. In this study, the three-axis rotation of the head was therefore used as an isometric input-device parameter that determines the rotational velocity of the pointer according to the rotational angle of the head [21]. The maximum input angle of the head was set to 15°, the angle limit of the effective field of view. To avoid incorrect input, head rotations of ≤ 3° were not detected as inputs. The changes in rotational velocity were spherically interpolated using trigonometric functions. Figure 14 shows the relationship between the amount of head rotation and the rotation speed of the posture indicator.

Figure 11. Depth error based on head bobbing with and without the IDF.
Figure 12. Total error in the three axes due to the 3D head pointer with and without the IDF.
Figure 13. The three different rotation axes of the head.
Figure 14. The
relationship between head rotation angle and posture rotation speed.

The reference angle for head rotation is the direction that the user is facing when switching to the posture indication.

3.2. Proposal for a mode-switching method using voice recognition

An increase in the number of body parts used for manipulation is undesirable because it increases the load on the body. The switching method was therefore constructed using the head or the voice. In this study, two types of switching instruction methods were proposed and then compared in an evaluation test.

3.2.1. Voice-recognition-based switching indication method

A switching method based on voice recognition is less physically demanding and has less impact on the operator's limbs than physical operations. Table 3 lists the commands used for voice indications.

3.2.2. Head-gesture-based switching indication method

A method for switching between posture and position instructions using head gestures was also proposed. In this method, a 'head tilt' motion is performed to switch from position to posture instructions (top of Figure 15), while a 'head bobbing' motion is performed to switch from posture to position instructions (bottom of Figure 15). Because the user only has to indicate the operation mode required, the head-gesture-based switching method requires little cognitive load, and switching can be done intuitively.

3.3. Evaluation test with an optical transparent HMD

This section presents an evaluation of the usefulness of the posture and switching instructions in the 3D head pointer, as well as an evaluation of the usefulness of the 3D head pointer in real space. To operate the SRL as a real machine, the tip of the SRL and the target object must be visible. There are two ways to see the tip of the SRL on a real machine: by using a video transparent HMD or an optical transparent HMD [22].
The video transparent system may not be able to cope when the SRL malfunctions, because of the delay in viewing the actual device. In this experiment, the proposed method was therefore implemented on an optical transparent HMD (HoloLens 2 [23]) to evaluate the usefulness of the 3D head pointer as a whole. To provide posture instructions, the pointing cursor was changed from a red sphere to a blue-green bipyramid, as shown in Figure 16. The indication of the radial direction based on head orientation was measured from the front of the HMD. The depth indicator was implemented by changing the radius of the sphere through head bobbing, as described in Section 2.1. The amount of head rotation in the posture indication was determined by measuring the posture of the HMD.

Compared with position indication, it is difficult for the operator to judge the amount of input provided during posture indication. To display the user's head rotation visually, a user interface (UI) is shown during posture instruction, as in Figure 17. The white point on the UI is aligned with the centre and moves up, down, left and right according to the amount of yaw and pitch fed as input. The roll-angle input is displayed as a white circle in the UI, which rotates according to the amount of roll input. This UI allows the operator to see at a glance how much input they are providing. For speech recognition, Microsoft's Mixed Reality Toolkit was used [24].

Table 3. Voice command list.
- 'indicate position': switch from posture indication to position indication.
- 'indicate posture': switch from position indication to posture indication.
- 'finish': signals that the indication has been completed (used for the evaluation tests).

Figure 15. Top: switch to posture instruction; bottom: switch to position instruction.
Figure 16. Pointer cursor corresponding to posture indication.
Figure 17. Auxiliary user interface for posture instruction.

In this experiment, a pointing task was set up in which the target appeared in the air. The experimental procedure was as follows:

1) The subjects stood upright while wearing the HMD and a Bluetooth headset in a room with white walls.
2) The 3D head pointer cursor (the blue-green bipyramid in Figure 18) and the target (the purple bipyramid in Figure 18) were displayed in front of the participant. The target appeared at a random position within 15° to the left or right of the subject's direction of gaze and at a depth of between 30 and 100 cm, as shown in Figure 19. The direction of the target was chosen randomly from six possibilities: up, down, left, right, front and back.
3) The participant moved the cursor to the same position and posture as the target using the 3D head pointer. When the subject judged that the operation was complete, they said 'instruction complete' into the Bluetooth headset. Markers were displayed at the centre of the cursor and at the target position and rotation, as shown in Figure 18. These markers were always visible to the participant regardless of the position and posture of the cursor and target, and the operator relied on them for position and posture indications.
4) Steps 1)-3) were performed 12 times in succession in one experiment.

The evaluation experiment was conducted under the following two conditions: a) switching indications by voice recognition, and b) switching indications by head gesture. A verbal questionnaire was administered after the operation was complete. The experiment was conducted with a total of six men and six women in their 20s and 30s, with the order of conditions a) and b) randomised. Procedures 1)-4) were performed at least once as a practice run before the experiment, and additional practice was conducted until the subject judged that they were proficient.
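The two switching paths (the voice commands of Table 3 and the head gestures of Figure 15) can be captured in a minimal state machine; the class and method names below are illustrative, not from the paper:

```python
from enum import Enum

class Mode(Enum):
    POSITION = "position"
    POSTURE = "posture"

# Voice commands from Table 3 mapped to the mode they select.
VOICE_COMMANDS = {
    "indicate position": Mode.POSITION,
    "indicate posture": Mode.POSTURE,
}

class PointerModeSwitch:
    """Tracks the current indication mode of the 3D head pointer."""

    def __init__(self) -> None:
        self.mode = Mode.POSITION  # the pointer starts in position indication
        self.finished = False

    def on_voice(self, utterance: str) -> None:
        # "finish" marks the end of a trial; the other commands switch mode.
        if utterance == "finish":
            self.finished = True
        elif utterance in VOICE_COMMANDS:
            self.mode = VOICE_COMMANDS[utterance]

    def on_head_gesture(self, gesture: str) -> None:
        # Head tilt: position -> posture; head bobbing: posture -> position
        # (top and bottom of Figure 15, respectively).
        if gesture == "tilt" and self.mode is Mode.POSITION:
            self.mode = Mode.POSTURE
        elif gesture == "bob" and self.mode is Mode.POSTURE:
            self.mode = Mode.POSITION
```

In a real system, `on_voice` would be fed by the speech recogniser (with its inherent latency) and `on_head_gesture` by a gesture detector, which is why the two switching methods behave differently in the results that follow.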
Based on the above experiments, the usefulness of posture indication was verified according to the posture error and operation time. The usefulness of the switching instruction was verified by comparing the position error, posture error and operation time under each condition. Finally, the usefulness of the 3D head pointer as a whole was verified based on the position error, posture error and operation time. Section 3.4 describes these results.

3.4. Results and discussion on the optical transparent HMD

3.4.1. Position error and posture error

The average values of the position and angle errors under each condition are shown in Figure 20. In this experiment, the tolerances were set assuming the same use of the SRL as in the experiment discussed in Section 2.2.1, so the tolerance of the position indication was 3 cm. For the jamming hand of the SRL, when reaching vertically to a cylindrical or spherical object, the success rate for grasping did not decrease as long as the angular error was within 30° [16].

The average position error was approximately 1.25 cm for the voice switching method and approximately 2.82 cm for the head-gesture switching method, and a significant difference was observed between the two conditions in the Wilcoxon signed-rank test. This result demonstrates that the voice-recognition method indicates position more accurately. Since the error of the position instruction alone in Section 2.2.1 was 1.32 cm, this result shows that the head-gesture switching method has a negative effect on the accuracy of the position instruction. The increased error in the head-gesture condition can be attributed to a shift in the position indication: when the head is tilted to switch from position to posture instructions, the direction of the face moves accordingly. In the questionnaire, several participants commented that it was difficult to tilt the head without changing the direction of the face while indicating with the head gesture.

The average posture error was approximately 3.56° for the voice switching method and approximately 1.78° for the head-gesture switching method, and a significant difference was observed between the two conditions in the Wilcoxon signed-rank test. This result shows that the accuracy of posture indication is higher when using head gestures. This can be attributed to the fact that posture instruction is an isometric input: as long as the head is rotated away from the origin, the posture of the cursor continues to rotate. If the operator uses head gestures, the instruction can be switched rapidly to position instructions, so the cursor posture can be fixed at the moment the continuously rotating cursor reaches the target posture. In the voice-based switching method, there is a delay between the time the voice command is uttered and the time it is recognised as a command by the voice recogniser. The cursor may therefore continue to rotate during the interval in which the user wants to switch, and the delay before the operation actually switches to the position instruction results in a posture error.

These results show that voice-based switching is effective for position indication and head-gesture-based switching is effective for posture indication. Furthermore, although the posture error increases when switching by voice, even for the subject with the largest error the average error was 5.56°, which is within the acceptable range of 30°.

Figure 18. Cursor and target in the experiment.
Figure 19. Area where the target appears (blue area in the figure).
Figure 20. Top: error in position indication; bottom: error in posture indication.
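The paired comparisons above rely on the Wilcoxon signed-rank test. For small paired samples it can be computed exactly by enumerating the null distribution; the sketch below runs on synthetic per-subject errors (illustrative values, not the study's data):

```python
from itertools import product

def wilcoxon_exact_p(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value for paired samples.

    Assumes no zero differences and no tied absolute differences, which
    holds for small samples of continuous measurements.
    """
    d = [a - b for a, b in zip(x, y)]
    # Rank differences by absolute value (rank 1 = smallest |d|).
    rank = {v: i + 1 for i, v in enumerate(sorted(d, key=abs))}
    w_pos = sum(rank[v] for v in d if v > 0)
    n = len(d)
    w_min = min(w_pos, n * (n + 1) // 2 - w_pos)
    # Enumerate the null distribution: under H0 each rank contributes to
    # the positive-rank sum with probability 1/2.
    count = 0
    for signs in product((0, 1), repeat=n):
        w = sum(r * s for r, s in zip(range(1, n + 1), signs))
        if w <= w_min:
            count += 1
    return min(1.0, 2 * count / 2 ** n)

# Synthetic per-subject position errors (cm), illustrative only:
voice = [1.10, 1.25, 1.32, 1.40, 1.18, 1.22]
gesture = [2.50, 2.90, 2.82, 3.10, 2.60, 2.75]
p = wilcoxon_exact_p(voice, gesture)  # all differences favour voice: p = 0.03125
```

With six pairs all favouring one condition, the smallest achievable two-sided p-value is 2/2⁶ = 0.03125, which is why a consistent effect across even a small subject pool can reach significance at the 5 % level.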
However, the subject with the largest error in the head-gesture-based switching condition had an average position error of 6.74 cm, which is far beyond the acceptable position error. Thus, it can be concluded that the voice-based switching method is more useful in terms of instruction accuracy, as all of its values are within the acceptable error range for the SRL assumed in this experiment.

3.4.2. Operation time

The mean values of the operation time under each condition are shown in Figure 21. The average operating time was approximately 20.3 s for the voice switching method and approximately 20.8 s for the head-gesture switching method; no significant difference was found between the two conditions in the Wilcoxon signed-rank test, indicating that the two switching methods are equivalent in terms of operation time. Combined with the results on instruction accuracy, this suggests that voice switching is the more practical option. Moreover, the average operation time for position instructions alone, as discussed in Section 2.2.2, was 6.2 s; in this experiment, the operation time was roughly three times longer owing to the addition of the posture and switching indications. In addition, the participant with the longest average operation time took about three times as long as the participant with the shortest. When the subjects were asked in the questionnaire about the cause of the increased operation time, some explained that the operation took longer when the posture indication did not go well.
The causes of the delays in posture indication were as follows: 1) when giving posture instructions, an incorrect rotation was sometimes fed as input by mistake; 2) compared with position instructions, it was difficult to correct errors when they occurred; 3) it was difficult to understand the posture of the cursor or target during rotation instructions.

Cause 1) arises because posture manipulation by intentionally moving the neck along three axes is not something performed in daily life. Cause 2) reflects the time needed to correct an error: in posture indication an error has to be corrected by indicating an amount of displacement, in contrast to position indication, where the correct position can be specified directly. Cause 3) is related to depth and size perception in peripheral vision. The permissible eccentricity for recognising the position and shape of an object in peripheral vision is 15° [10], but the perceptible eccentricity for depth is less than 12.5°, and that for size is less than 5° [25]. Moreover, the accuracy of both depth and size perception decreases with eccentricity from the gazing point. Because posture indication requires recognising the posture of an object from changes in the size and depth of each side of the cursor or target, it demands more visual information than position indication. For these reasons, it was difficult to recognise the posture of the object when the face was turned away by up to 15° during posture manipulation.

3.4.3. Evaluation of the usefulness of the 3D head pointer as a whole

In the case of the voice switching method, the errors in both position and posture indication were within the acceptable range, suggesting that the accuracy of the 3D head pointer is also sufficient for indications in real space through an optical transparent HMD.
In terms of operation time, there was large variation and the indication time was not stable, indicating room for improvement. Improving the posture instruction, the most significant factor in the increased operation time, is considered the most effective step, and from the results of the questionnaire the improvements to be made are as follows: 1) construct the manipulation method using routine head movements; 2) use isotonic input; 3) do not take the operator's gaze away from the gazing point.

Of these, 1) and 2) can be addressed by using face orientation for posture indication, although there remains the problem of how to provide posture instructions that require rotating the head beyond its movable angle limit. Regarding 3), when the operator takes their gaze off the cursor and target object in the posture indication state, usability can be improved by continuing to display the target object and cursor in front of the operator in augmented reality (AR). However, displaying real objects in AR in real time is a demanding process for AR devices; to achieve it, the processing load must be reduced, for example by detecting the mesh of objects in real space and displaying that mesh instead.

Figure 21. The mean values of the operation time.

4. Discussion on the practical application of a 3D head pointer

In this section, the practical application of the proposed method is discussed. The advantages of the 3D head pointer can be clarified by comparing it with other manipulation methods. Following the comparison, concerns about using this interface in real life are discussed.

4.1. Comparison with other similar methods

Based on the results of the previous section, the proposed method was compared with other similar methods.

a.
Physical controller. Some SRLs, such as the one by Véronneau et al. [1], use a physical controller similar to a gamepad, with an analogue stick and buttons, as the method of operation. The advantage of the 3D head pointer is that its operation is more intuitive and easier to understand than that of a physical controller, and it can be operated hands-free.

b. SRL manipulation using the feet. The proposed method can operate the SRL in any standing or seated position, unlike methods operated by the feet [5]. However, manipulation with the feet can indicate the position and attitude of the SRL simultaneously, so a short operation time is the main advantage of foot operation.

c. Head joystick and nodding to switch between the vertical and horizontal planes. Because the 3D head pointer uses the compensatory motion of the head, it imposes a lower operational burden than methods that use the head as a joystick [11], [12]. In contrast, the nodding method [12] allows digital input from the head alone and could be used in conjunction with the 3D head pointer.

4.2. Limitations

In this study, voice recognition was used to give instructions such as switching, but voice recognition has the disadvantage of not working in a noisy environment or while the operator is having a conversation. Some prior examples of command-type instructions use gaze to provide instructions [26], [27]. Combining pointing instructions with the head and gaze-based instructions could provide a more flexible environment for SRL indication. If the SRL is to be used for complex or long movements in daily life, the movements must be registered and played back. Registering and replaying behaviours requires many commands, but the number of command-type instructions that can be intuitively memorised and selected is as few as six [28]. When building a system with seven or more commands, it is necessary to devise a way to help the operator remember them, such as displaying a menu screen in the HMD.
5. Conclusions

In this study, a spatial position and posture indication interface for SRLs was proposed to improve efficiency in the execution of routine tasks. The functions required for indicating spatial position and posture were described, and a position indication method, the 3D head pointer, was proposed, which combines head-bobbing-based depth indication with polar direction indication by face orientation. Evaluation tests of the 3D head pointer and the IDF were conducted in a VR environment. The results showed that the 3D head pointer had sufficient accuracy without requiring the operator to interrupt their actions. In addition, to provide posture as well as position guidance with the 3D head pointer, a posture guidance method using head rotation as isometric input and two switching guidance methods, based on voice recognition and head gestures, were proposed. A comparative study of the two switching methods using an optical transparent HMD and a test evaluating the usefulness of the 3D head pointer as a whole were then conducted. The results showed that the voice-recognition-based switching method was effective for the assumed SRL, and it was confirmed that the 3D head pointer was sufficiently accurate for operating robotic arms through an optical transparent HMD. These results provide useful knowledge for improving SRL interfaces. In the future, an intuitive posture instruction method will be developed that is not affected by compensatory head movements and that incorporates a command instruction method replacing voice recognition. In addition, the SRL interface will be considered for use as a third arm in situations, such as banquets and construction sites, where an individual's own hands are not sufficient.
Acknowledgement

This research is supported by the Waseda University Global Robot Academic Institute, the Waseda University Green Computing Systems Research Organization and by JST ERATO grant number JPMJER1701, Japan.

References

[1] C. Veronneau, J. Denis, L. Lebel, M. Denninger, V. Blanchard, A. Girard, J. Plante, Multifunctional 3-DOF wearable supernumerary robotic arm based on magnetorheological clutches, IEEE Robotics and Automation Letters 5 (2020), pp. 2546-2553. DOI: 10.1109/lra.2020.2967327
[2] C. Davenport, F. Parietti, H. H. Asada, Design and biomechanical analysis of supernumerary robotic limbs, Proc. of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Fort Lauderdale, Florida, United States, 17-19 October 2012, pp. 787-793. DOI: 10.1115/dscc2012-movic2012-8790
[3] H. H. Asada, F. Parietti, Supernumerary robotic limbs for aircraft fuselage assembly: body stabilization and guidance by bracing, Proc. of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 2014, pp. 119-125. DOI: 10.1109/icra.2014.6907002
[4] Y. Iwasaki, H. Iwata, A face vector - the point instruction-type interface for manipulation of an extended body in dual-task situations, IEEE International Conference on Cyborg and Bionic Systems, Shenzhen, China, 25-27 October 2018, pp. 662-666. DOI: 10.1109/cbs.2018.8612275
[5] T. Sasaki, M. Saraiji, K. Minamizawa, M. Inami, MetaArms: body remapping using feet-controlled artificial arms, Proc. of the 31st Annual ACM Symposium on User Interface Software and Technology, New York, United States, 14 October 2018, pp. 65-74. DOI: 10.1145/3242587.3242665
[6] S. G. Terashima, J. Sakai, T. Ohira, H. Murakami, E. Satho, C. Matsuzawa, S. Sasaki, K. Ueki, Development of a tongue operative joystick for proposal of development of an integrated tongue operation assistive system (i-to-as) for seriously disabled people, The Society of Life Support Engineering 24 (2012), pp. 201-207. DOI: 10.5136/lifesupport.24.201
[7] R. Barea, L. Boquete, M. Mazo, E. Lopez, System for assisted mobility using eye movements based on electrooculography, IEEE Transactions on Neural Systems and Rehabilitation Engineering 10 (2002), pp. 209-218. DOI: 10.1109/tnsre.2002.806829
[8] R. C. Simpson, S. P. Levine, Voice control of a powered wheelchair, IEEE Transactions on Neural Systems and Rehabilitation Engineering 10 (2002), pp. 122-125. DOI: 10.1109/tnsre.2002.1031981
[9] S. Nishio, C. I. Penaloza, BMI control of a third arm for multitasking, Science Robotics 3 (2018) 20. DOI: 10.1126/scirobotics.aat1228
[10] T. Miura, Behavioral and visual attention, Kazama Shobo, Chiyoda, Japan, 1996, ISBN 978-4-7599-1936-3.
[11] R. Hasegawa, Device for input via head motions, patent WO 2010/110411 A1, Japan, 30 September 2010.
[12] A. Jackowski, M. Gebhard, A. Gräser, A novel head gesture based interface for hands-free control of a robot, Proc. of the IEEE International Symposium on Medical Measurements and Applications, Benevento, Italy, 15-18 May 2016, pp. 1-6. DOI: 10.1109/memea.2016.7533744
[13] M. Kouchi, M. Mochimaru, AIST anthropometric database, pub. National Institute of Advanced Industrial Science and Technology, Japan, January 2005. Online [Accessed 4 September 2021]: https://www.airc.aist.go.jp/dhrt/91-92/fig/91-92_anthrop_manual.pdf
[14] L. Drohne, K. Nakabayashi, Y. Iwasaki, H. Iwata, Design consideration for arm mechanics and attachment positions of a wearable robot arm, Proc. of the IEEE/SICE International Symposium on System Integration, Paris, France, 14-16 January 2019, pp. 645-650. DOI: 10.1109/sii.2019.8700355
[15] Windows Dev Center Hardware, Pointer ballistics for Windows XP, 2002.
Online [Accessed 4 September 2021]: http://archive.is/20120907165307/msdn.microsoft.com/en-us/windows/hardware/gg463319.aspx#selection-165.0-165.33
[16] K. Amano, Y. Iwasaki, K. Nakabayashi, H. Iwata, Development of a three-fingered jamming gripper for corresponding to the position error and shape difference, RoboSoft 2019 - IEEE International Conference on Soft Robotics, Seoul, Korea (South), 14-18 April 2019, pp. 137-142. DOI: 10.1109/robosoft.2019.8722768
[17] HTC Vive, 2011. Online [Accessed 4 September 2021]: https://www.vive.com/eu/product/vive/
[18] C. Zaiontz, Wilcoxon signed-ranks table, 2020. Online [Accessed 4 September 2021]: http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/
[19] Unity Technologies Japan/UCL, Unity-chan!, 2014. Online [Accessed 4 September 2021]: https://unity-chan.com/
[20] Committee on Physical Disability, Japanese Orthopaedic Association, Joint range of motion display and measurement methods, Japanese Journal of Rehabilitation Medicine 11 (1974), pp. 127-132. DOI: 10.2490/jjrm1963.11.127
[21] S. A. Douglas, A. K. Mithal, The ergonomics of computer pointing devices, Springer, London, 1997.
[22] J. P. Rolland, R. L. Holloway, H. Fuchs, Comparison of optical and video see-through, head-mounted displays, Proc. of the International Society for Optical Engineering, 21 December 1995, pp. 292-307. DOI: 10.1117/12.197322
[23] HoloLens 2. Online [Accessed 4 September 2021]: https://www.microsoft.com/en-us/hololens/buy
[24] Mixed Reality Toolkit. Online [Accessed 4 September 2021]: https://hololabinc.github.io/mixedrealitytoolkit-unity/readme.html
[25] A. Yasuoka, M. Okura, Binocular depth and size perception in the peripheral field, Journal of the Vision Society of Japan 23 (2011), pp. 103-114. DOI: 10.24636/vision.23.2_103
[26] M. Yamato, A. Monden, Y. Takada, K. Matsumoto, K. Tori, Scrolling the text windows by looking, Transactions of the Information Processing Society of Japan 40 (1999), pp. 613-622.
Online [Accessed 4 September 2021]: https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=12841&item_no=1&page_id=13&block_id=8
[27] T. Ohno, Quick menu selection task with eye mark, Transactions of the Information Processing Society of Japan 40 (1999), pp. 602-612. Online [Accessed 4 September 2021]: https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=12840&item_no=1&page_id=13&block_id=8
[28] Y. Iwasaki, H. Iwata, Research on a third arm: analysis of the cognitive load required to match the on-board movement functions, poster presented at: The Japanese Society for Wellbeing Science and Assistive Technology, 6-8 September 2018, Tokyo, Japan, session no. 2-4-1-2.
LED-to-LED wireless communication between divers

Acta IMEKO, ISSN: 2221-870X, December 2021, volume 10, number 4, pp. 80-89

Fabio Leccese1, Giuseppe Schirripa Spagnolo2
1 Dipartimento di Scienze, Università degli Studi "Roma Tre", Via della Vasca Navale n. 84, 00146 Roma, Italy
2 Dipartimento di Matematica e Fisica, Università degli Studi "Roma Tre", Via della Vasca Navale n. 84, 00146 Roma, Italy

Section: Research paper

Keywords: underwater communication; visible light communications; optical wireless communication; bidirectional communication; LED; photo detector

Citation: Fabio Leccese, Giuseppe Schirripa Spagnolo, LED-to-LED wireless communication between divers, Acta IMEKO, vol. 10, no.
4, article 15, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-15

Section editor: Francesco Lamonaca, University of Calabria, Italy

Received October 4, 2020; in final form December 6, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Fabio Leccese, e-mail: fabio.leccese@uniroma3.it

1. Introduction

Currently, the use of wireless communications is very common in a wide range of terrestrial devices. In the underwater world, wireless information transfer is of great interest to the military: it plays an important role in military raids carried out by a team of divers, where, for safety and for coordinating actions, a secure and reliable bidirectional communication system is useful. Nowadays, underwater wireless communications are implemented almost exclusively via acoustic waves because of their relatively low attenuation [1], [2]. Visible light communication (VLC) is a technology that employs light in the spectral range from 400 nm to 700 nm as a data carrier; VLC techniques transmit data wirelessly by pulsing visible light. This new technology, called Li-Fi, can replace Wi-Fi connections based on radio-frequency waves [3]-[6].

Beer's law is usually used to relate the absorption of diffuse light to the properties of the medium through which the light travels. Mathematically [7], [8]:

P(λ, r) = P0 · exp(−Kd(λ) · r) ,   (1)

where P0 is the initial transmitted power and P(λ, r) is the residual power after a light beam with wavelength λ has travelled the distance r through a medium with diffuse attenuation coefficient Kd(λ).
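Equation (1) is straightforward to evaluate numerically; a small sketch follows, where the Kd value in the usage example is illustrative rather than taken from the paper:

```python
import math

def residual_power(p0: float, kd: float, r: float) -> float:
    """Beer's law, eq. (1): P(lambda, r) = P0 * exp(-Kd(lambda) * r).

    p0 : transmitted power (W)
    kd : diffuse attenuation coefficient at the chosen wavelength (1/m)
    r  : path length through the water (m)
    """
    return p0 * math.exp(-kd * r)

# Illustrative: with an assumed Kd = 0.05 1/m (very clear water in the
# blue-green band), after 20 m about 37 % of the power remains.
remaining = residual_power(1.0, 0.05, 20.0)
```

The exponential dependence on Kd(λ) · r is what makes the choice of wavelength so critical: doubling the attenuation coefficient squares the fractional loss over the same path.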
Abstract: For military divers, having a robust, secure and undetectable wireless communication system available is a fundamental element. Wireless intercoms using acoustic waves are currently used; these systems, even if reliable, have the defect of being easily identifiable and detectable. Visible light can pass through sea water, so light can be used to develop short-range wireless communication systems. To realise secure close-range underwater wireless communication, underwater optical wireless communication (UOWC) can be a valid alternative to acoustic wireless communication. UOWC is not a new idea, but the problem of the presence of sunlight and the possibility of using near-ultraviolet (near-UV) radiation have not yet been adequately addressed in the literature. In military applications, the possibility of using invisible optical radiation can be of great interest. In this paper, a feasibility study is carried out to demonstrate that UOWC can be performed using near-ultraviolet radiation. The proposed system can be useful for wireless voice communications between military divers as well as amateur divers.

Figure 1 shows the attenuation coefficient of three typical ocean water types, I, II and III, and five coastal water types, 1, 3, 5, 7 and 9, where the lower numbers correspond to clearer waters; the classification corresponds to the Jerlov water types [9]-[11]. Light with longer wavelengths is absorbed more quickly than light with shorter wavelengths. Because of this, higher-energy light with short wavelengths, such as blue-green, is able to penetrate more deeply. In the open ocean, below 100 m depth, only blue-green radiation is present [12]. The blue component of sunlight can even reach depths of up to 1000 m, although in quantities so low that photosynthesis is not possible [12]. Figure 1 shows that the minimum absorption lies between 460 nm and 580 nm, depending on the type of water. Therefore, VLC technology is being extensively studied as an alternative solution for short-range underwater communication links [13]-[28].

UOWC is not, in fact, a new idea. After the pioneering works of the 1980s [29]-[31], in 2009 Doniec et al. [32] developed a 5-metre underwater wireless optical communication link (called AquaOptical) with a 1 Mbps data rate. Later, in 2015, Rust et al. [33] implemented a UOWC system for use in remotely operated vehicles (ROVs) for the inspection of nuclear power plants. In addition, some systems are currently commercially available [34]-[37]. Unfortunately, the performance of UOWC is currently limited to short range [38]. In some specific situations, however, short-range communication is more than enough, and there are circumstances where short-range communication is needed without large bandwidth. A typical example is communication between divers.

The most common forms of communication between divers are hand signals [39] and underwater writing slates [40], [41]. Figure 2 shows two examples of standard diver hand signals and a dive slate. The dialect of diver hand signals includes only plain and precise gestures that are easily identifiable; this allows only simple communications and requires extensive memorisation. Slates, on the other hand, do not allow communication in real time: it takes time to write and to attract the attention of the underwater partners. Recently, full-face diving masks with snorkels have been introduced that allow the diver to breathe and speak normally inside the mask [42], [43]. For this type of mask, reliable underwater intercoms have been developed to allow divers to talk to each other underwater [44], [45]. A transducer is attached to the diver's face mask; this transducer converts the voice into an ultrasonic signal.
each diver of the team has an ultrasonic receiver, which accepts the signal and converts it back into a sound that the divers can hear, enabling communication. this type of communication system can be used by amateur or professional divers. figure 3 shows two commercially available underwater intercom systems. during military raids with divers, it is very important that the various components of the command can communicate with each other. unfortunately, hand signs do not allow complex information to be communicated, and the use of a dive slate can be incompatible with operational timing. audio communication is essential for the complex communications needed in military actions. another key problem in military communications is that they must be secure and undetectable. unfortunately, the acoustic waves that travel in water are easily detectable; therefore, their use is not convenient during critical military missions. in this scenario, uowc is a good alternative to acoustic communication [46]. it has the advantage that it cannot be intercepted. this specific application does not require long-range, high-bandwidth communications; therefore, the usable systems can be simple, small, lightweight and low-power. figure 4 shows a typical uowc link between divers. the information could be transmitted through a special torch and captured by sensors positioned on the diving suit. unfortunately, communications with visible light suffer from noise generated by solar background or artificial light sources, and special precautions must be taken to minimize this noise [47]. it would therefore be convenient to implement uowc systems that use optical radiation different from that normally present in water. figure 1. diffuse attenuation coefficient 𝐾𝑑(𝜆) for several oceanic and coastal water types according to the jerlov classification. curves obtained from the data presented in [9]-[11]. figure 2. examples of standard diver hand signals and of a dive slate. figure 3.
commercially available underwater intercom systems. figure 4. optical communication between divers. in addition, during communications between military divers it would be useful to use light that is not visible to normal video-surveillance systems. the main purpose of this paper is to verify the feasibility of a communication system that can be used by military divers. the system must be simple, robust, consume little energy, be unaffected by ambient light and, above all, be difficult to detect and/or intercept by video-surveillance systems sensitive to visible radiation. to obtain this performance it is necessary to avoid the use of blue-green radiation, which is present in the solar radiation that penetrates the water. in addition, visible radiation must be avoided, since it is easily detectable at night by underwater video-surveillance systems. in this paper, an underwater near-ultraviolet light communication system is proposed. the proposed system uses as emitter (tx) a uv led with peak wavelength λ = 385 nm and half-width δλ = 15 nm, while a photodiode, made with an led identical to the one used as transmitter, is used as receiver (rx). this system intrinsically has low sensitivity to ambient light and produces an invisible communication channel. since there are video-surveillance systems that have good sensitivity in the blue-green spectral band [48], the use of radiation in the near uv allows relatively good penetration of the radiation into the water while remaining invisible to these video-surveillance systems. the system works well for short-range communications where large bandwidth is not required. for example, if only speech transmission is of interest, a data rate of 32 kbit/s is generally acceptable. with this type of communication, it is possible to create simple, small, light, robust and energy-efficient systems. 2.
underwater communications by uv-a radiation

a part of the solar radiation spectrum overlaps with the radiation commonly used for visible light communication (vlc) [49]. therefore, it is very difficult to attenuate the effects of sunlight without loss of useful signal. in the presence of sunlight, the receivers see very high white noise and can often go into saturation. to mitigate the problems deriving from solar radiation (and, in general, from the ambient light present), it is possible to use near-ultraviolet radiation for the communication channel. generally, solar intensity decreases with depth. by examining how light is absorbed in water (see figure 1), we see that the best wavelengths to use in uowc are 450 nm – 500 nm for clear waters and 570 nm – 600 nm for coastal waters. the same attenuation also applies to the solar spectrum [50]. in any case, at a depth of a few meters, the solar radiation in the near ultraviolet is practically absent. furthermore, this radiation is relatively weakly attenuated in clear ocean waters, though more strongly in coastal waters (see figure 1). for these reasons, submarine communication systems that use uv-a band communication channels are extremely interesting. we must also observe two other important characteristics of optical communication that uses the near ultraviolet. first, this communication channel is difficult to detect and intercept, a particularly attractive feature for military applications. second, the use of ultraviolet radiation allows wireless connections to be made without requiring perfect alignment between transmitter and receiver (nlos uv scattering communication) [51], [52], a very useful characteristic for wireless transmission between moving objects.

3. led usable as light detector

in addition to emitting light, leds can also be employed as light sensors/detectors [53]-[60].
figure 5 schematically shows this application. in addition, the led can also be used as a temperature sensor [61]. to verify the possibility of underwater communication through uv radiation, we have chosen to use a reverse-polarized led as detector. this choice was made to have an inexpensive photodetector that is not very sensitive to the light radiation present in the environment, without the need for filters that cut visible radiation. leds can also be used as avalanche photodiodes (apds) [62], [63]. unlike normal photodiodes, leds detect only a narrow band of wavelengths; they are spectrally selective detectors. in contrast, normal photodiodes have a wide spectral response and require costly filters to detect a specific wavelength. both leds and photodiodes have a sensitivity that is stable over time, whereas filters have a limited life. in a p-n diode, inside the junction, there are free charges generated by thermal energy. when a p-n junction diode is reverse biased, these charges are accelerated, and this movement of charges produces the reverse current of the diode. if the reverse polarization potential is increased, the free charges can acquire enough energy to ionize some atoms of the crystal lattice. this ionization produces additional free charges, which are in turn accelerated by the polarization potential. this creates an avalanche effect, producing a large reverse current (breakdown current). the polarization voltage at which this arises is called the zener potential [64]. when an led is used as a light detector, the photocurrents generated are generally linear but very small; in uowc applications, we have currents in the range of nanoamperes. therefore, for their correct subsequent signal processing, it is necessary to transform the detected current into a suitable voltage. for this operation, transimpedance amplifiers are commonly used [65].
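the current-to-voltage conversion can be sketched numerically. in the sketch below, the ideal transimpedance relation v_out = r_f · i_pd is evaluated for nanoampere-level photocurrents, and a common closed-loop bandwidth estimate for a single stage is computed; the feedback resistance, gain-bandwidth product and input capacitance are illustrative assumptions, not values from the circuit described here:

```python
import math

def transimpedance_out_v(i_pd_a, r_f_ohm):
    """ideal transimpedance output: v_out = r_f * i_pd."""
    return r_f_ohm * i_pd_a

def tia_bandwidth_hz(gbw_hz, r_f_ohm, c_in_f):
    """common estimate of the -3 dB closed-loop bandwidth of a single
    transimpedance stage: f ~ sqrt(gbw / (2*pi * r_f * c_in))."""
    return math.sqrt(gbw_hz / (2 * math.pi * r_f_ohm * c_in_f))

# nanoampere photocurrents need megaohm-level gain to reach readable voltages
for i_pd in (1e-9, 10e-9, 100e-9):
    v = transimpedance_out_v(i_pd, r_f_ohm=10e6)   # assumed 10 MΩ feedback
    print(f"{i_pd * 1e9:5.0f} nA -> {v * 1e3:8.3f} mV")

# splitting the gain over two stages (lower r_f per stage) widens the passband,
# which is consistent with the two-stage amplifier described in the text
f1 = tia_bandwidth_hz(gbw_hz=100e6, r_f_ohm=10e6, c_in_f=20e-12)
f2 = tia_bandwidth_hz(gbw_hz=100e6, r_f_ohm=1e6, c_in_f=20e-12)
print(f"single 10 MΩ stage: ~{f1 / 1e3:.0f} kHz; 1 MΩ stage: ~{f2 / 1e3:.0f} kHz")
```

the numbers only illustrate the trade-off (higher transimpedance gain lowers the achievable bandwidth), not the actual component values of the amplifier in figure 7.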
the amplitude of the signal received by the led, and subsequently amplified by the transimpedance amplifier, depends on many external parameters. for this reason, the transmitted optical signal must be suitably digitized and modulated. our system uses a modulation format based on pulse width modulation (pwm).

4. uv led-to-led communication system

in underwater optical wireless transmission, the signal reaching the receiver has low intensity. for this reason, extensive studies are underway on very sensitive detectors such as avalanche photodiodes (apds) or single-photon avalanche diodes [66]-[75]. with very responsive photosensors, however, the presence of ambient light becomes a major problem [76]. with an led-to-led transmission system that uses uv leds, it is possible to implement an underwater communication system with invisible radiation that is not very sensitive to ambient light. figure 5. basic led used as light emitter and receiver. led-to-led communication systems are characterized by low cost, low complexity and, above all, low energy consumption. on the other hand, they can be used only when the exchange of messages occurs over a small distance without the need for large bandwidth [76], [77]. as already mentioned, ultraviolet radiation is practically absent underwater. therefore, using a uv led as light emitter and a uv led, operated as an apd, as receiver allows a system that is not very sensitive to ambient light; an led can detect radiation with a wavelength slightly shorter than or equal to that emitted (internal photoelectric effect) [56], [57], [78]. the same type of led can thus be used as receiver and as transmitter, which is useful in half-duplex communication systems: the same led can act as a transmitter or as a receiver. in this work, we have used a bivar uv5tz-385-30 led as transmitter and receiver [79].
this led has a viewing angle of 30° and an aperture area of 25·10⁻⁶ m². the seawater light transmission model is shown in figure 6. the optical power at the receiver can be written as [80]-[83]:

𝑃rx = 𝑃tx ∙ 𝜂tx ∙ 𝜂rx ∙ exp[−𝐾d(𝜆) ∙ 𝑧 / cos 𝜃] ∙ (𝐴rx ∙ cos 𝜃) / (2π ∙ 𝑧² ∙ (1 − cos 𝜃0)) ,  (2)

where 𝑃tx is the transmitted power, 𝜂tx and 𝜂rx are the optical efficiencies of the tx and rx respectively, 𝐾d(𝜆) is the attenuation coefficient, 𝑧 is the perpendicular distance between the tx plane and the rx plane, 𝜃0 is the tx beam divergence angle, 𝜃 is the angle between the perpendicular to the rx plane and the tx–rx trajectory, and 𝐴rx is the receiver aperture area. in our system, we experimentally verified that the received signal is correctly reconstructed if the misalignment is 𝜃 < 20°. the transmitter led was driven at 25 ma by means of a pulse generator, while the current generated by the led used as receiver, reverse biased with a voltage of 15 v, was read through a transimpedance amplifier. two ultralow-noise precision high-speed op amps [84] were used to implement the transimpedance amplifier. the amplifier, as shown in figure 7, is made in two stages, in order to obtain a passband greater than 100 khz. the rx and tx leds, together with the relative control electronics, were inserted in a tank filled with real seawater (water taken from the tyrrhenian coast at anzio, italy). the leds were placed 50 cm apart, facing each other. figure 8 shows the experimental setup used for the tests. the experimental tests were carried out in the laboratory and outdoors in different ambient-brightness configurations: figure 8(a) shows the system working in the laboratory, and figure 8(b) shows the system working outdoors in full sun. all the tests carried out confirmed that the system is practically insensitive to ambient light (both artificial and natural). the experimental setup was realized so as to obtain three different lengths of the optical channel.
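equation (2) can be evaluated numerically. in the sketch below, the transmitted power, optical efficiencies and attenuation coefficient are illustrative assumptions (only the 25·10⁻⁶ m² aperture and the 30° viewing angle come from the text), so the printed power is an order-of-magnitude example, not a measured value:

```python
import math

def received_power_w(p_tx, eta_tx, eta_rx, kd, z, theta, theta0, a_rx):
    """line-of-sight uowc link budget, as in equation (2).
    p_tx: transmitted optical power [W]; eta_tx, eta_rx: optical efficiencies;
    kd: diffuse attenuation coefficient [1/m]; z: tx-rx plane distance [m];
    theta: misalignment angle [rad]; theta0: tx beam divergence angle [rad];
    a_rx: receiver aperture area [m^2]."""
    path_loss = math.exp(-kd * z / math.cos(theta))
    geometry = (a_rx * math.cos(theta)) / (2 * math.pi * z ** 2 * (1 - math.cos(theta0)))
    return p_tx * eta_tx * eta_rx * path_loss * geometry

# illustrative values: 10 mW emitter, kd = 0.15 1/m, aligned link at 0.5 m,
# theta0 = 15° (half of the led's 30° viewing angle), a_rx = 25e-6 m^2
p_rx = received_power_w(p_tx=10e-3, eta_tx=0.9, eta_rx=0.9,
                        kd=0.15, z=0.5, theta=0.0,
                        theta0=math.radians(15), a_rx=25e-6)
print(f"received power ~ {p_rx * 1e6:.2f} µW")
```

as expected from the 1/𝑧² geometric term and the exponential attenuation, the received power drops rapidly with distance, which is why the system targets short links.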
the different lengths of the optical path are obtained by means of mirrors, as shown in figure 9. figure 10 shows the signal used to drive the tx led (cyan trace) and the corresponding output signal (vout) from the rx circuit (yellow trace). the implemented system uses only one led as transmitter and another as receiver; however, there is no restriction on using an led cluster to transmit the information, and it can likewise be useful to use an led array to receive the signal. by using many diodes as tx, as well as rx, systems with better performance can be obtained. we used the simplest possible configuration, as the aim was to demonstrate the possibility of implementing an underwater led-to-led transmission using near-ultraviolet radiation. figure 6. seawater light transmission model. figure 7. rx and tx led driver circuit. figure 8. experimental setup used for the tests: (a) system working in laboratory; (b) system working outdoors.

5. system description

in any reliable communication system, data must be suitably modulated. modulation consists of varying one or more properties of a relatively high-frequency signal (the carrier). we used pwm modulation to implement our system. considering that high sound quality is not required for audio communication between divers, this type of modulation is more than enough to test the feasibility of wireless audio communication via a uv-a optical channel. obviously, higher-performance modulation schemes that are more robust to noise could be used. in pwm, the information signal (in our case the audio signal) modifies the time duration of the pulses of the carrier. this pulse signal turns the transmitter led on and off at the rate of the carrier's frequency. in other words, with the pwm technique we change the duty cycle of a square wave with constant frequency and amplitude, as shown in figure 11.
the average value of a pwm signal, period by period, can be expressed as:

𝑉average = (1/𝑇) (∫₀^{𝐷∙𝑇} 𝑉max d𝑡 + ∫_{𝐷∙𝑇}^{𝑇} 𝑉min d𝑡) .  (3)

if 𝑉min = 0, equation (3) can be simplified as:

𝑉average = 𝐷 ∙ 𝑉max .  (4)

equation (4) indicates that if the amplitude of the carrier is constant (along a period), the average value of the pwm signal is directly proportional to the duty cycle. if the duty cycle is proportional to the information to be transmitted, the information can be extracted through a simple averaging operation on the pwm signal; the average is easily obtained through suitable low-pass filtering. to verify the real possibility of realizing an audio communication, we coupled a modulator to the led transmitter and a demodulator to the led receiver [85]-[87]. the block diagram of our audio modulator and led driver is shown in figure 12. this led driver has a restricted baud rate, the main reason being the limited switching speed of silicon devices. a maximum data rate of 100 kbps can be achieved with this driver. in any case, this data rate is more than enough to implement an excellent audio connection. the pwm is achieved by means of an ne555 timer circuit [88] and an lt1011 comparator [89]. our circuit is powered at 6.5 v and produces a sawtooth waveform with a frequency of approximately 100 khz and a peak-to-peak voltage of about 4.3 v. the sawtooth waveform is applied to the non-inverting input of the comparator, while the audio signal acts as the reference voltage and is applied to the inverting input. to realize a duty cycle of 50%, the audio signal is offset to the average voltage of the sawtooth waveform (1/3 of the power-supply voltage). the comparator output is equal to the supply voltage when the sawtooth is at a higher voltage than the audio signal. figure 9. schematic of the water channel; the different optical paths are obtained with the help of mirrors. figure 10. the cyan line represents the signal used to drive the tx led.
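equations (3)-(4) and the sawtooth-comparator modulator described above can be checked with a short numeric sketch; the 4.3 v sawtooth amplitude is taken from the text, while the audio voltages are illustrative:

```python
def pwm_duty(audio_v, saw_pp_v=4.3):
    """duty cycle of the comparator output, which is high while the sawtooth
    (rising linearly from 0 to saw_pp_v each period) exceeds the audio level."""
    d = 1.0 - audio_v / saw_pp_v
    return min(1.0, max(0.0, d))

def pwm_average(duty, v_max, v_min=0.0):
    """period average of a pwm waveform, as in equations (3) and (4)."""
    return duty * v_max + (1.0 - duty) * v_min

# with v_min = 0 the average is d * v_max (equation (4)), so a low-pass
# (averaging) operation recovers a signal proportional to the modulating audio
for audio in (1.0, 4.3 / 3, 2.0):
    d = pwm_duty(audio)
    print(f"audio {audio:.2f} V -> duty {d:.3f} -> average {pwm_average(d, 4.3):.3f} V")
```

note that with this comparator polarity the duty cycle decreases as the audio voltage rises, so the recovered average is an inverted (but still linear) copy of the audio, which is harmless for voice transmission.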
the yellow line represents the corresponding rx output signal. (a) distance between transmitter and receiver 0.5 m; (b) distance 1.5 m; (c) distance 2.5 m. figure 11. pwm signal: square wave with constant frequency and amplitude and variable duty cycle. the led driver is based on the mic3289 integrated circuit [90], a pwm boost-switching regulator optimized for constant-current led driver applications. figure 13 shows the input and output signals in a pulse width modulation process. to recover the transmitted audio, a receiver unit, mainly composed of a photodetector and signal-conditioning devices, is used. the photodiode receives the transmitted optical signal and converts it into an electrical signal. the electrical signal is then fed into the recovery circuits and the pwm demodulator. figure 14 shows the block diagram of the receiver unit. the transimpedance amplifier (shown in figure 7) is coupled with a low-pass filter. this filter, with a cut-off frequency of about 1 mhz, is used to reduce the high-frequency noise present at the transimpedance amplifier output. the filter output signal has an amplitude that depends on many external parameters, such as the distance and misalignment between transmitter and receiver. to obtain a correct reconstruction of the pwm signal, a comparator with a variable threshold is used. by means of an integrator circuit, a voltage proportional to the average value of the amplitude of the received signal is obtained; this voltage is used as the threshold of the comparator. the integrator that was used provides an average signal at the output, which is about one third of the amplitude of the signal coming from the filter.
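the amplitude-independent reconstruction described above can be illustrated with a toy sketch in which the comparator threshold is derived from the average of the received waveform; the sample values are synthetic, not measured data:

```python
def reconstruct_pwm(samples):
    """slice the received waveform at a threshold proportional to its own
    average, so the decision level tracks the unknown received amplitude."""
    threshold = sum(samples) / len(samples)  # integrator output ~ signal average
    return [1 if s > threshold else 0 for s in samples]

# the same pwm pattern is recovered whether the optical signal is weak or strong
weak = [0.0, 0.0, 0.2, 0.2, 0.2, 0.0, 0.2, 0.0]
strong = [50 * s for s in weak]
print(reconstruct_pwm(weak))    # → [0, 0, 1, 1, 1, 0, 1, 0]
print(reconstruct_pwm(strong))  # → [0, 0, 1, 1, 1, 0, 1, 0]
```

this is the essential property of the variable-threshold comparator: the decision level scales with the received amplitude, so distance and misalignment change the signal level but not the reconstructed duty cycle.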
in this way, a reconstruction of the pwm signal is obtained that is practically independent of the amplitude of the signal received by the led used as photodiode. finally, the reconstructed pwm signal (a signal with constant amplitude and variable duty cycle) is demodulated by a low-pass filter with a cut-off frequency of 8 khz. figure 15 shows the schematic drawing of the receiving unit. a low-pass filter is sufficient to decode the audio information contained in the pwm signal: by choosing a low-pass filter with an appropriate cut-off frequency, it is possible to remove the high-frequency component of the pwm signal while keeping only the low-frequency signal (the audio information). our demodulator is a 4th-order butterworth low-pass filter, consisting of two non-identical 2nd-order low-pass filters. the human ear can perceive sounds with frequencies between 20 hz and 20 khz; in any case, the human voice produces sounds that are confined within 8 khz. therefore, for verbal communications, a low-pass filter with a cut-off frequency around 8 khz is sufficient. the 4th-order butterworth filter we use has a cut-off frequency of approximately 7.8 khz; its effect is therefore negligible for the sound frequencies emitted by the human voice. on the other hand, at 100 khz the filter has an attenuation of 83 db, which indicates that the high-frequency carrier is highly suppressed. figure 12. audio modulator and led driver. figure 13. input and output signals in a pulse width modulation process. figure 14. block diagram of the receiver unit. figure 15. schematic of the optical receiver circuit. figure 16 shows the frequency response of the filter used to retrieve the audio information from the pwm signal. preliminary tests were conducted to verify the real applicability of our system in underwater wireless voice transmission.
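the quoted stopband figure can be cross-checked against the ideal 4th-order butterworth magnitude response |h(f)|² = 1/(1 + (f/f_c)^(2n)); for f_c = 7.8 khz the ideal curve gives roughly 89 db at 100 khz, the same order as the 83 db quoted for the implemented circuit (a real two-stage realization need not match the ideal response exactly):

```python
import math

def butterworth_attenuation_db(f_hz, fc_hz, order):
    """attenuation of an ideal butterworth low-pass filter:
    |H(f)|^2 = 1 / (1 + (f/fc)^(2*order))."""
    return 10 * math.log10(1 + (f_hz / fc_hz) ** (2 * order))

fc = 7.8e3  # cut-off frequency from the text
print(f"{butterworth_attenuation_db(fc, fc, 4):.2f} dB at cut-off")  # 3.01 dB
print(f"{butterworth_attenuation_db(4e3, fc, 4):.3f} dB at a 4 kHz voice tone")
print(f"{butterworth_attenuation_db(100e3, fc, 4):.1f} dB at the 100 kHz carrier")
```

the calculation confirms the design intent stated above: the voice band passes essentially untouched while the pwm carrier is suppressed by many tens of db.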
first, a 4 khz tone was used to check the entire modulation, optical transmission, optical detection, and demodulation chain. in figure 17(a), trace 1 (yellow) is the sinusoidal 4 khz tone going into the transmitter, while trace 2 (blue) shows the pwm modulation used to drive the tx led current. in figure 17(b), trace 2 (blue) shows the reconstructed pwm signal in the receiving unit, and trace 1 (yellow) shows the reconstruction of the 4 khz sinusoid. the reconstruction of the sinusoid is more than acceptable, even if some "noise" related to the harmonics of the carrier signal is present. subsequently, the system was tested by transmitting an audio speech signal. with 2.5 m between tx and rx, the transmitted speech is perfectly understandable. figure 18 shows the audio tracks, spectrograms, and frequency analysis of the transmitted audio signal and of the message reconstructed downstream of the receiver.

6. conclusions

underwater optical wireless communication (uowc) has recently emerged as a unique opportunity. many studies are present in the literature; however, underwater optical communication via near-ultraviolet (uv-a) radiation has not been addressed. in this paper, we have shown that at short range, when broadband communication is not needed, it is possible to implement a uowc system that makes use of uv-a radiation. a uv underwater optical wireless audio transceiver was proposed for close-range wireless communication between divers. we have also verified that this system can be realized via an led-to-led connection. this makes the system simple, economical, light, compact and, above all, not energy-intensive. the study is mainly designed for military applications, where it is very important to have systems that cannot be intercepted, that are not easily identifiable and that have low energy consumption.
for these reasons, we have developed a system that uses non-visible optical radiation and led-to-led transmission, which is energy efficient. moreover, considering the simplicity and cost-effectiveness of the developed system, it can easily be used for communications between amateur divers. in our study, we addressed the problem of verifying the feasibility of transmitting a signal, with sufficient bandwidth for an audio signal, at a distance of about 2.5-3 meters. further studies and tests in the real marine environment are needed.

references

[1] h. sari, b. woodward, digital underwater voice communications. in: r. s. h. istepanian, m. stojanovic (eds) underwater acoustic digital signal processing and communication systems. springer, boston, ma. 2002. doi: 10.1007/978-1-4757-3617-5_4 [2] j. w. giles, i. n. bankman, underwater optical communications systems. part 2: basic design considerations. proceedings of milcom 2005 - ieee military communications conference, atlantic city, nj, usa, 17-20 october 2005. doi: 10.1109/milcom.2005.1605919 [3] h. haas, l. yin, y. wang, c. chen, what is lifi?, journal of lightwave technology 34(6) (2016), pp. 1533-1544. doi: 10.1109/jlt.2015.2510021 [4] m. uysal, c. capsoni, z. ghassemlooy, a. boucouvalas, e. udvary, optical wireless communications: an emerging technology. springer: new york, ny (usa), 2016. doi: 10.1007/978-3-319-30201-0 [5] m. z. chowdhury, m. shahjalal, m. hasan, y. m. jang, the role of optical wireless communication technologies in 5g/6g and iot solutions: prospects, directions, and challenges, applied sciences 9(20) (2019) art. no. 4367. doi: 10.3390/app9204367 [6] g. schirripa spagnolo, l. cozzella, f. leccese, s. sangiovanni, l. podestà, e. piuzzi, optical wireless communication and li-fi: a new infrastructure for wireless communication in saving energy era, in proceedings of 2020 ieee international workshop on metrology for industry 4.0 & iot, roma, italy, 2020, pp. 674-678.
doi: 10.1109/metroind4.0iot48571.2020.9138180 [7] h. g. pfeiffer, h. a. liebhafsky, the origins of beer's law, journal of chemical education 28(3) (1951), p. 123. doi: 10.1021/ed028p123 [8] h. r. gordon, can the lambert-beer law be applied to the diffuse attenuation coefficient of ocean water? limnology and oceanography 34(8) (1989), pp. 1389-1409. doi: 10.4319/lo.1989.34.8.1389

figure 16. our 4th-order butterworth low-pass filter frequency response. figure 17. (a) pulse-width modulation waveform. yellow trace: tone of 4 khz; blue trace: relative pwm modulation. (b) blue trace: pwm signal recovered in the rx unit; yellow trace: 4 khz tone present at the rx output. figure 18. (a) audio track, spectrograms, and frequency analysis of the transmitted audio. (b) audio track, spectrograms, and frequency analysis of the retrieved audio signal. figures obtained by audacity® software [91].

[9] n. g. jerlov, irradiance, marine optics vol. 14, ch. 10 (1968), pp. 115-132, elsevier oceanography series, amsterdam, netherlands. doi: 10.1016/s0422-9894(08)70929-2 [10] m. g. solonenko, c. d. mobley, inherent optical properties of jerlov water types, appl. opt. 54(17) (2015), pp. 5392-5401. doi: 10.1364/ao.54.005392 [11] n. g. jerlov, optical classification of ocean water. physical aspects of light in the sea, edited by john e. tyler, honolulu: university of hawaii press, 2021, pp. 45-50. doi: 10.1515/9780824884918-009 [12] j. t. o. kirk, light and photosynthesis in aquatic ecosystems, cambridge, uk: cambridge university press, 2013. doi: 10.1017/cbo9781139168212 [13] mit news.
advancing undersea optical communications. online [accessed 05 december 2021]. http://news.mit.edu/2018/advancing-undersea-optical-communications-0817 [14] c. m. g. gussen, p. s. r. diniz, m. l. r. campos, w. a. martins, f. m. costa, j. n. gois, a survey of underwater wireless communication technologies. j. of commun. and info. sys. 31(1) 2016, pp. 242-255. doi: 10.14209/jcis.2016.22 [15] h. kaushal, g. kaddoum, underwater optical wireless communication. ieee access 4 (2016), pp. 1518-1547. doi: 10.1109/access.2016.2552538 [16] c. shen, y. guo, h. m. oubei, t. k. ng, g. liu, k. h. park, k. t. ho, m. s. alouini, b. s. ooi, 20-meter underwater wireless optical communication link with 1.5 gbps data rate. optics express 24(22) (2016), pp. 25502-25509. doi: 10.1364/oe.24.025502 [17] j. xu, y. song, x. yu, a. lin, m. kong, j. han, n. deng, underwater wireless transmission of high-speed qam-ofdm signals using a compact red-light laser. optics express 24(8) (2016), pp. 8097-8109. doi: 10.1364/oe.24.008097 [18] c. wang, h.-y. yu, y.-j. zhu, a long distance underwater visible light communication system with single photon avalanche diode, ieee photonics journal 8(5) (2016) art. no. 7906311. doi: 10.1109/jphot.2016.2602330 [19] r. ji, s. wang, q. liu, w. lu, high-speed visible light communications: enabling technologies and state of the art. appl. sci. 8(4) (2018) art. no. 589. doi: 10.3390/app8040589 [20] h. m. oubei, c. shen, a. kammoun, e. zedini, k. h. park, x. sun, g. liu, c. h. kang, t. k. ng, m. s. alouini, light based underwater wireless communications. japanese journal of applied physics 57(8s2) (2018), 08pa06. doi: 10.7567/jjap.57.08pa06 [21] m. f. ali, d. n. k. jayakody, y. a. chursin, s. affes, s. dmitry, recent advances and future directions on underwater wireless communications, archives of computational methods in engineering 27 (2020), pp. 1379-1412. doi: 10.1007/s11831-019-09354-8 [22] g. d. roumelas, h. e. nistazakis, a. n. stassinakis, c. k. volos, a. d.
tsigopoulos, underwater optical wireless communications with chromatic dispersion and time jitter, computation 7(3) (2019) art. no. 35. doi: 10.3390/computation7030035 [23] n. saeed, a. celik, t. y. al-naffouri, m. s. alouini, underwater optical wireless communications, networking, and localization: a survey, ad hoc networks 94 (2019) art. no. 101935. doi: 10.1016/j.adhoc.2019.101935 [24] z. hong, q. yan, z. li, t. zhan, y. wang, photon-counting underwater optical wireless communication for reliable video transmission using joint source-channel coding based on distributed compressive sensing, sensors 19(5) (2019) art. no. 1042. doi: 10.3390/s19051042 [25] g. schirripa spagnolo, l. cozzella, f. leccese, underwater optical wireless communications: overview, sensors 20 (2020) art. no. 2261. doi: 10.3390/s20082261 [26] s. zhu, x. chen, x. liu, g. zhang, p. tian, recent progress in and perspectives of underwater wireless optical communication, progress in quantum electronics 73 (2020) art. no. 100274. doi: 10.1016/j.pquantelec.2020.100274 [27] g. schirripa spagnolo, l. cozzella, f. leccese, a brief survey on underwater optical wireless communications. in 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, october 5-7, 2020, pp. 79-84. online [accessed 05 december 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-15.pdf [28] j. sticklus, p. a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters. ieee journal of oceanic engineering 44(2) (2018), pp. 535-547. doi: 10.1109/joe.2018.2816838 [29] t. wiener, s. karp, the role of blue/green laser systems in strategic submarine communications, ieee transactions on communications 28(9) (1980), pp. 1602-1607. doi: 10.1109/tcom.1980.1094858 [30] l. w. e. wright, blue-green lasers for submarine communications, naval engineers journal 95(3) (1983), pp. 173-177. doi: 10.1111/j.1559-3584.1983.tb01635.x [31] r.
e. chatham, blue-green lasers for submarine communications, in conference on lasers and electro-optics, g. bjorklund, e. hinkley, p. moulton, and d. pinnow, eds., osa technical digest (optical society of america, 1986), paper wm1. doi: 10.1364/cleo.1986.wm1 [32] m. doniec, i. vasilescu, m. chitre, c. detweiler, m. hoffmann-kuhnt, d. rus, aquaoptical: a lightweight device for high-rate long-range underwater point-to-point communication, mar. technol. soc. j. 44 (2010), pp. 1-6. doi: 10.4031/mtsj.44.4.6 [33] i. c. rust, h. h. asada, a dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection. in proceedings of the ieee international conference on robotics and automation, saint paul, mn, usa, 14-18 may 2012; pp. 2445-2450. doi: 10.1109/icra.2012.6224718 [34] hydromea. online [accessed 05 december 2021]. https://www.hydromea.com [35] aquatec group. online [accessed 05 december 2021]. https://www.aquatecgroup.com [36] marine link. online [accessed 05 december 2021]. https://www.marine-link.com [37] sonardyne. online [accessed 05 december 2021]. https://www.sonardyne.com [38] b. cochenour, k. dunn, a. laux, l. mullen, experimental measurements of the magnitude and phase response of high-frequency modulated light underwater, appl. opt. 56(14) (2017), pp. 4019-4024. doi: 10.1364/ao.56.004019 [39] recreational scuba training council (rstc), common hand signals for recreational scuba diving, 2015. online [accessed 05 december 2021]. http://www.neadc.org/commonhandsignalsforscubadiving.pdf [40] underwater slate. online [accessed 05 december 2021] https://www.mares.com/en/underwater-slate [41] electronic underwater slate. online [accessed 05 december 2021]. https://duslate.com [42] o. rusoke-dierich, basic diving equipment. in: diving medicine 2018, pp. 15-19, springer, cham.
doi: 10.1007/978-3-319-73836-9_2 [43] neptune space predator t-divers full face mask-free shipping. ocean reef inc. 2510 island view way vista, california 92081 usa, 2021. online [accessed 05 december 2021].
https://diving.oceanreefgroup.com/wpcontent/uploads/sites/3/2018/10/neptune-spacepredator-t-divers-rel-1.3.pdf [44] ocean technology systems, wireless underwater communications. online [accessed 05 december 2021]. https://www.oceantechnologysystems.com/learningcenter/through-water-communications/ [45] divelink underwater communications ltd. online [accessed 05 december 2021]. http://www.divelink.net/purchase/aga/8-home [46] h. a. nowak, underwater led-based communication links. master’s thesis, naval postgraduate school monterey, ca, usa, april 21, 2020. online [accessed 05 december 2021]. https://apps.dtic.mil/sti/pdfs/ad1114685.pdf [47] t. hamza, m.-a. khalighi, s. bourennane, p. léon, j. opderbecke, investigation of solar noise impact on the performance of underwater wireless optical communication links, optics express 24(22) (2016), 25832. doi: 10.1364/oe.24.025832 [48] r. amin, b. l. richards, w. f. x. e. misa, j. c. taylor, d. r. miller, a. k. rollo, c. demarke, h. singh, g. c. young, j. childress, j. e. ossolinski, r. t. reardon, k. h. koyanagi, the modular optical underwater survey system. sensors 17 (2017), 2309. doi: 10.3390/s17102309 [49] n. e. farr, c. t. pontbriand, j. d. ware, l.-p. a. pelletier, nonvisible light underwater optical communications, in proceedings of ieee third underwater communications and networking conference (ucomms), lerici, italy, 2016, pp. 1-4. doi: 10.1109/ucomms.2016.7583454. [50] j. marshall, vision and lack of vision in the ocean, current biology 27(11) (2017), pp. r494-r502. doi: 10.1016/j.cub.2017.03.012 [51] x. sun, et al., 375-nm ultraviolet-laser based non-line-of-sight underwater optical communication, optics express 26(10) (2018), pp. 12870-12877. doi: 10.1364/oe.26.012870 [52] x. sun, x. et al., non-line-of-sight methodology for high-speed wireless optical communication in highly turbid water, opt. comm. 461 (2020) art. no. 125264. doi: 10.1016/j.optcom.2020.125264 [53] v. lange, r. 
hönl, led as transmitter and receiver in pof based bidirectional communication systems, in international ieee conference and workshop in óbuda on electrical and power engineering (cando-epe), 2018, pp. 000137-000142. doi: 10.1109/cando-epe.2018.8601162 [54] s. li, a. pandharipande, f. m. j. willems, adaptive visible light communication led receiver, in proceeding of 2017 ieee sensors, pp. 1-3. doi: 10.1109/icsens.2017.8234237 [55] l. matheus, l. pires, a. vieira, l. f. vieira, m. a. vieira, j. a. nacif, the internet of light: impact of colors in led‐to‐led visible light communication systems, internet technology letters 2(1) (2019), e78. doi: 10.1002/itl2.78 [56] g. schirripa spagnolo, f. leccese, m. leccisi, led as transmitter and receiver of light: a simple tool to demonstration photoelectric effect. crystals 9(10) (2019) art. no. 531. doi: 10.3390/cryst9100531 [57] g. schirripa spagnolo, a. postiglione, i. de angelis, simple equipment for teaching internal photoelectric effect, phys. educ. 55 (2020) art. no. 055011 doi: 10.1088/1361-6552/ab97bf [58] j. sticklus, p.a. hoeher, m. hieronymi, experimental characterization of single-color power leds used as photodetectors, sensors 20 (2020) art. no. 5200. doi: 10.3390/s20185200 [59] m. galal, w. p. ng, r. binns, a. abd el aziz, characterization of rgb leds as emitter and photodetector for led-to-led communication. in proceedings of the 12th ieee/iet international symposium on communication systems, networks and digital signal processing csndsp, porto, portugal, 20–22 july 2020. doi: 10.1109/csndsp49049.2020.9249617 [60] m. galal, w. p. ng, r. binns, a. abd el aziz, experimental characterization of rgb led transceiver in low-complexity led-to-led link, sensors 20 (2020) art. no. 5754. doi: 10.3390/s20205754 [61] g. schirripa spagnolo, f. leccese, led rail signals: full hardware realization of apparatus with independent intensity by temperature changes, electronics 10(11) (2021) art. no. 1291. 
doi: 10.3390/electronics10111291 [62] d. j. starling, b. burger, e. miller, j. zolnowski, j. ranalli, an actively quenched single photon detector with a light emitting diode, modern applied science 10(1) (2016) art. no.114. doi: 10.5539/mas.v10n1p114 [63] l. mccann, introducing students to single photon detection with a reverse-biased led in avalanche mode, in 2015 conference on laboratory instruction beyond the first year of college, july 2224, college park, md, usa. doi: 10.1119/bfy.2015.pr.016 [64] s. m. sze, kwok k. ng, physics of semiconductor devices, john wiley & sons, inc., hoboken, nj, usa, 2007. doi: 10.1002/0470068329 [65] g. ferrari, m., sampietro, wide bandwidth transimpedance amplifier for extremely high sensitivity continuous measurements, review of scientific instruments 78(9) (2007) art. no. 094703. doi: 10.1063/1.2778626 [66] f. zappa, s. tisa, a. tosi, s. cova, principles and features of single-photon avalanche diode arrays, sensors and actuators a: physical 140(1) (2007), pp. 103-112. doi: 10.1016/j.sna.2007.06.021 [67] j. kirdoda, d. c. s. dumas, k. kuzmenko, p. vines, z. m. greener, r. w. millar, m. m. mirza, g. s. buller, d. j. paul, geiger mode ge-on-si single-photon avalanche diode detectors. 2019 ieee 16th international conference on group iv photonics (gfp), singapore, 28-30 aug 2019 doi: 10.1109/group4.2019.8853918 [68] s. donati, t. tambosso, single-photon detectors: from traditional pmt to solid-state spad-based technology, ieee journal of selected topics in quantum electronics 20(6) (2014), pp. 204211. doi: 10.1109/jstqe.2014.2350836 [69] c. wang, h. y. yu, y. j. zhu, a long distance underwater visible light communication system with single photon avalanche diode, ieee photonics journal 8(5) (2016) art. no 7906311. doi: 10.1109/jphot.2016.2602330 [70] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. 
qaraqe, performance analysis of single-photon avalanche diode underwater vlc system using arq, ieee photonics journal 9(5) (2017), pp. 1-11. doi: 10.1109/jphot.2017.274300 [71] hadfield, r. single-photon detectors for optical quantum information applications, nature photon 3 (2009), pp. 696–705. doi: 10.1038/nphoton.2009.230 [72] d. chitnis, s. collins, a spad-based photon detecting system for optical communications, journal of lightwave technology 32(10) (2014), pp. 2028-2034. doi: 10.1109/jlt.2014.2316972 [73] e. sarbazi, m. safari, h. haas, statistical modelling of singlephoton avalanche diode receivers for optical wireless communications, ieee transactions on communications 66(9) (2018), pp. 4043-4058. doi: 10.1109/tcomm.2018.2822815 [74] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. qaraqe, performance analysis of single-photon avalanche diode https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://www.oceantechnologysystems.com/learning-center/through-water-communications/ https://www.oceantechnologysystems.com/learning-center/through-water-communications/ http://www.divelink.net/purchase/aga/8-home https://apps.dtic.mil/sti/pdfs/ad1114685.pdf https://doi.org/10.1364/oe.24.025832 https://doi.org/10.3390/s17102309 https://doi.org/10.1109/ucomms.2016.7583454 https://doi.org/10.1016/j.cub.2017.03.012 https://doi.org/10.1364/oe.26.012870 https://doi.org/10.1016/j.optcom.2020.125264 https://doi.org/10.1109/cando-epe.2018.8601162 https://doi.org/10.1109/icsens.2017.8234237 https://doi.org/10.1002/itl2.78 https://doi.org/10.3390/cryst9100531 https://doi.org/10.1088/1361-6552/ab97bf https://doi.org/10.3390/s20185200 
https://doi.org/10.1109/csndsp49049.2020.9249617 https://doi.org/10.3390/s20205754 https://doi.org/10.3390/electronics10111291 https://doi.org/10.5539/mas.v10n1p114 https://doi.org/10.1119/bfy.2015.pr.016 https://doi.org/10.1002/0470068329 https://doi.org/10.1063/1.2778626 https://doi.org/10.1016/j.sna.2007.06.021 https://doi.org/10.1109/group4.2019.8853918 https://doi.org/10.1109/jstqe.2014.2350836 https://doi.org/10.1109/jphot.2016.2602330 https://doi.org/10.1109/jphot.2017.274300 https://doi.org/10.1038/nphoton.2009.230 https://doi.org/10.1109/jlt.2014.2316972 https://doi.org/10.1109/tcomm.2018.2822815 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 89 underwater vlc system using arq, ieee photonics journal 9(5) (2017), pp. 1-11. doi: 10.1109/jphot.2017.2743007 [75] m. a. khalighi, t. hamza, s. bourennane, p. léon, j. opderbecke, underwater wireless optical communications using silicon photo-multipliers, ieee photonics journal 9(4) (2017), pp. 1-10. doi: 10.1109/jphot.2017.2726565 [76] j. sticklus, m. hieronymi, p. a. hoeher, effects and constraints of optical filtering on ambient light suppression in led-based underwater communications, sensors 18(11) (2018) art. no. 3710. doi: 10.3390/s18113710 [77] d. giustiniano, n. o. tippenhauer, s. mangold, low-complexity visible light networking with led-to-led communication, in proceedings of 2012 ifip wireless days, 2012 pp. 1-8. doi: 10.1109/wd.2012.6402861 [78] g. schirripa spagnolo, f. leccese, system to monitor ir radiation of led aircraft warning lights, 2021 ieee 8th international workshop on metrology for aerospace (metroaerospace), 2021, pp. 407-411. doi: 10.1109/metroaerospace51421.2021.9511723 [79] bivar uv5tz leds datasheet. online [accessed 05 december 2021] https://www.mouser.it/datasheet/2/50/biva_s_a0002780821_1 -2262009.pdf [80] l. k. gkoura, g. d. roumelas, h. e. nistazakis, h. g. sandalidis, a. vavoulas, a. d. tsigopoulos, g. s. 
tombras, underwater optical wireless communication systems: a concise review, turbulence modelling approaches current state, development prospects, applications, konstantin volkov, intechopen, july 26th 2017. doi: 10.5772/67915 [81] r. a. khalil, m. i. babar, n. saeed, t. jan, h. s. cho, effect of link misalignment in the optical-internet of underwater things, electronics 9(4) (2020) art. no. 646. doi: 10.3390/electronics9040646 [82] s. arnon, d. kedar, non-line-of-sight underwater optical wireless communication network, j. opt. soc. am. a 26(3) (2009), pp. 530–539. doi: 10.1364/josaa.26.000530 [83] s. arnon, underwater optical wireless communication network, optical engineering 49(1) (2010) art. no. 015001. doi: 10.1117/1.3280288 [84] linear technology lt1028 datasheet. online [accessed 05 december 2021] https://www.analog.com/media/en/technicaldocumentation/data-sheets/1028fd.pdf [85] a. ayala, underwater optical wireless audio transceiver, senior project electrical engineering department, california polytechnic state university, san luis obispo (2016). online [accessed 05 december 2021] https://digitalcommons.calpoly.edu/eesp/352/ [86] r. k. schreyer, g. j. sonek, an optical transmitter/receiver system for wireless voice communication, ieee transactions on education 35(2) (1992), pp. 138-143. doi: 10.1109/13.135579 [87] j. t. b. taufik, m. l. hossain, t. ahmed, development of an audio transmission system through an indoor visible light communication link, international journal of scientific and research publications 9(1) (2019) 432-438. doi: 10.29322/ijsrp.9.01.2019.p8556 [88] texas instruments, timing circuit ne555 datasheet. online [accessed 05 december 2021] https://www.ti.com/lit/ds/symlink/ne555.pdf [89] linear technology, voltage comparator lt1011. online [accessed 05 december 2021] https://www.analog.com/media/en/technicaldocumentation/data-sheets/lt1011.pdf [90] pwm boost-switching regulator. 
online [accessed 05 december 2021] https://ww1.microchip.com/downloads/en/devicedoc/mic32 89.pdf [91] audacity® software is copyright © 1999-2021 audacity team. web site: https://audacityteam.org/. it is free software distributed under the terms of the gnu general public license. the name audacity® is a registered trademark. https://doi.org/10.1109/jphot.2017.2743007 https://doi.org/10.1109/jphot.2017.2726565 https://doi.org/10.3390/s18113710 https://doi.org/10.1109/wd.2012.6402861 https://doi.org/10.1109/metroaerospace51421.2021.9511723 https://www.mouser.it/datasheet/2/50/biva_s_a0002780821_1-2262009.pdf https://www.mouser.it/datasheet/2/50/biva_s_a0002780821_1-2262009.pdf https://doi.org/10.5772/67915 https://doi.org/10.3390/electronics9040646 https://doi.org/10.1364/josaa.26.000530 https://doi.org/10.1117/1.3280288 https://www.analog.com/media/en/technical-documentation/data-sheets/1028fd.pdf https://www.analog.com/media/en/technical-documentation/data-sheets/1028fd.pdf https://digitalcommons.calpoly.edu/eesp/352/ https://doi.org/10.1109/13.135579 https://doi.org/10.29322/ijsrp.9.01.2019.p8556 https://www.ti.com/lit/ds/symlink/ne555.pdf https://www.analog.com/media/en/technical-documentation/data-sheets/lt1011.pdf https://www.analog.com/media/en/technical-documentation/data-sheets/lt1011.pdf https://ww1.microchip.com/downloads/en/devicedoc/mic3289.pdf https://ww1.microchip.com/downloads/en/devicedoc/mic3289.pdf https://audacityteam.org/ analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 5 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1 analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique ch. 
thejesh kumar1, amit bindaj karpurapu2, mayank mathur3

1 department of ece, raghu institute of technology, visakhapatnam-531162, andhra pradesh, india
2 department of ece, swarnabharathi institute of science and technology, khammam-507002, telangana, india
3 university of technology, jaipur 303903, rajasthan, india

section: research paper
keywords: fbmc-oqam; pts; leap frog optimization; papr; ber
citation: ch. thejesh kumar, amit bindaj karpurapu, mayank mathur, analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique, acta imeko, vol. 11, no. 1, article 29, march 2022, identifier: imeko-acta-11 (2022)-01-29
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form march 5, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: ch. thejesh kumar, e-mail: thejeshkumar.ch@gmail.com

1. introduction

orthogonal frequency division multiplexing (ofdm) emerged as a suitable multicarrier modulation technique for modern wireless communications [1]. it enables effective use of the available spectrum while minimizing inter-symbol interference, which is usually achieved by inserting a guard interval (gi) to cope with the multipath channel. adding the guard interval, however, reduces the spectral and power efficiency. to tackle this issue, the filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam) technique has been proposed. the technique improves spectral efficiency, although this comes at the cost of computational complexity.
further, it has a noticeable impact in reducing out-of-band power leakage [2]. fbmc-oqam is being evaluated as a contender for the 5g mobile communication standard because of its benefits, such as excellent time and frequency localisation. other significant features are the minimization of out-of-band emission and the resistance to phase noise [3]. in the fbmc-oqam approach [4], a prototype filter is used to relax the orthogonality of the subcarriers, which avoids the gi insertion; as a result, greater spectral efficiency can be achieved. given the recent developments in wireless communication, the demand for mobile and broadband service in 5g is ever-growing. in this line, the fbmc-oqam approach can be considered an effective successor to the ofdm technique [5].

abstract: because of its low adjacent channel leakage ratio, the filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam) has attracted the attention of many researchers in recent times. however, the problem of a high peak-to-average power ratio (papr) has a detrimental influence upon the fbmc system's energy efficiency. we study papr reduction of fbmc-oqam signals using partial transmit sequence (pts) methods in this work. the pts with shuffled frog leap phase optimization method is proposed in this paper to reduce the large papr, which is the major disadvantage of the fbmc-oqam system. according to the simulation findings, the suggested approach has a considerably superior papr performance, reducing the papr by about 2 db at a complementary cumulative distribution function of 10^-3 when compared to the traditional pts method, with a much lower computational complexity than previous approaches. the experimental parameters are measured, and the results are evaluated using the matlab tool.
a significant problem persisting in both ofdm and fbmc-oqam is the peak-to-average power ratio (papr) enhancement. because of this issue, greater non-linear signal distortion develops at the output of the power amplifier, resulting in a severe degradation of the bit-error rate (ber) performance. to address this issue, various papr reduction techniques for multicarrier modulation systems have been developed, including partial transmit sequence (pts) [6], selective mapping [7], and others. further, the manuscript is organized as follows: section 2 provides information on papr reduction techniques; the algorithm is explained in section 3, while the demonstration of the application is given in section 4; the simulated results, computations and discussions are presented in section 5; overall conclusions are given in section 6.

2. papr reduction methods

initially, the concept of papr reduction was considered an essential step in communications, especially in applications like radar and speech synthesis. typical radar systems have several limitations in terms of peak power; this is a very common issue in every communication system. similarly, the speech synthesis application suffers degradation as it is severely bounded by peak power: the larger the signal peaks, the harsher the synthesized machine voice sounds. these effects should be avoided, especially while handling human speech. when a multicarrier-based communication system is considered, the primary issue to deal with is the inherent papr. hence, papr reduction can be considered an essential part of improving the quality of the communication. several schemes and methods have been proposed in due course of time to handle the papr reduction process with ease. it is essential to mention that the papr should be dealt with through a non-complex procedure.
for an ofdm system, several papr reduction schemes have been proposed precisely following this rule. among the most successful schemes so far are clipping, tone injection and tone reservation. similarly, other techniques based on constellations and transmit sequences are also popular, and schemes such as selective mapping and block coding belong to the same list. a comparative study of the techniques mentioned above gives a deep insight into the potential application and strength of each method. one of the most common papr reduction methods is the pts method, which is characterized as a distortion-less approach. the minimal papr value in the pts technique is obtained by multiplying the data signal cluster of each data symbol by a suitable phase combination chosen from those available. while demodulating, the corresponding side information provided by the transmitter is essential. addressing this, a solution to the papr enhancement issue has been provided specifically for the fbmc-oqam system. it can outperform the standard pts technique in terms of papr performance while reducing the computational cost of optimizing the phase combination. the salient characteristics are that the larger papr is minimized using the overlapping-pts technique [8], [9] and that the phase combination is optimised utilizing the shuffled frog leap (sfl) phase optimization. as in the pts technique, the receiver needs the transmitter to provide the relevant side information used for data demodulation.

3. shuffled frog leap optimization

the sfl algorithm (sfla) belongs to the class of meta-heuristic algorithms. it performs a learned heuristic search for a solution; specifically, it performs combinatorial optimization employing several heuristic functions, which are non-complex mathematical models. the evolution of memes is the source of inspiration for the structure of the algorithm.
it mimics the way frogs exchange information among themselves. the sfla is a hybrid of deterministic and random methods. as in particle swarm optimization, the deterministic component enables the algorithm to effectively employ the response-surface information that drives the heuristic search. the random elements provide the search pattern's flexibility and resilience. as in any population-based algorithm, the initial step is to choose the population, i.e., the number of randomly generated frogs, which constitutes the whole swamp. the population is then classified into multiple parallel communities, referred to as memeplexes, and the independent evolution of each of them is encouraged so as to search the space in multiple directions. the frogs inside each memeplex are infected by the ideas of other frogs, resulting in memetic development. memetic evolution increases the quality of an individual's meme and improves the individual frog's performance toward a goal. to keep the infection process competitive, frogs with better memes (ideas) must contribute more to the creation of new ideas than frogs with weak ideas. using a triangular probability distribution to choose frogs gives superior ideas a competitive edge. during evolution, frogs change their memes, usually taking into account the best available information: this information is usually obtained from the memeplex, but in many cases it can be fetched from the whole population based on quality. this change is referred to as a jump, and the corresponding magnitude is its step size; it yields the new meme relative to the updated position of the frog. every frog is reintroduced to the community once it has improved its status, so the knowledge acquired from a position shift is instantly available for improvement. this rapid access to fresh information distinguishes this technique from the genetic algorithm.
in the latter, the whole population is usually affected, whereas in the sfla the scenario is different: every frog in the population is considered a potential solution, which depicts the concept of idea dissemination. this can be compared to a team of innovators jointly developing a concept, or visualised as an engineer who continuously works towards improving a design.

3.1. sfla steps

step 1. initialization: selection of m and n, where m refers to the number of memeplexes [9] and n denotes the number of frogs available in every memeplex; the overall swamp size is computed as f = m × n.
step 2. generate a virtual population.
step 3. sort based on the ranks of the frogs; the sorting should keep the best frog on top.
step 4. partition the frogs into memeplexes: partition array x into m memeplexes y_1, y_2, …, y_m, each containing n frogs, such that

y_k = { u(j)_k, f(j)_k | u(j)_k = u(k + m(j − 1)), f(j)_k = f(k + m(j − 1)), j = 1, …, n }, k = 1, …, m . (1)

for instance, if m = 3, rank 1 goes to memeplex-1, rank 2 is assigned to memeplex-2, rank 3 is awarded to memeplex-3, the 4th rank is designated to memeplex-1, and so on.
step 5. memetic evolution: evolve every memeplex y_k, k = 1, …, m, inside every memeplex, in accordance with the sfla.
step 6. shuffle all the memes: after a finite number of memetic evolution iterations, replace y_1, …, y_m into x, with x = {y_k, k = 1, …, m}; then sort x in decreasing order based on the respective performance index, which finally updates the position of the best frog p_x.
step 7. check whether convergence is achieved; terminate once convergence is witnessed, otherwise go to step 3 again.
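the steps above can be sketched in code. the following is a minimal python sketch of the sfla applied to a toy continuous minimization problem (the sphere function); the memeplex sizes, the jump rule and the iteration counts are illustrative assumptions for demonstration purposes, not the parameters used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sfla(fitness, dim, m=3, n=10, n_memetic=5, n_iter=50, lo=-5.0, hi=5.0):
    """minimal shuffled frog leaping sketch (continuous domain, minimization)."""
    F = m * n
    frogs = rng.uniform(lo, hi, size=(F, dim))               # step 2: virtual population
    for _ in range(n_iter):
        frogs = frogs[np.argsort([fitness(f) for f in frogs])]  # step 3: best frog on top
        best_global = frogs[0].copy()
        memeplexes = [frogs[k::m].copy() for k in range(m)]     # step 4: modular partition
        for Y in memeplexes:                                    # step 5: memetic evolution
            for _ in range(n_memetic):
                fb, fw = Y[0], Y[-1]                            # local best / worst frog
                cand = np.clip(fw + rng.uniform() * (fb - fw), lo, hi)
                if fitness(cand) >= fitness(fw):                # no gain: jump to global best
                    cand = np.clip(fw + rng.uniform() * (best_global - fw), lo, hi)
                if fitness(cand) >= fitness(fw):                # still none: random frog
                    cand = rng.uniform(lo, hi, size=dim)
                Y[-1] = cand
                Y[:] = Y[np.argsort([fitness(f) for f in Y])]   # re-rank the memeplex
        frogs = np.vstack(memeplexes)                           # step 6: shuffle memeplexes
    frogs = frogs[np.argsort([fitness(f) for f in frogs])]      # step 7: stop after n_iter
    return frogs[0]

best = sfla(lambda x: float(np.sum(x ** 2)), dim=2)
```

here termination is simply a fixed iteration budget; a stagnation check on the best meme (step 7) could replace it.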
the termination criterion can be the computational time, the number of iterations or the convergence, where convergence refers to the 'best memetic pattern' remaining unchanged; the evaluation of the objective function is carried out on this basis.

4. papr reduction using pts-sfl

4.1. papr

papr is the ratio of the peak power of a multicarrier signal to its average power. a rising papr value indicates that the system requires high-power amplifiers (hpas) operating close to their saturation region, which is critical since any additional increase in signal amplitude forces the hpa into its nonlinear region, resulting in signal distortion. the reduction methods in ofdm systems are applied separately to each symbol, and the papr of each symbol is computed independently, so the overlapping of data blocks does not need to be considered; this is not the case for an fbmc/oqam signal, where the overlapping of the symbol blocks must be taken into account to compute the papr correctly [10]. the papr of pure fbmc/oqam in db can be expressed as follows:

papr(db) = 10 log10 [ max_{(m − 1)T ≤ t ≤ mT} |s_m(t)|^2 / e[|s(t)|^2] ] , (2)

where the numerator represents the peak power within the duration of the m-th input data block, (m − 1)T ≤ t ≤ mT, and the denominator e[|s(t)|^2] represents the average power of the fbmc signal.

4.2. pts

in the conventional pts method, an input block is divided into V sub-blocks. each sub-block is zero-padded to create a vector of length N. an inverse fast fourier transform is performed separately for each sub-block, significantly increasing the computational complexity. adjacent, interleaving, and pseudo-random are some of the widely used partitioning schemes [11]. the adjacent method is used in the proposed technique, owing to its simplicity and effectiveness.
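as an illustration of equation (2), the following python sketch computes a per-block papr for a generic multicarrier signal. an ofdm-style per-block ifft is used as a stand-in signal, since the fbmc prototype filtering is beyond the scope of this example; all names and parameters (`n_sub`, `n_blocks`, the qpsk mapping) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(s_block, s_all):
    """per-block papr in db, following eq. (2): peak power within block m
    divided by the average power of the whole signal."""
    peak = np.max(np.abs(s_block) ** 2)
    avg = np.mean(np.abs(s_all) ** 2)
    return 10.0 * np.log10(peak / avg)

# stand-in multicarrier signal: qpsk symbols on n_sub subcarriers, one ifft per block
n_sub, n_blocks = 64, 100
sym = (rng.choice([-1.0, 1.0], (n_blocks, n_sub))
       + 1j * rng.choice([-1.0, 1.0], (n_blocks, n_sub))) / np.sqrt(2)
s = np.fft.ifft(sym, axis=1)                          # time-domain blocks, one per row
paprs = np.array([papr_db(s[m], s) for m in range(n_blocks)])
worst = paprs.max()                                   # highest-papr block drives hpa back-off
```

for a true fbmc/oqam signal, `s_block` would be the filtered, overlapped portion of the waveform on (m − 1)T ≤ t ≤ mT rather than an isolated ifft output.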
from figure 1, it can be observed that the resulting sequences are optimized by phase rotation factors b = [b_1, b_2, …, b_V], where b_v = e^(j 2 π v / W) and v = 0, 1, …, W − 1, to create symbol candidates referred to as partial transmit sequences. this operation results in a variation of the peak values of the signal candidates, and the one with the minimum papr is chosen for transmission. the total number of signal candidates depends on V and W, where W is the number of phase factors allowed for a single sub-block. the process of producing the optimum phase factor vector that reduces the papr is as follows [12]-[17]:

{b̃_1, b̃_2, …, b̃_V} = arg min_{[b_1, …, b_V]} max_{0 ≤ t ≤ T} | ∑_{v=1}^{V} s_m^v b_v |^2 . (3)

the transmit signal after reducing the papr can be expressed as follows:

s̃_m(t) = ∑_{v=1}^{V} s_m^v b̃_v . (4)

direct application of this method to each fbmc symbol separately is not effective: as the symbols overlap, the overlapping parts will exhibit peak regrowth, increasing the papr again [18].

4.3. pts-sfl

the proposed method yields excellent papr reduction, together with a lower computational complexity [19]-[22]. the sfl phase optimization is performed according to the algorithm discussed in section 3. the proposed block diagram is shown in figure 1.

5. results and discussion

following the application of the sfla to the problem of papr reduction, the simulation-based experiment has been carried out and the results are presented in this section. the experimental computations allow deep insights to be drawn on the concept of papr reduction; in particular, the introduction of the pts-sfl and its effect on the papr are the object of the study. the probabilistic approach and the computations based on the complementary cumulative distribution function (ccdf) are performed as part of the simulation, and it is evident that this parameter has a direct impact on the papr.
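the exhaustive phase-factor search of equation (3) can be sketched for a single symbol as follows. this is a minimal conventional-pts sketch on an ofdm-style block (not the overlapping-pts of the proposed method), with illustrative parameters V = 4 sub-blocks and W = 2 phase factors; the first phase factor is fixed to 1, a common convention that shrinks the search to W^(V−1) candidates.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def pts_min_papr(X, V=4, W=2):
    """conventional pts search of eq. (3): adjacent partition into V
    zero-padded sub-blocks, one ifft per sub-block, exhaustive search
    over the W**(V-1) phase-factor combinations (first factor fixed to 1)."""
    N = len(X)
    parts = np.zeros((V, N), dtype=complex)
    for v in range(V):                            # adjacent partitioning, zero-padded
        parts[v, v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
    s_v = np.fft.ifft(parts, axis=1)              # the partial transmit sequences s_m^v
    phases = np.exp(2j * np.pi * np.arange(W) / W)
    best_s, best_papr = None, np.inf
    for b in product(phases, repeat=V - 1):
        s = s_v[0] + sum(bv * sv for bv, sv in zip(b, s_v[1:]))   # eq. (4) candidate
        papr = np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
        if papr < best_papr:
            best_s, best_papr = s, papr
    return best_s, 10.0 * np.log10(best_papr)

X = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
s_orig = np.fft.ifft(X)
papr_orig = 10.0 * np.log10(np.max(np.abs(s_orig) ** 2) / np.mean(np.abs(s_orig) ** 2))
s_pts, papr_pts = pts_min_papr(X)                 # papr_pts <= papr_orig by construction
```

the all-ones phase vector is one of the candidates, so the selected papr can never exceed that of the unmodified signal; in the proposed method, this brute-force loop is what the sfl phase optimization replaces.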
the results demonstrate various papr reduction techniques and are compared with the proposed model.

figure 1. the proposed pts-sfl technique.

although the signals are optimized independently, in consideration of the overlapping signals the complexity of the algorithm can be reduced using fewer combinations of phase factors. herein, we explore the effects of different combinations of phase factors on the performance of the proposed technique. figure 2 shows the ccdf curves for the proposed technique without clipping and with four sub-blocks; it shows that the papr is around 8.1 db using the sfl optimization technique. further, 8 sub-blocks are considered and the papr results are evaluated, as shown in figure 3. by considering V = 8, the threshold papr obtained using pts is 7.5 db, with segment-based optimization the papr value is around 7.4 db, and with the ant colony optimization technique it is around 7.1 db. the papr value obtained using sfl optimization is 6.5 db, which is almost a 38 % reduction of the papr compared to the original fbmc model. figure 4 shows that for different V values the proposed sfla performs better than the other existing techniques. the ber performance is shown in figure 5: at an snr of 10 db, the ber is very low for the proposed method compared to the clipped signal, the original fbmc and the other two techniques, i.e., satin bowerbird optimization (sbo) and ant colony optimization (aco). the bit error rate values at an snr of 10 db are tabulated in table 1.

6. conclusions

in this paper, the technique of overlapping-pts has been formulated and demonstrated. the technique has been successfully simulated and applied to the problem of papr reduction in the considered system. the proposed technique features reduced computational complexity.
it is also reported that the phase optimization can be achieved in the proposed technique using the shuffled frog leap phase optimization. the analysis of the simulation reports yields an insight sufficient to consider that the technique achieves a dominating papr performance, suppressing the papr by at least 2 db; the suppression reported by the experimentation averages 38 %. the significant point is that the papr reduction does not degrade the ber performance.

figure 2. comparison of papr for different techniques.
figure 3. comparison of papr with respect to threshold values.
figure 4. comparison of papr for v = 4 and v = 8 using different methods.
figure 5. comparison of ber with respect to snr.

table 1. ber obtained using different techniques for an snr of 10 db (s: surge of information).
technique | ber
clipping at s = 90 | 10^-2
clipping at s = 30 | 10^-2.8
clipping at s = 15 | 10^-3
fbmc original | 10^-3.4
sbo | 10^-3.45
aco | 10^-3.6
proposed sfla | 10^-3.8

references
[1] t. hwang, c. yang, g. wu, s. li, g. y. li, ofdm and its wireless applications: a survey, ieee transactions on vehicular technology, 58(4) (2009), pp. 1673-1694. doi: 10.1109/tvt.2008.2004555
[2] h. q. wei, a. schmeink, comparison and evaluation between fbmc and ofdm systems, proc. of international itg workshop on smart antennas, ilmenau, germany, 3-5 march 2015, pp. 1-7.
[3] p. banelli, s. buzzi, g. colavolpe, a. modenini, f. rusek, a. ugolini, modulation formats and waveforms for 5g networks: who will be the heir of ofdm: an overview of alternative modulation schemes for improved spectral efficiency, ieee signal processing magazine, 31(6) (2014), pp. 80-93. doi: 10.1109/msp.2014.2337391
[4] s. frank, filterbank based multi carrier transmission (fbmc) – evolving ofdm, in: proc.
of european wireless conference, lucca, italy, 12-15 april 2010, pp. 1051-1058. doi: 10.1109/ew.2010.5483518
[5] f. schaich, t. wild, waveform contenders for 5g: ofdm vs fbmc vs ufmc, proc. of international symposium on communications, control and signal processing, athens, greece, 21-23 may 2014, pp. 457-460. doi: 10.1109/isccsp.2014.6877912
[6] y. a. jawhar, l. audah, m. a. taher, k. n. ramli, n. s. m. shah, m. musa, m. s. ahmed, a review of partial transmit sequence for papr reduction in the ofdm systems, ieee access, 7 (2019), pp. 18021-18041. doi: 10.1109/access.2019.2894527
[7] a. mohammed, s. hussein, r. amr, a. saleh, a novel iterative slm algorithm for papr reduction in 5g mobile fronthaul architecture, ieee photonics journal, 11(1) (2019), pp. 1-12. doi: 10.1109/jphot.2019.2894986
[8] n. shi, w. shouming, a partial transmit sequences based approach for the reduction of peak-to-average power ratio in fbmc system, in: proc. of wireless and optical communications conf., 2016, pp. 1-3. doi: 10.1109/wocc.2016.7506550
[9] m. eusuff, k. lansey, f. pasha, shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization, engineering optimization, 38(2) (2006), pp. 129-154. doi: 10.1080/03052150500384759
[10] n. al harthi, z. zhang, d. kim, s. choi, peak-to-average power ratio reduction method based on partial transmit sequence and discrete fourier transform spreading, electronics, 10 (2021), 642. doi: 10.3390/electronics10060642
[11] y. a. jawhar, l. audah, m. a. taher, k. n. ramli, n. s. m. shah, m. musa, m. s. ahmed, a review of partial transmit sequence for papr reduction in the ofdm systems, ieee access, 7 (2019), pp. 18021–18041. doi: 10.1109/access.2019.2894527
[12] h. wang, x. wang, l. xu, w. du, hybrid papr reduction scheme for fbmc/oqam systems based on multi data block pts and tr methods, ieee access, 4 (2016), pp. 4761–4768. doi: 10.1109/access.2016.2605008
[13] l. yang, r. s. chen, y. m. siu, k. k.
soo, papr reduction of an ofdm signal by use of pts with low computational complexity, ieee trans. broadcast., 52(1) (2006), pp. 83–86. doi: 10.1109/tbc.2005.856727
[14] p. boonsrimuang, k. mori, t. paungma, h. kobayashi, proposal of improved pts method for ofdm signal, 18th ieee int. symp. on personal, indoor and mobile radio communications, 2007, pp. 1–5. doi: 10.1109/pimrc.2007.4393989
[15] s. j. ku, c. l. wang, c. h. chen, a reduced-complexity pts-based papr reduction scheme for ofdm systems, ieee trans. wireless commun., 9(8) (2010), pp. 2455–2460. doi: 10.1109/twc.2010.062310.100191
[16] j. hou, j. ge, j. li, peak-to-average power ratio reduction of ofdm signals using pts scheme with low computational complexity, ieee trans. broadcast., 57(1) (2011), pp. 143–148. doi: 10.1109/tbc.2010.2079691
[17] l. j. cimini, n. r. sollenberger, peak-to-average power ratio reduction of an ofdm signal using partial transmit sequences with embedded side information, ieee global telecommunications conf., 2 (2000), pp. 746–750. doi: 10.1109/glocom.2000.891239
[18] c. c. feng, c. y. wang, c. y. lin, y. h. hung, protection and transmission of side information for peak-to-average power ratio reduction of an ofdm signal using partial transmit sequences, 58th ieee vehicular technology conf., 4 (2003), pp. 2461–2465. doi: 10.1109/vetecf.2003.1285976
[19] a. d. s. jayalath, c. tellambura, side information in par reduced pts-ofdm signals, 14th ieee int. symp. on personal, indoor and mobile radio communications, 1 (2003), pp. 226–230. doi: 10.1109/pimrc.2003.1264266
[20] p. kwiatkowski, digital-to-time converter for test equipment implemented using fpga dsp blocks, measurement, 177 (2021), pp. 1-11. doi: 10.1016/j.measurement.2021.109267
[21] a. kalapos, c. gór, r. moni, i. harmati, vision-based reinforcement learning for lane-tracking control, acta imeko, 10(3) (2021), pp. 7-14.
doi: 10.21014/acta_imeko.v10i3.1020
[22] e. balestrieri, l. de vito, f. picariello, s. rapuano, i. tudosa, a review of accurate phase measurement methods and instruments for sinewave signals, acta imeko, 9(2) (2020), pp. 52-58. doi: 10.21014/acta_imeko.v9i2.802

acta imeko issn: 2221-870x december 2020, volume 9, number 5, 379-382

application of butterworth high pass filter as an approximation of wood anderson seismometer frequency response to earthquake signal recording

hamidatul husna matondang1, endra joelianto2, sri widiyantoro3
1 bmkg, jakarta, indonesia, hamidatul.husna@bmkg.go.id
2 itb, bandung, indonesia, ejoel@tf.itb.ac.id
3 itb, bandung, indonesia, sriwid@geoph.itb.ac.id

abstract: a method for generating maximum amplitude and signal-to-noise ratio values by using a second-order high-pass butterworth filter in local seismic magnitude scale calculations is proposed.
the test data are signals from a local earthquake that occurred in the sunda strait on 8 april 2012. based on the experimental results, a second-order butterworth high-pass filter with an 8 hz cutoff frequency and a gain of 2200, used as an approach to simulating the frequency response of the wood anderson seismometer, can provide better maximum amplitude, snr and magnitude values than the simulated wood anderson frequency response.

keywords: high pass butterworth filter; wood anderson seismometer; frequency response simulation; instrument correction

1. introduction
initially, the calculation of the local seismic magnitude scale was based on the magnitude scale obtained from wood anderson seismometer recordings. however, the wood anderson seismometer is a short-period instrument with analog recording, in which the recording speed on paper is limited. as a result, earthquake recordings at stations close to the earthquake location experienced clipping. therefore, earthquake signals recorded by the wood anderson seismometer are no longer used in the calculation of local magnitude scales, and the study of local magnitude has shifted to the use of digital recordings. in order to produce digital recordings as if they came from a wood anderson seismometer, the digital recordings are processed to simulate the wood anderson seismometer frequency response. a recording from the simulated wood anderson frequency response provides more accurate data than the original instrument, because there is no clipping of the earthquake recording [1]. in practice, however, a butterworth filter is still applied to eliminate microseismic noise [2]. considerable research has been done on simulating seismometer responses. among them, the design of a simulation of the wood anderson seismometer frequency response from recursive filters with differential equations has been considered in [3].
the seismometer frequency response is considered as a second-order high-pass filter. when applied to an earthquake signal, a 2nd-order butterworth high-pass filter with a cutoff frequency of 2 hz can approximate the local magnitude to within 0.1 magnitude units [1]. the seismometer frequency response can also be approximated by a 5th-order butterworth low-pass filter cascaded with a 3rd-order butterworth high-pass filter [4]; these filters are used to correct short-period and broadband seismometer responses. pole and zero determination has been provided for recursive filters used to correct seismometer instruments [5]. differently from previous studies, this paper tests earthquake signals using a second-order butterworth high-pass filter with a cutoff frequency of 0.1 to 12 hz. the test data are local earthquake signals and noise signals recorded by four stations: cigelis jawa indonesia (cgji), lembang (lem), cisompet (cisi) and karang pucung jawa indonesia (kpji). the local earthquake signals are those related to the earthquake that occurred in the sunda strait on 8 april 2012 at 08:06:47.1 am utc+07:00, with a magnitude of 4.6, at longitude 105.859998 and latitude -6.94, with a depth of 100 km. the noise signals were taken in the morning at 03:00 am utc+07:00, at noon at 12:00 pm utc+07:00 and at night at 10:00 pm utc+07:00 [6]. from the test results, the gain and cutoff frequency of the filter are configured considering the characteristics of the recorded data. the filter is then applied to the signal.
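as an illustration of the filtering step described above — a minimal sketch, not the authors' code — the second-order butterworth high-pass with the paper's best-performing settings (8 hz cutoff, gain 2200) can be designed and applied with scipy. the 100 hz sampling rate and the synthetic two-tone trace are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 100.0     # sampling rate in hz (assumed for this sketch)
fc = 8.0       # cutoff frequency of the best-performing filter in the paper
gain = 2200.0  # static gain approximating the wood anderson magnification

# second-order butterworth high-pass, as second-order sections for stability
sos = butter(N=2, Wn=fc, btype="highpass", fs=fs, output="sos")

# synthetic test trace: low-frequency microseism + higher-frequency "event"
t = np.arange(0, 10, 1 / fs)
trace = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.sin(2 * np.pi * 15 * t)

filtered = gain * sosfilt(sos, trace)
print("max amplitude after filtering:", np.abs(filtered).max())
```

the high-pass strongly attenuates the 0.2 hz microseism while passing the 15 hz component almost unchanged, which is the behaviour the paper exploits to raise the snr before the amplitude reading.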
the results of the filter application are expected to produce higher amplitude and signal-to-noise ratio (snr) values than the simulated wood anderson seismometer frequency response. the butterworth-filtered results are used as an approach to simulate the earthquake recordings that would be obtained from the typical frequency response of a wood anderson seismometer.

2. proposed methodology
the design is based on the similarity of responses between the butterworth high-pass filter and the wood anderson seismometer, which is a second-order system [7]. the filter is applied to earthquake signals that have been used for local seismic magnitude scale calculations. from the filtered signal, the maximum amplitude in the s phase of the signal and the signal-to-noise ratio (snr) are obtained. for comparison, the maximum amplitude and snr of the filtered earthquake signals are compared with the maximum amplitude and snr of the earthquake signal on a simulated wood anderson seismogram. the research flow chart is shown in figure 1.

figure 1: flowchart of the method (data collection from the bmkg catalogue and waveform data, conversion from seed to sac, resampling, determination of the butterworth high-pass filter coefficients, application of the filter and of the frequency response design results, component rotation, and readings of maximum amplitude and snr for both the filtered signal and the reference)

3. analysis
the results of the butterworth filter test applied to bmkg data show an almost similar amplitude value for each earthquake recording station. the average maximum amplitude value for the four recording stations is 24,000,000 nm, produced by the butterworth filter with a gain of 3000 and a cutoff frequency of 0.1 hz, while the minimum amplitude is obtained with a gain of 1000 and a cutoff frequency around 0.1 hz. see figure 2.
figure 2: cut-off frequency, gain and maximum amplitude curves

it is also known that the frequency responses of the second-order butterworth filter with gains of 2000, 2200, and 2300 are similar to the frequency response of the wood anderson seismometer; however, it is the second-order butterworth filter frequency response with a gain of 2200 and an 8 hz cut-off frequency that is closest to the maximum amplitude of the wood anderson seismometer. the second-order butterworth filter frequency response with a gain of 1000 yields a magnitude smaller than the magnitude generated by the wood anderson seismometer, whereas the response with a gain of 3000 yields a magnitude greater than it. see figure 3.

table 1: signal to noise ratio (in %) of the simulated wood anderson seismometer and of the butterworth-filtered signal

station | simulated wood anderson (3:00 am / 12:00 pm / 10:00 pm) | butterworth filtered signal (3:00 am / 12:00 pm / 10:00 pm)
cgji | 6.7 / 3.9 / 1.8 | 5.1 / 6.2 / 6.6
lem | 4.9 / 1.5 / 1.1 | -1.3 / -1.3 / 1.3
cisi | 2.1 / 1.4 / 4.1 | 4.6 / 3.7 / 5.2
kpji | -9.8 / -7.8 / 2.9 | 3.0 / 2.8 / 3.1

the largest maximum amplitude value of each recording station is obtained from the butterworth filter simulation results, while the results from the wood anderson seismometer frequency response simulation have lower values. in the calculation of the signal-to-noise ratio, the butterworth filter frequency response simulation gives an increase in snr at each recording station, but at the lem and kpji stations the snr decreases in the morning. this is caused by noise due to human activities. at the kpji station, the percentage snr values decreased for the signal from the wood anderson seismometer frequency response simulation, whereas for the signal simulated using the butterworth filter they increased.
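the response comparison discussed above can be sketched numerically. the wood anderson parameters used here (natural period 0.8 s, damping 0.7, static magnification 2080) are commonly quoted nominal values and are assumptions of this sketch, not taken from the paper.

```python
import numpy as np
from scipy.signal import freqs, butter

# wood anderson torsion seismometer modelled as a second-order system.
# t0 = 0.8 s, damping h = 0.7, magnification 2080 (assumed nominal values)
w0 = 2 * np.pi / 0.8
h = 0.7
mag = 2080.0
b_wa = [mag, 0, 0]               # displacement response: mag * s^2
a_wa = [1, 2 * h * w0, w0 ** 2]  # s^2 + 2*h*w0*s + w0^2

# second-order butterworth high-pass with gain 2200 and 8 hz cutoff
b_bw, a_bw = butter(2, 2 * np.pi * 8.0, btype="highpass", analog=True)
b_bw = 2200.0 * np.asarray(b_bw)

w = 2 * np.pi * np.logspace(-1, 1.5, 200)  # 0.1 hz to ~31.6 hz
_, h_wa = freqs(b_wa, a_wa, worN=w)
_, h_bw = freqs(b_bw, a_bw, worN=w)

# both magnitude responses flatten toward their static gains at high frequency
print(abs(h_wa[-1]), abs(h_bw[-1]))
```

plotting |h_wa| against |h_bw| over this band reproduces the kind of comparison shown in figure 3, with the two curves converging at high frequency.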
however, this does not affect the results of the local magnitude calculations. the simulation of the butterworth filter frequency response produces local magnitude values close to the seismic magnitude of 4.6 released by bmkg [8], while the local magnitude value generated by the wood anderson frequency response simulation is 4.5. it can also be seen that the local magnitude of the signal from the butterworth filter simulation has a more consistent value.

figure 3: comparison of the wood anderson seismometer frequency response with butterworth filter cut-off frequencies of 7 to 9 hz

4. summary
in this paper, an earthquake signal processing algorithm was designed so that the signal is presented in physical displacement units. the second-order butterworth filter with a gain of 2200 and a cut-off frequency of 8 hz had a higher maximum amplitude value and snr than the simulated wood anderson seismometer. hence, the butterworth filter can be used as an approximation of the wood anderson seismometer. this research can be further developed so that it can be applied to the calculation of earthquake magnitude; however, it is necessary to calculate the local magnitude more accurately and to have a greater number of observation stations.

5. references
[1] havskov, j., and ottemoller, l., “routine data processing in earthquake seismology”, springer, pp. 14-160, 2010.
[2] ottemoller, l., and sargeant, s. t., “a local magnitude scale ml for the united kingdom”, bulletin of the seismological society of america, vol. 103, pp. 2884-2893, 2013.
[3] kanamori, h., maechling, p., and hauksson, e., “continuous monitoring of ground-motion parameters”, bulletin of the seismological society of america, vol. 89, pp. 311-316, 1999.
[4] haney, m. m., power, j., west, m., and michaels,
p., “causal instrument corrections for short period and broadband seismometers”, seismological research letters, vol. 83, pp. 834-845, 2012.
[5] anderson, j. f., and lees, j. m., “instrument corrections by time-domain deconvolution”, seismological research letters, vol. 85, pp. 197-201, 2014.
[6] http://202.90.198.92/arclink/query?sesskey
[7] hutton, l. k., and boore, d. m., “the ml scale in southern california”, bulletin of the seismological society of america, vol. 77, pp. 2074-2094, 1987.
[8] http://172.19.3.51

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-7

feasibility and performance analysis in 3d printing of artworks using laser scanning microprofilometry

sara mazzocato1, giacomo marchioro2, claudia daffara1
1 department of computer science, university of verona, str. le grazie 15, i-37134, verona, italy
2 department of cultures and civilisations, university of verona, v.le dell'università 4, i-37129, verona, italy

section: research paper
keywords: optical profilometry; 3d printing; conoscopic holography; surface analysis; non-destructive testing
citation: sara mazzocato, giacomo marchioro, claudia daffara, feasibility and performance analysis in 3d printing of artworks using laser scanning microprofilometry, acta imeko, vol. 11, no. 1, article 19, march 2022, identifier: imeko-acta-11 (2022)-01-19
section editor: fabio santaniello, university of trento, italy
received march 7, 2021; in final form march 22, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was partially supported by the scan4reco project, european union horizon 2020 framework programme for research and innovation, grant agreement no 665091, and by the temart project, por fesr 2014-2020.
corresponding author: sara mazzocato, e-mail: sara.mazzocato@univr.it

1. introduction
3d sensors and 3d printing technologies are gaining more and more attention in many fields, from industry to medicine to cultural heritage [1]-[5], also thanks to the fact that 3d printers are becoming easily accessible and have gradually reached better levels of accuracy. in the field of cultural heritage, these technologies arouse interest because of the possibility of reproducing artworks, or parts of them, for museum exhibition, restoration and conservation, haptic fruition, and other purposes [6]-[8]. clearly, the first step towards obtaining realistic and accurate 3d printed objects is the data acquisition process. in this context, non-contact 3d optical systems play an important role in such applications, from quality control to robotics, where a remote measurement is preferable and/or the object is fragile. their capability to measure surfaces in a contact-less and noninvasive way has made them key instruments especially in the field of cultural heritage, where the surface of the object, or the 3d shape at the various scales, is the central and essential part of the artwork itself [9]-[14] and integrated diagnostics is performed [15]. each surface has an intrinsic multiscale nature that can be represented as a superimposition of a large number of spatial wavelengths. the natural question that arises is whether (and the extent to which) the 3d printing process preserves this intrinsic multiscale nature of the surface. despite the rapid growth in the use of 3d printing technologies, the accuracy of 3d printed models has not been thoroughly investigated, and not all technologies have been studied [16]-[19].
abstract: we investigated optical scanning microprofilometry and conoscopic holography sensors as nondestructive testing and evaluation tools in archaeology for obtaining an accurate 3d printed reproduction of the data. the modular microprofilometer prototype allows a versatile acquisition of different materials and shapes, producing a high-quality dataset that enables surface modelling at micrometric scales, from which a "scientific" replica can be obtained through 3d printing technologies. as an exemplar case study, an archaeological amphora was acquired and 3d printed. in order to test the feasibility and the performance of the whole process chain, from acquisition to reproduction, we propose a statistical multiscale analysis of the surface signal of object and replica based on metrological parameters. this approach demonstrates that the accuracy of the 3d printing process preserves the range of spatial wavelengths that characterises the surface features of interest, within the capabilities of the technology. this work extends the usefulness of replicas from museum exhibition to scientific applications.

in this paper, we first present the prototype of the optical scanning microprofilometer, a scanning system that allows high-accuracy acquisition of the object [20]. the system is based on the interferometric method of conoscopic holography, which enables surface data to be acquired with micrometric resolution. thanks to the adaptability of the conoscopic holography sensors and the scanning setup, the system is able to measure irregular shapes, composite materials, and polychrome surfaces, thus leading to a multiscale and multimaterial approach in surface analysis [21]. here, scanning profilometry is used to acquire accurate data of an archaeological object, which are then processed in order to obtain a 3d printed replica of the object itself.
thus, we have come full circle by measuring the 3d printed object and comparing the surfaces on a metrological basis. after a brief presentation of the optical microprofilometer developed in our laboratory (section 2), we demonstrate the application in a real case study, processing the dataset acquired on an ancient amphora and obtaining a mesh file suitable for the 3d printer (section 3). in section 4 we propose a statistical and multiscale analysis of the original object and its 3d printed replica in order to assess the accuracy of the whole process, from the acquisition to the 3d printing. finally, in the concluding section, the main results are discussed.

2. optical microprofilometry
the optical microprofilometer is based on the conoscopic holography technique, which exploits interference patterns to obtain a pointwise measurement of distances with micrometric accuracy [22], [23]. in detail, a laser beam backscattered from the surface is split into ordinary and extraordinary rays after passing through an optically anisotropic crystal. the two beams share the same geometric path but have orthogonal polarization modes. the refractive index of the crystal depends on the angle of incidence of the beam and on its polarization state, determining a difference in the optical path lengths. once the two beams exit the crystal, they interfere with each other, and the pattern is recorded. the analysis of the generated pattern provides information about the distance of the sampled surface from the light source [5], [6]. conoscopic holography sensors enable contactless measurements at sub-millimeter spatial resolution with a precision down to a few micrometers on different kinds of materials, from specular reflective surfaces to diffusive ones. the developed system has a multi-probe module including a sensor for diffusive materials, a sensor for reflective materials, and a sensor for specular or transparent surfaces (all sensors by optimet).
interchangeable lenses allow acquisitions in different working ranges, i.e., the maximum scanned height, tailored to the scale of the object. this aspect is very important in profilometry applied to the varied 3d archaeological artefacts. the different combinations of sensors and lenses allow the analysis of reflective materials with a maximum accuracy of 1 µm and a working range of 1 mm, up to a working range of 9 mm with an accuracy of 4.5 µm, while for diffusive materials it is possible to achieve an accuracy of 2 µm with a working range limited to 0.6 mm, or to extend the working range up to 180 mm while maintaining a sub-millimeter accuracy of 100 µm. the surface dataset is obtained as a sequence of single-point measurements of the depth distance (z) in the x-y plane, with the probe triggered by a set of motorized stages moving the object plate, as can be seen in figure 1. the advantage is the creation of profiles and surface maps with a custom “field of view” that is only limited by the axis range. our scanning setup is composed of a motion system with linear micrometric axis stages (by pi) orthogonally mounted to form the acquisition grid (x, y axes). the axes have a maximum travel range of 300 mm and a step precision of 0.1 µm, with an accuracy of 1 µm over the entire length. the probe operates in pulse mode, receiving pulses from an external trigger sent by the scanning system: for each pulse the sensor measures the distance to the sampled object. as depicted in figure 1, the object surface is effectively measured within the maximum working range of the depth sensor. in order to improve the capability of scanning complex shapes, a third motorized axis controlling the position of the probe along the line of sight can be added.

3. surface acquisition and 3d printing
the case study concerns a portion of an archaeological amphora (figure 2). the object is acquired with the conopoint-3 sensor with a 75 mm lens.
this probe-lens coupling allows a working range of 18 mm with a stand-off distance of 70 mm and a laser spot size of 47 µm. we acquired a selected region of interest of 55.2 × 80.1 mm² with a scanning step (x-y sampling grid) of 100 µm. the interest in this case arises from the surface geometrical structure being a superimposition of a large number of scales. the usual approach in surface metrology is to separate the surface into three main scales: the roughness, which represents the irregularities at smaller wavelengths and exhibits a random nature often related to the behaviour of the material; the waviness, i.e. the more widely spaced variation, often associated with the traces left by the tools used for the creation of the object; and the form, i.e. the 3d shape of the object. obviously, even if these features are intrinsic characteristics of the object, the acquired surface signals depend on the bandwidth of the measurement: the distribution of peaks and valleys in a surface profile is influenced by the sampling step that, together with the nyquist criterion, determines the smallest spatial structure. surface signal decomposition is of particular interest in archaeology, as the waviness pattern can be related to the manufacturing tools. a polynomial fitting separates the form from the texture to obtain the so-called conditioned surface (with zero mean), while the roughness is separated from the waviness using a gaussian filter.

figure 1. scanning setup of the microprofilometer with the specification of the working range and an exemplification of a scan line.
figure 2. part of the archaeological amphora with the scanned region highlighted.

3.1. 3d printing
from the acquired data we can tailor the creation of a mesh to be used for reproducing the object with 3d printing technology.
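the form/texture/roughness decomposition described above can be sketched as follows — a minimal illustration, not the authors' code. the polynomial order, the 6 mm cutoff and the sigma-from-cutoff convention (the usual gaussian profile filter relation) are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(z, dx, order=2, cutoff=6.0):
    """split a height map z (sampled every dx mm) into form, waviness and
    roughness: least-squares polynomial surface for the form, then a
    gaussian filter with the given cutoff wavelength (mm) for the split."""
    ny, nx = z.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # design matrix with all monomials x^i * y^j, i + j <= order
    cols = [(x ** i) * (y ** j)
            for i in range(order + 1) for j in range(order + 1 - i)]
    a = np.stack([c.ravel() for c in cols], axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(a, z.ravel(), rcond=None)
    form = (a @ coef).reshape(z.shape)
    texture = z - form                 # conditioned surface (zero mean)
    # sigma in pixels from the cutoff wavelength (50 % transmission point)
    sigma = cutoff / dx * np.sqrt(np.log(2) / (2 * np.pi ** 2))
    waviness = gaussian_filter(texture, sigma)
    roughness = texture - waviness
    return form, waviness, roughness

# synthetic surface: quadratic form + 10 mm waviness + fine random roughness
rng = np.random.default_rng(1)
ny, nx, dx = 200, 200, 0.1
y, x = np.mgrid[0:ny, 0:nx] * dx
z = 0.02 * x**2 + 0.5 * x + 0.1 * np.sin(2 * np.pi * x / 10) \
    + 0.01 * rng.standard_normal((ny, nx))
form, wav, rough = decompose(z, dx)
print("texture mean ~ 0:", abs((z - form).mean()) < 1e-6)
```

by construction the three components sum back to the measured surface, so the same split can be applied identically to the original and the replica, as done in section 4.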
most profilometers do not store the data as point clouds or meshes, so they cannot be printed directly. therefore, we developed our own tools for building a mesh from the height maps collected by the microprofilometer. in detail, from the generated grid of equally spaced points we obtain the point cloud data, with the triplets (x, y, z) representing the vertices of the mesh; we create the faces, and hence a cuboid with the same dimensions as the scan, and we substitute the top face with the scan. eventually, we can programmatically create and export the mesh to an stl file (using, for example, the trimesh library [24]), the typical file format used by 3d printers, storing the triangulated surface. figure 3 shows the 3d printed object, created using stereolithography technology with a resolution of 50 µm in the z-axis of the printer. the resolution in the x-y printer plane is not specified, even though the laser spot size of 140 µm is [25]. in this kind of printing process, the object is printed layer by layer through the polymerization of a liquid resin exposed to laser light. each single layer in the x-y plane is created by raster-scanning the focused laser spot within the plane. the accuracy of the process also depends on the printing direction [26]. in this regard, it is worth noting that the best orientation of the model during the printing process should be the one that reflects the acquisition process. in fact, the optical microprofilometer is a line-of-sight technique that provides a height map as the result of the measurement and, like most conventional surface measurement techniques, it is not able to acquire re-entrant features (like a “c-shape” along the scan directions).

4. surface analysis and comparison
as mentioned above, a surface can be represented as a superimposition of different frequencies; adding a process to the chain can alter the resulting surface.
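the height-map-to-mesh step described in section 3.1 can be sketched without external mesh libraries — a simplified stand-in for the authors' trimesh-based tooling: each grid cell is split into two triangles and written as an ascii stl file. only the top face of the scan is built here; closing it into a watertight cuboid, as done in the paper, is omitted for brevity.

```python
import numpy as np

def heightmap_to_stl(z, dx, path):
    """triangulate a grid of heights z (ny x nx, spacing dx) and write it
    as an ascii stl file, the format consumed by 3d printing toolchains."""
    ny, nx = z.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    verts = np.stack([xx * dx, yy * dx, z], axis=-1)
    with open(path, "w") as f:
        f.write("solid scan\n")
        for i in range(ny - 1):
            for j in range(nx - 1):
                quad = (verts[i, j], verts[i, j + 1],
                        verts[i + 1, j + 1], verts[i + 1, j])
                # split each grid cell into two triangles
                for tri in ((quad[0], quad[1], quad[2]),
                            (quad[0], quad[2], quad[3])):
                    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                    n = n / (np.linalg.norm(n) or 1.0)
                    f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
                    f.write("    outer loop\n")
                    for v in tri:
                        f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid scan\n")

z = np.random.default_rng(2).random((20, 20))
heightmap_to_stl(z, dx=0.1, path="scan.stl")
```

a (ny, nx) grid yields 2·(ny−1)·(nx−1) facets; for production use, a library such as trimesh additionally handles binary stl and watertightness checks.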
in particular, the 3d printing process could be seen as a filter that cuts, or poorly passes, some components. therefore, the question of interest, which naturally arises after printing, regards the level of accuracy of the printing process in the creation of the replica. to answer this question, the 3d printed object was scanned with the microprofilometer under the same measurement conditions, i.e. with the same probe, lens, and scan velocity. the comparative analysis of the surfaces was performed on the basis of surface metrology parameters [27]-[30]. figure 4 shows the comparison between the amplitude parameters of the original object and the 3d printed replica. in particular, the root mean square roughness sq, the average roughness sa, the skewness ssk and the kurtosis sku are evaluated. it is worth noting that the sku parameter, representing the sharpness of the surface peaks, deviates from 0.7 to 175.4, while the averaged texture amplitude sa varies from 112 µm to 110 µm, sq changes from 134 µm to 142 µm and the skewness ssk varies from 0 to -5. in order to understand the differences between the surface datasets of the real object and the replica, the same signal decomposition is carried out: the form is separated from the texture through a second-order polynomial fitting, and then the roughness is highlighted through a gaussian filter, keeping the same parameters in both cases. figure 5 and figure 6 show the texture of the original and 3d printed objects. once the decomposition is done, the texture and the roughness are analysed using a multiscale approach [31]. this way, the texture and roughness features are inspected as the observation scale varies.
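the amplitude parameters compared above follow the usual areal definitions and can be computed directly — a minimal sketch on a synthetic surface, not the paper's dataset:

```python
import numpy as np

def amplitude_parameters(z):
    """areal amplitude parameters of a conditioned (zero-mean) surface:
    sq (rms), sa (mean absolute), ssk (skewness), sku (kurtosis)."""
    h = z - z.mean()
    sq = np.sqrt(np.mean(h ** 2))
    sa = np.mean(np.abs(h))
    ssk = np.mean(h ** 3) / sq ** 3
    sku = np.mean(h ** 4) / sq ** 4
    return sq, sa, ssk, sku

# on a gaussian surface, ssk ~ 0 and sku ~ 3; sharp isolated peaks or deep
# valleys drive sku up and push ssk away from zero, as observed for the
# printed replica in the text
z = np.random.default_rng(3).standard_normal((500, 500))
sq, sa, ssk, sku = amplitude_parameters(z)
print(f"sq={sq:.3f}  sa={sa:.3f}  ssk={ssk:.3f}  sku={sku:.3f}")
```

this makes the reported shift interpretable: the large sku and negative ssk of the replica point to a height distribution dominated by a few deep, narrow valleys rather than a symmetric texture.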
in order to gain insight into the average behaviour of the surface, the amplitude parameter sq and the extreme-value parameters sz (maximum height of the peaks), sv (maximum depth of the valleys) and st (maximum peak-to-valley) are calculated as a function of the evaluation length. in particular, the multiscale analysis is developed as follows: the surface is divided into square subregions of a specific side, i.e. the evaluation length, skipping a margin around the surface to avoid artefacts due to edge effects. in each subregion the aforementioned parameters are calculated and are then averaged over the patches. plotting the texture parameters against the evaluation length gives an idea of their variation with the observation scale. here, the evaluation length starts from 2 mm (400 sampled points in each subregion) and increases by 2 mm each step. from figure 7 it emerges that the root mean square sq of the original and the 3d printed object follows a similar behaviour with the length scale. as expected, sq first increases with the evaluation length and then converges to a stable value.

figure 3. 3d printed region of the amphora.
figure 4. comparison of amplitude parameters between original and 3d printed object: root mean square deviation (sq), mean absolute deviation (sa), skewness (ssk) and kurtosis (sku) of the surface height distribution.

the figures below show the variation of the extreme parameters with the evaluation length. as can be seen in figure 8, the maximum peak-to-valley distance st follows a similar evolution, but it is interesting to note that the values for the 3d printed object are lower than those for the original object. in more detail, while the maximum height of the peaks sz (figure 9) has a similar trend, the maximum depth of the valleys sv (figure 10) differs.
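the patch-based multiscale evaluation described above can be sketched as follows — an illustrative implementation on a synthetic surface, with the margin handling simplified to skipping incomplete border patches:

```python
import numpy as np

def multiscale_params(z, dx, lengths_mm):
    """average sq, sz, sv and st over square patches whose side is the
    evaluation length; returns one row (sq, sz, sv, st) per length."""
    rows = []
    for length in lengths_mm:
        side = int(round(length / dx))          # patch side in samples
        ny, nx = z.shape
        sq, sz, sv, st = [], [], [], []
        for i in range(0, ny - side + 1, side):
            for j in range(0, nx - side + 1, side):
                patch = z[i:i + side, j:j + side]
                h = patch - patch.mean()
                sq.append(np.sqrt(np.mean(h ** 2)))
                sz.append(h.max())
                sv.append(-h.min())
                st.append(h.max() - h.min())
        rows.append((np.mean(sq), np.mean(sz), np.mean(sv), np.mean(st)))
    return np.array(rows)

z = np.random.default_rng(4).standard_normal((600, 600))
dx = 0.1                                        # mm, as in the scan grid
res = multiscale_params(z, dx, lengths_mm=[2, 4, 6, 8])
# sq grows with the evaluation length and converges, as in figure 7
print(res[:, 0])
```

note that st = sz + sv holds per patch, so comparing the three curves separately (as in figures 8-10) isolates whether a discrepancy comes from the peaks or from the valleys.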
Besides resolution limits, a possible explanation is the combined effect of gravity and of the washing that is performed after the 3D printing process. In fact, during the printing process the object grew upside down, and after the object was completed it was subjected to washing and post-curing. The washing has the goal of removing uncured resin from the surface of printed parts by simultaneously soaking and moving them in a solvent, but it is possible that some droplets of liquid remain trapped within the valleys. Another comparative analysis was performed, in terms of frequency content, through the power spectral density (PSD). The PSD is calculated as the average of the one-dimensional PSDs of the surface height profiles along the scan direction. In order to have a double comparison, the PSD is also calculated on a second scan of the 3D printed object rotated by 180°. The PSD is plotted versus the wavevector, defined as qi = 2π/λi, where λi is the wavelength of the surface component. Figure 11 shows the power spectrum of the whole surfaces, while Figure 12 shows the power spectrum of the texture, i.e. the previous surfaces once the form is removed. As can be seen, the power spectrum of the total signal shows that the information is preserved at the lower frequencies, with a variation in the behaviour at the higher frequencies, for q ≈ 15 mm⁻¹ (λ ≈ 0.42 mm).
Figure 5. Texture of the original object acquired with a scan step of 100 µm and an accuracy along the z-axis of 10 µm.
Figure 6. Texture of the 3D printed object acquired with a scan step of 100 µm and an accuracy along the z-axis of 10 µm.
Figure 7. Variation of the Sq of the textures calculated at different scan sizes.
Figure 8. Variation of the peak-to-valley distance of the texture (St) calculated at different scan sizes.
Figure 9. Variation of the maximum peak height of the texture (Sz) calculated at different scan sizes.
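The averaged one-dimensional PSD described above (one spectrum per scan line, averaged over the profiles, plotted against q = 2π/λ) can be sketched with numpy's FFT. A minimal illustration under our own assumptions about normalisation, not the authors' code:

```python
import numpy as np

def averaged_psd(z, step_mm):
    """1D PSD of each height profile along the scan direction, averaged over rows.

    Returns the wavevector q = 2*pi/lambda (1/mm) and the averaged PSD.
    """
    z = np.asarray(z, dtype=float)
    z = z - z.mean(axis=1, keepdims=True)       # remove each profile's mean level
    n = z.shape[1]
    spec = np.fft.rfft(z, axis=1)               # one-sided spectrum per profile
    psd = (np.abs(spec) ** 2).mean(axis=0) * step_mm / n   # average over profiles
    freq = np.fft.rfftfreq(n, d=step_mm)        # spatial frequency 1/lambda (1/mm)
    return 2.0 * np.pi * freq, psd
```

With this convention the crossover reported above, q ≈ 15 mm⁻¹, indeed corresponds to λ = 2π/q ≈ 0.42 mm.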
This can be better analysed in the texture signal (Figure 12), where the power spectra of the real and printed objects intersect. Up to that point the signal of the real object is higher, while from that point onward the PSD of the printed object is no longer informative. The roughness analysis is done once the roughness signal is decomposed from the texture. The critical issue related to the use of the Gaussian filter is the choice of the cut-off wavelength. The figure below shows the variation of the Sq of the roughness signal of the original and the printed object versus the cut-off wavelength. As can be seen, the trend follows a similar behaviour, with a lower Sq in the replica due to the smoothing of the 3D printing process (Figure 13a). As the cut-off becomes longer, Sq increases, including also the waviness contribution (Figure 13b). Figure 14 shows an example of the texture decomposition with a cut-off of 6 mm. It emerges that the waviness signal is well preserved, while the roughness signal highlights the smoothing effect described above.
5. Conclusions
In this work we presented the application of scanning optical microprofilometry for the acquisition, the analysis, and the 3D printed reproduction of archaeological objects. Thanks to its capability to perform contact-less and full-field measurements with micrometric precision, this technique is a powerful tool for non-destructive testing and evaluation in cultural heritage. The versatile instrument configuration, based on scanning stages and conoscopic holography sensors, allows setting optimal sampling parameters for different needs, enabling multiscale and multi-material surface measurements. The field of view of the x-y scanning (up to 30 × 30 cm²) was designed for acquiring microsurface data on medium-sized objects, within a z depth range that is determined by the lens (e.g. the range can vary from 0.6 mm to 180 mm while maintaining the same probe), thus allowing the surface modelling of "flat" objects like ancient coins, or of a significant part of a 3D object, in a single measuring session. Reaching a micrometric resolution is of fundamental importance in the acquisition and in the analysis of the material surface. In this study we focused on the measurement of an archaeological object representing a complex and exemplary case study, with sharp evidence of the three main surface signal components: shape, waviness, and roughness. We demonstrated how we obtained a high-fidelity and high-resolution 3D printed replica of the object starting from the microprofilometry dataset. Moreover, we have come full circle by acquiring the printed object and assessing how the process preserves the surface signals. In order to perform this task, a multiscale analysis of the original and 3D printed objects is proposed, inspecting the in-band roughness through the power spectrum as well as the variation of the areal amplitude parameters (Sa, Sq, Ssk, and Sku) with the observation length. It was demonstrated that the printing process is accurate, even if the replica is affected by less marked valleys and by a loss of signal in the high-frequency surface components, thus resulting in a smoothing effect.
Figure 10. Variation of the maximum valley depth of the texture (Sv) calculated at different scan sizes.
Figure 11. Comparison between the average power spectra of the total surfaces of the original object and the 3D printed object.
Figure 12. Comparison between the average power spectra of the texture signals of the original object and the 3D printed object.
Figure 13. Variation of the Sq with the cut-off wavelength of the Gaussian filter: the top plot a) represents the roughness signal extracted with a cut-off step of 100 µm, while the second plot b) shows the increase in Sq when varying the cut-off from 1 mm to 45 mm.
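The waviness/roughness separation used throughout is a Gaussian low-pass with 50 % amplitude transmission at the cut-off wavelength, the roughness being the residual. A minimal FFT-based sketch under our own assumptions (function name and grid conventions are ours, not the authors'):

```python
import numpy as np

def decompose(z, step_mm, cutoff_mm):
    """Split a levelled texture into waviness (Gaussian low-pass) and
    roughness (the residual). The filter transmits 50 % of the amplitude
    at the cut-off wavelength, as the standard areal Gaussian filter does."""
    z = np.asarray(z, dtype=float)
    ny, nx = z.shape
    fy = np.fft.fftfreq(ny, d=step_mm)[:, None]   # spatial frequency 1/lambda (1/mm)
    fx = np.fft.fftfreq(nx, d=step_mm)[None, :]
    # Gaussian amplitude transmission: exp(-ln2 * (cutoff/lambda)^2) = 0.5 at lambda = cutoff
    H = np.exp(-np.log(2) * (cutoff_mm ** 2) * (fx ** 2 + fy ** 2))
    waviness = np.fft.ifft2(np.fft.fft2(z) * H).real
    return waviness, z - waviness
```

Because waviness and roughness sum exactly to the texture, lengthening the cut-off simply moves energy from the waviness band into the roughness band, which is why Sq of the roughness grows with the cut-off as reported above.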
Using a scanning step of 100 µm in the microprofilometer, with the conoscopic probe set to a laser spot of 47 µm and a depth accuracy of 10 µm, and using a printing resolution of 50 µm (along the printer z-axis), it is shown that, besides the form of the archaeological artefact, the surface texture is accurately acquired and 3D printed without artefacts affecting the waviness and the roughness appearance. The focus of this work was the feasibility and performance analysis of 3D printing of artworks using laser scanning microprofilometry. We have turned our attention to a 3D replica obtained with stereolithography technology and a photopolymer resin. However, the method could be used to compare the performance of different kinds of 3D printing technologies, as well as various orientation settings of the model during the printing process or several printer-resin combinations.
Acknowledgement
The work was partly supported by the Scan4Reco project, funded by the European Union Horizon 2020 Framework Programme for Research and Innovation under grant agreement no. 665091, and partly by the TEMART project, POR FESR 2014-2020.
References
[1] G. Sansoni, M. Trebeschi, F. Docchio, State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation, Sensors, vol. 9, no. 1, Jan. 2009, pp. 568-601. DOI: 10.3390/s90100568
[2] C. Balletti, M. Ballarin, F. Guerra, 3D printing: state of the art and future perspectives, J. Cult. Herit., vol. 26, Jul. 2017, pp. 172-182. DOI: 10.1016/j.culher.2017.02.010
[3] Y. Y. C. Choong, H. W. Tan, D. C. Patel, W. T. N. Choong, C.-H. Chen, H. Y. Low, M. J. Tan, C. D. Patel, C. K. Chua, The global rise of 3D printing during the COVID-19 pandemic, Nat. Rev. Mater., vol. 5, no. 9, Sep. 2020, pp. 637-639. DOI: 10.1038/s41578-020-00234-3
Figure 14. Surface signal decomposition: waviness with a cut-off of 6 mm and roughness with a cut-off of 600 µm.
[4] V. M. Vaz, L. Kumar, 3D printing as a promising tool in personalized medicine, AAPS PharmSciTech, vol. 22, no. 1, Jan. 2021, p. 49. DOI: 10.1208/s12249-020-01905-8
[5] T. Singh, S. Kumar, S. Sehgal, 3D printing of engineering materials: a state of the art review, Mater. Today Proc., vol. 28, 2020, pp. 1927-1931. DOI: 10.1016/j.matpr.2020.05.334
[6] R. Scopigno, P. Cignoni, N. Pietroni, M. Callieri, M. Dellepiane, Digital fabrication techniques for cultural heritage: a survey, Comput. Graph. Forum, vol. 36, no. 1, Jan. 2017, pp. 6-21. DOI: 10.1111/cgf.12781
[7] C. Balletti, M. Ballarin, An application of integrated 3D technologies for replicas in cultural heritage, ISPRS Int. J. Geo-Information, vol. 8, no. 6, Jun. 2019, p. 285. DOI: 10.3390/ijgi8060285
[8] S. Mazzocato, C. Daffara, Experiencing the untouchable: a method for scientific exploration and haptic fruition of artworks microsurface based on optical scanning profilometry, Sensors, vol. 21, no. 13, Jun. 2021, p. 4311. DOI: 10.3390/s21134311
[9] G. Schirripa Spagnolo, L. Cozzella, F. Leccese, Fringe projection profilometry for recovering 2.5D shape of ancient coins, Acta IMEKO, vol. 10, no. 1, Mar. 2021, p. 142. DOI: 10.21014/acta_imeko.v10i1.872
[10] P. Dondi, L. Lombardi, M. Malagodi, M. Licchelli, 3D modelling and measurements of historical violins, Acta IMEKO, vol. 6, no. 3, Sep. 2017, p. 29. DOI: 10.21014/acta_imeko.v6i3.455
[11] R. Fontana et al., Three-dimensional modelling of statues: the Minerva of Arezzo, J. Cult. Herit., vol. 3, no. 4, Oct. 2002, pp. 325-331. DOI: 10.1016/s1296-2074(02)01242-6
[12] F. Remondino, Heritage recording and 3D modeling with photogrammetry and 3D scanning, Remote Sens., vol. 3, no. 6, May 2011, pp. 1104-1138. DOI: 10.3390/rs3061104
[13] A. Mironova, F. Robache, R. Deltombe, R. Guibert, L.
Nys, M. Bigerelle, Digital cultural heritage preservation in art painting: a surface roughness approach to the brush strokes, Sensors, vol. 20, no. 21, Nov. 2020, p. 6269. DOI: 10.3390/s20216269
[14] M. Callieri et al., Alchemy in 3D: a digitization for a journey through matter, in 2015 Digital Heritage, Sep. 2015, pp. 223-230. DOI: 10.1109/digitalheritage.2015.7413875
[15] J. Striova, L. Pezzati, E. Pampaloni, R. Fontana, Synchronized hardware-registered VIS-NIR imaging spectroscopy and 3D sensing on a fresco by Botticelli, Sensors, vol. 21, no. 4, Feb. 2021, p. 1287. DOI: 10.3390/s21041287
[16] E. George, P. Liacouras, F. J. Rybicki, D. Mitsouras, Measuring and establishing the accuracy and reproducibility of 3D printed medical models, RadioGraphics, vol. 37, no. 5, Sep. 2017, pp. 1424-1450. DOI: 10.1148/rg.2017160165
[17] E. Kluska, P. Gruda, N. Majca-Nowak, The accuracy and the printing resolution comparison of different 3D printing technologies, Trans. Aerosp. Res., vol. 2018, no. 3, Sep. 2018, pp. 69-86. DOI: 10.2478/tar-2018-0023
[18] R. M. Carew, R. M. Morgan, C. Rando, A preliminary investigation into the accuracy of 3D modeling and 3D printing in forensic anthropology evidence reconstruction, J. Forensic Sci., vol. 64, no. 2, Mar. 2019, pp. 342-352. DOI: 10.1111/1556-4029.13917
[19] V. Bonora, G. Tucci, A. Meucci, B. Pagnini, Photogrammetry and 3D printing for marble statues replicas: critical issues and assessment, Sustainability, vol. 13, no. 2, Jan. 2021, p. 680. DOI: 10.3390/su13020680
[20] N. Gaburro, G. Marchioro, C. Daffara, A versatile optical profilometer based on conoscopic holography sensors for acquisition of specular and diffusive surfaces in artworks, Jul. 2017, p. 103310A. DOI: 10.1117/12.2270307
[21] N. Gaburro, G. Marchioro, C.
Daffara, Conoscopic laser microprofilometry for 3D digital reconstruction of surfaces with sub-millimeter resolution, in 2017 IEEE International Conference on Environment and Electrical Engineering and 2017 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), Jun. 2017, pp. 1-4. DOI: 10.1109/eeeic.2017.7977779
[22] G. Y. Sirat, Conoscopic holography. I. Basic principles and physical basis, J. Opt. Soc. Am. A, vol. 9, no. 1, Jan. 1992, p. 70. DOI: 10.1364/josaa.9.000070
[23] I. Álvarez, J. Enguita, M. Frade, J. Marina, G. Ojea, On-line metrology with conoscopic holography: beyond triangulation, Sensors, vol. 9, no. 9, Sep. 2009, pp. 7021-7037. DOI: 10.3390/s90907021
[24] Trimesh [computer software]. Online [accessed 23 March 2022] https://trimsh.org/
[25] Formlabs, Form 2 tech specs. Online [accessed 23 March 2022] https://formlabs.com/3d-printers/form-2/tech-specs/
[26] T. Hada et al., Effect of printing direction on the accuracy of 3D-printed dentures using stereolithography technology, Materials (Basel), vol. 13, no. 15, Aug. 2020, p. 3405. DOI: 10.3390/ma13153405
[27] International Organization for Standardization, ISO 25178-1:2016, Geometrical product specifications (GPS) — Surface texture: Areal — Part 1: Indication of surface texture, Geneva, 2016.
[28] International Organization for Standardization, ISO 25178-2:2012, Geometrical product specifications (GPS) — Surface texture: Areal — Part 2: Terms, definitions and surface texture parameters, Geneva, 2012.
[29] International Organization for Standardization, ISO 25178-3:2012, Geometrical product specifications (GPS) — Surface texture: Areal — Part 3: Specification operators, Geneva, 2012.
[30] F. Blateyron, The areal field parameters, in Characterisation of Areal Surface Texture, Berlin, Heidelberg: Springer, 2013, pp. 15-43. DOI: 10.1007/978-3-642-36458-7_2
[31] M. Bigerelle, T. Mathia, S.
Bouvier, The multi-scale roughness analyses and modeling of abrasion with the grit size effect on ground surfaces, Wear, vol. 286-287, May 2012, pp. 124-135. DOI: 10.1016/j.wear.2011.08.006
A Training Centre for Intraocular Pressure Metrology
Acta IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4, pp. 1-6
Dominik Pražák1, Vítězslav Suchý1, Markéta Šafaříková-Pštroszová1,2, Kateřina Drbálková1,2, Václav Sedlák1,2, Šejla Ališić3, Anatolii Bescupscii4, Vanco Kacarski5
1 Czech Metrology Institute, Okružní 31, 63800 Brno, Czechia
2 Slovak University of Technology, Faculty of Mechanical Engineering, Nám. slobody 2910/17, 81231 Bratislava, Slovakia
3 Institut za mjeriteljstvo Bosne i Hercegovine, Branilaca Sarajeva 25, 71000 Sarajevo, Bosnia and Herzegovina
4 I. P. Institutul Naţional de Metrologie, Eugen Coca 28, 2064 Chişinău, Moldova
5 Bureau of Metrology, Bull.
Jane Sandanski 109A, 1000 Skopje, North Macedonia
Section: Research paper
Keywords: training centre; metrological traceability; eye-tonometer; intraocular pressure
Citation: Dominik Pražák, Vítězslav Suchý, Markéta Šafaříková-Pštroszová, Kateřina Drbálková, Václav Sedlák, Šejla Ališić, Anatolii Bescupscii, Vanco Kacarski, A training centre for intraocular pressure metrology, Acta IMEKO, vol. 11, no. 4, article 4, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-04
Section Editor: Eric Benoit, Université Savoie Mont Blanc, France
Received June 27, 2022; in final form August 17, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: EMPIR project 20SCP02 CEFTON
Corresponding author: Dominik Pražák, e-mail: dprazak@cmi.cz
1. Introduction
The IOP belongs to the basic diagnostic indicators in ophthalmology and optometry. Although this quantity is monitored also in veterinary medicine, this paper deals exclusively with human medicine. Screenings for intraocular hypertension serve, first of all, the prevention and early diagnosis of glaucoma. Hence, there is a high societal interest in the correctness of these measurements, which are performed with eye-tonometers [1]-[4]. Some European countries (Czechia, Germany and Lithuania) consider these instruments so crucial that they are a subject of legal metrology. However, harmonization in this area is relatively low across Europe [5]. The IOP is still measured and reported in millimetres of mercury (1 mmHg ≈ 133.3 Pa) for historical and practical reasons [6]. There is a consensus that normal values should lie within the range from 10 mmHg to 20 mmHg.
However, the task of metrology is to ensure traceability over the complete physiological and pathophysiological range, up to 80 mmHg.
Abstract: Eye-tonometers are important medical devices with a measuring function, necessary for the screening of intraocular hypertension (a serious risk factor for glaucoma). However, it is not an easy task to ensure their correct metrological traceability: not only a wide range of equipment is needed, but also the relevant know-how. Hence, a training centre for this quantity was established at the Czech Metrology Institute (CMI) within the framework of a smart specialisation concept for intraocular pressure (IOP) metrology. The paper briefly outlines its history, scope, methodologies and future development plans.
2. Project history
To overcome the obstacles in building up IOP metrology, the national metrology institutes (NMIs) of Austria, Czechia, Germany, Poland, Slovakia and Turkey, together with the Slovak University of Technology in Bratislava and Palacký University in Olomouc, formed a consortium within the EMPIR programme to carry out the project INTENSE, which ran from June 2017 to May 2020 [5]-[9]. The scope was much broader, of course, but the main results from the point of view of the cooperation of the European NMIs in this field are the foundation of a smart specialisation concept (SSC) for IOP metrology [10] and a training centre for IOP metrology on the premises of the CMI in the city of Most, which was built with the essential help of the German colleagues. The advanced trainings of the CMI personnel were also accomplished, and the needed technical expertise was successfully audited. An important part was a satisfactory bilateral comparison in the IOP between the CMI and the Slovak University of Technology (STU) at the beginning of 2020, during which two clinically tested non-contact tonometers Nidek NT-2000 served as the laboratory standards, and a set of silicone eyes and an artificial eye served as the transfer standards. The centre is now able to provide metrological traceability and the relevant trainings to the other European NMIs. From the beginning it was envisaged that the SSC would be extended in the future, geographically beyond Central Europe and thematically beyond IOP metrology. Hence, the training centre also plays a crucial role in the follow-up EMPIR project CEFTON, which runs from September 2021 to February 2023 [11]-[12] and focuses on the transfer of IOP metrology know-how to the NMIs of selected Central European Free Trade Agreement (CEFTA) countries. The CMI joined forces with the NMIs of Bosnia and Herzegovina, Moldova, and North Macedonia, which will serve as the pathfinders for the remaining CEFTA countries. The project has no research ambitions, being entirely focused on capacity building and engagement in the SSC. The scope and content of the offered trainings, as well as the plans for the future, are presented.
3. Outline of the training centre
3.1. Scope of the trainings
First, it must be highlighted that the centre does not provide training in the use of eye-tonometers on patients; this is the task of medical doctors and nurses. The centre aims to provide training for metrologists, i.e. in the ways to establish correct traceability (calibrations and verifications) of the eye-tonometers and of the respective instrumental standards. The training is based on the good practice guidelines developed during the project INTENSE [7], which in turn reflect the relevant international standards and recommendations as well as the German and Czech regulations [13]-[17]. At present, the scope of the training centre covers the contact (impression and applanation) tonometers [18]-[22], the non-contact tonometers [23]-[25], the rebound tonometers [25], [26] and the contour tonometers [21], [27].
3.2.
Impression tonometers
The impression (or indentation, or Schiøtz) tonometer is the oldest eye-tonometry principle (more than 120 years) still in practical use, see Figure 1. It determines the IOP from the depth of the corneal indentation caused by a plunger with exactly defined weight and dimensions. In order to measure very high IOPs, extra weights can be loaded. All these instruments are manufactured following common standardized specifications. Hence, their traceability consists of checks of all the prescribed geometrical (e.g. the curvatures of the contact areas) and mechanical (e.g. weights and friction) requirements and tolerance limits, see Figure 1. The weights can be checked by a mechanical or by an electronic balance in a special set-up; the laboratory is equipped with both.
3.3. Applanation tonometers
The applanation (or Goldmann) tonometer is also a long-established principle, but it is still considered the "gold standard". It determines the IOP by measuring the force needed to reach applanation (i.e. flattening) of the cornea, caused by a transparent probe with a known contact area (a circle of 3.06 mm diameter). The traceability of these instruments is again ensured in a classical way, by checking their geometrical specifications and optical quality and by calibrating their force sensor. Also in this case, the force can be defined by a mechanical balance or by an electronic sensor, see Figure 2. Again, the laboratory is equipped with both. The local acceleration due to gravity in the laboratory must be known with sufficient precision.
3.4. Non-contact tonometers
The non-contact (or air-puff) tonometers are at present the most widely utilized tonometers, because there is no mechanical contact with the eye during the measurement, and hence no need for topical eye anaesthesia.
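For the Goldmann geometry of Section 3.3, the pressure implied by a given applanating force follows directly from the 3.06 mm contact circle; the diameter was in fact chosen so that 1 gram-force corresponds to roughly 10 mmHg. A small illustrative conversion (our own sketch, not part of the calibration procedure described in the paper):

```python
import math

MMHG_PA = 133.322      # 1 mmHg expressed in pascal
GF_N = 9.80665e-3      # 1 gram-force expressed in newton
DIAMETER_M = 3.06e-3   # Goldmann applanation contact circle diameter

def iop_mmhg(force_gf):
    """IOP implied by the applanating force (in gram-force) over the Goldmann circle."""
    area = math.pi * (DIAMETER_M / 2.0) ** 2   # contact area in m^2
    pressure_pa = force_gf * GF_N / area       # pressure = force / area
    return pressure_pa / MMHG_PA
```

The dependence on the local gram-force is why the text stresses that the local acceleration due to gravity in the laboratory must be known with sufficient precision.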
These instruments also aim at an applanation of the cornea, but they do not reach it by a direct mechanical contact (as Goldmann tonometers do), using instead a short and rapid pulse of air directed from a nozzle at the middle of the cornea. The moment of reaching applanation is detected by the reflection of an infrared beam from the cornea. (In fact, we should speak about reaching a slightly concave shape instead of a real applanation.) The state-of-the-art devices are able to determine also other important ophthalmological measurands (e.g. central corneal thickness). In contrast to the contact tonometers, there is no possibility of a direct classical traceability in this case. Their traceability must be ensured against another, clinically tested non-contact tonometer (the training laboratory is equipped with one) via a suitable transfer standard. There are three possible types of transfer standards available: a set of rubber (silicone) eyes, see Figure 3, an electronic eye, and a flapper (PTB jig), see Figure 4. The laboratory is equipped with all these devices, because none of them can be utilized universally with all the types of non-contact tonometers produced by the various manufacturers. The laboratory also took part in the above-mentioned interlaboratory comparison in this quantity with the STU [9]. A virtual digital model of the eye cornea was created at the STU during the INTENSE project. A real mechanical model (artificial eye) corresponding to the virtual model was then constructed for the experimental verifications. The STU used this artificial eye as one of the transfer standards in the above-mentioned comparison.
Figure 1. An impression tonometer placed on a precise calibration sphere.
Figure 2. Calibration of an applanation tonometer (detail).
The target is to develop a "universal IOP transfer standard" with exchangeable artificial corneas of various thicknesses and with a hydraulically or pneumatically regulated inner pressure [8], [9], [24], [28]-[33], see Figure 5 and Figure 6.
3.5. Rebound tonometers
The rebound tonometers emerged at the beginning of the 21st century and are becoming popular due to their ease of use (home diagnostics is also possible). In this case, a very light and non-harming probe (a plastic-coated metal core with a spherical plastic tip) is ejected from the instrument against the cornea and is then reflected back into it. The probe movement can be monitored inductively, and the time response is used to calculate the value of the measured IOP. The traceability of these instruments must again be ensured against a clinically tested rebound tonometer via a test bench consisting of a silicone membrane acting as a surrogate cornea, with an inner pressure regulated by a water column, which enables comparison of the readings of the clinically tested device and the calibrated device, see Figure 7 and Figure 8.
Figure 3. A detail of a non-contact tonometer with a set of the rubber eyes.
Figure 4. A detail of a non-contact tonometer with a flapper.
Figure 5. The new transfer standard during the comparison.
Figure 6. The new transfer standard during the comparison.
Figure 7. A rebound tonometer on its test bench.
3.6. Contour tonometers
The contour tonometer is another modern device. The head of this instrument has a concave shape corresponding to the typical shape and size of the human cornea. The head is pressed against the cornea with a constant force (i.e. it is in contact with the cornea but does not applanate it). A miniature piezoresistive pressure sensor mounted in the head is then able to detect the IOP with such sensitivity that it can even detect the minor IOP fluctuations caused by the cardiac cycle.
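On the rebound test bench of Section 3.5, the membrane's inner pressure is set by a water column, so the reference pressure in mmHg follows from the hydrostatic relation p = ρgh. A small illustrative conversion under assumed nominal values for water density and gravity (our own sketch, not the bench's documented procedure):

```python
WATER_RHO = 998.2   # assumed water density at ~20 degC, kg/m^3
G = 9.80665         # standard acceleration due to gravity, m/s^2
MMHG_PA = 133.322   # 1 mmHg expressed in pascal

def water_column_mmhg(height_mm):
    """Hydrostatic pressure of a water column of given height (mm), in mmHg."""
    pressure_pa = WATER_RHO * G * height_mm * 1e-3   # height converted to metres
    return pressure_pa / MMHG_PA
```

A typical upper-physiological setting of 20 mmHg thus requires a column of roughly 270 mm of water, which is why a water column is a practical regulator at these pressures.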
The principle is less influenced by the corneal thickness or rigidity, but it is rather sensitive to the corneal curvature. The traceability of this device can be relatively easily and straightforwardly ensured by a direct calibration of its internal pressure sensor, see Figure 9.
4. History of the trainings
The first training, for two experts of a German stakeholder, took place during the project INTENSE in March 2020. These activities were then interrupted due to the COVID-19 pandemic. However, the trainings were resumed in January 2022 within the project CEFTON, when two colleagues from Bosnia and Herzegovina took part. It was followed by a training of six people from the NMIs of Bosnia and Herzegovina, Moldova and North Macedonia in June 2022. All the trainings took place at the training centre in Most and covered all the principles described above. It was found useful that both the instruments and the instrumental standards to which they are traceable are concentrated in one place. Hence, the attendees could more easily distinguish between the "construction principles of the tonometers" and the "traceability principles of the tonometers", which used to be a stumbling block during theoretical lectures. The only shortcoming found was the fact that the training does not cover the Maklakoff tonometer. This predecessor and very simplified variant of the Goldmann tonometer has not been used in practice in Central Europe for years but is still widely utilized in the area of the former Soviet Union.
5. Tasks for the future
As mentioned in Section 3.4, some modern non-contact tonometers are able to determine more eye characteristics than the sole IOP value (e.g. corneal thickness, rigidity or hysteresis). However, how to ensure traceability for these extra measurands is still an open problem. We remain in contact with the academic partners to solve these problems too.
The artificial eye of the STU seems to be a good starting device for studies of the corneal thickness influence, because it allows the use of artificial corneas of various thicknesses. Initial research in this direction has already started, see [8]-[10] and Figure 10. We are also searching for a possibility to obtain a sample of a Maklakoff tonometer and to establish a procedure for its traceability. Moreover, as greater emphasis is being given to the accuracy and reliability of medical devices with a measuring function [34]-[41], we consider the training centre a starting nucleus of further cooperation activities in the sector of medical metrology.
6. Conclusions
As a result of the fruitful cooperation of the European NMIs, the training centre for IOP metrology at the CMI covers the most common eye-tonometry principles, has state-of-the-art equipment, and remains in intensive contact with the NMI and academic partners to broaden its scope in the future.
Figure 8. A rebound tonometer on its test bench (detail).
Figure 9. A contour tonometer connected to a pressure standard.
Figure 10. Experimentally found influence of the thickness of the artificial cornea on the non-contact tonometer response, as found by the STU.
Acknowledgement
This work was funded by the European Metrology Programme for Innovation and Research (EMPIR) project 20SCP02 CEFTON. The EMPIR initiative is co-funded by the European Union Horizon 2020 research and innovation programme and the EMPIR participating states.
References
[1] B. Thylefors, A.-D. Négrel, The global impact of glaucoma, Bulletin of the World Health Organization, 72 (1994), pp. 323-329.
[2] H. A. Quigley, A. T. Broman, The number of people with glaucoma worldwide in 2010 and 2020, British Journal of Ophthalmology, 90 (2006), pp. 262-267. DOI: 10.1136/bjo.2005.081224
[3] Y.-C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, C.-Y.
Cheng, Global prevalence of glaucoma and projections of glaucoma burden through 2040, Ophthalmology, 121 (2014), pp. 2081-2090. DOI: 10.1016/j.ophtha.2014.05.013
[4] S. Tanabe, K. Yuki, N. Ozeki, D. Shiba, T. Abe, K. Kouyama, K. Tsubota, The association between primary open-angle glaucoma and motor vehicle collisions, Investigative Ophthalmology and Visual Science, 52 (2011), pp. 4177-4181. DOI: 10.1167/iovs.10-6264
[5] E. Sınır, Y. Durgut, D. M. Rosu, D. Pražák, Towards the harmonization of medical metrology traceability in Europe: an impact case study through activities in Turkey & EMPIR project INTENSE, IEEE International Symposium on Medical Measurements and Applications Proc., Istanbul, Turkey, 26-28 June 2019, pp. 1-6. DOI: 10.1109/memea.2019.8802141
[6] D. Pražák, V. Sedlák, E. Sınır, F. Pluháček, Changing the status of mmHg, Accred. and Qual. Assur., 25 (2020), pp. 81-82. DOI: 10.1007/s00769-019-01414-7
[7] INTENSE consortium, Developing research capabilities for traceable intraocular pressure measurements. Online [accessed 16 August 2022] http://intense.cmi.cz
[8] D. Pražák, R. Ziółkowski, D. Rosu, M. Schiebl, J. Rybář, P. Pavlásek, E. Sınır, F. Pluháček, Metrology for intraocular pressure measurements, Acta IMEKO, 9 (2020) 5, pp. 353-356. DOI: 10.21014/acta_imeko.v9i5.999
[9] D. Pražák, J. Rybář, P. Pavlásek, V. Sedlák, S. Ďuriš, M. Chytil, F. Pluháček, Rozvojové výskumné kapacity pre nadväznosť merania vnútroočného tlaku – výsledky európského projektu, Metrológia a skúšobníctvo, 26 (2021), pp. 10-14. [In Slovak]
[10] V. Sedlák, D. Pražák, M. Schiebl, M. Nawotka, E. Jugo, M. do Céu Ferreira, A. Duffy, D. M. Rosu, P. Pavlásek, G. Geršak, Smart specialisation concept in metrology for blood and intraocular pressure measurements, Measurement: Sensors, 18 (2021), 100283. DOI: 10.1016/j.measen.2021.100283
[11] CEFTON consortium, Development of eye-tonometry in CEFTA countries. Online [accessed 16 August 2022] http://projectcefton.com
[12] D.
pražák, metrological traceability of the eye-tonometers, the 3rd international symposium on visual physiology, environment and perception abstr. proc., tallinn, estonia, 12 – 13 november 2021, p. 17. online [accessed 16 august 2022] https://konverentsikeskus.tlu.ee/sites/konverentsikeskus/files/vispep2020/vispep2021%20abstract%20book%20.pdf [13] iso 8612: ophthalmic instruments – tonometers, 2009. [14] oiml r145-1: ophthalmic instruments – impression and applanation tonometers – part 1 – metrological and technical requirements, 2015. [15] s. mieke, t. schade, guidelines for metrological verifications of medical devices with a measuring function – part 1, ptb, berlin, ed. 3, 2016. online [accessed 16 august 2022] https://www.ptb.de/cms/fileadmin/internet/publikationen/wissensch_tech_publikationen/lmkm-v3-part1-englisch_2020.pdf [16] czech metrology institute, opatření obecné povahy sp. zn.: 0111-oop-c038-16. [in czech] online [accessed 16 august 2022] https://www.cmi.cz/sites/all/files/public/download/uredni_deska/oop/0111-oop-c038-16.pdf [17] czech metrology institute, opatření obecné povahy sp. zn.: 0111-oop-c039-13. [in czech] online [accessed 16 august 2022] https://www.cmi.cz/sites/all/files/public/download/uredni_deska/3439-id-c_3439-id-c.pdf [18] h. dudek, t. schwenteck, h.-j. thiemich, normale für die messtechnischen kontrollen von augentonometern – vergleichsmessungen an 50 prüfeinrichtungen, klinische monatsblätter für augenheilkunde, 219 (2002), pp. 703-709. [in german] [19] t. schwenteck, h.-j. thiemich, die sicherung der messtechnischen kontrolle von applanationstonometern durch technische untersuchungen an transfernormalen, klinische monatsblätter für augenheilkunde, 227 (2010), pp. 489-495. [in german] doi: 10.1055/s-0028-1110014 [20] t. schwenteck, h.-j. thiemich, messtechnische kontrollen für impressionstonometer – eine qualitätsgarantie in der augenheilkunde, klinische monatsblätter für augenheilkunde, 228 (2011), pp. 130-137.
[in german] doi: 10.1055/s-0029-1245762 [21] t. schwenteck, m. knappe, i. moros, wie beeinflusst die zentrale hornhautdicke den intraokularen druck bei der applanations- und konturtonometrie?, klinische monatsblätter für augenheilkunde, 229 (2012), pp. 917-927. [in german] doi: 10.1055/s-0031-1299536 [22] k. drbálková, v. suchý, tonometrie část 1 – mechanické a elektronické kontaktní tonometry, metrologie 28 (2019), pp. 28-32. [in czech] [23] t. schwenteck, h.-j. thiemich, wahrung der messgüte von transfernormalen für die messtechnische kontrolle von luftimpulstonometern, klinische monatsblätter für augenheilkunde, 224 (2007), pp. 167-172. [in german] doi: 10.1055/s-2007-962953 [24] p. pavlásek, j. rybář, s. ďuriš, b. hučko, m. chytil, a. furdová, s. l. ferková, j. sekáč, v. suchý, p. grosinger, developments and progress in non-contact eye tonometer calibration, measurement science review, 20 (2020), pp. 171-177. doi: 10.2478/msr-2020-0021 [25] k. drbálková, v. suchý, tonometrie část 2 – elektronické bezkontaktní tonometry a rebound tonometrie, metrologie 29 (2020), pp. 31-35. [in czech] [26] p. c. ruokonen, t. schwenteck, j. draeger, evaluation of the impedance tonometers tgdc-01 and icare according to the international ocular tonometer standards iso 8612, graefe's archive for clinical and experimental ophthalmology, 245 (2007), pp. 1259-1265. doi: 10.1007/s00417-006-0483-3 [27] t. schwenteck, et al., klinische evaluierung eines neuen tonometers auf der basis des internationalen standards für augentonometer iso 8612, klinische monatsblätter für augenheilkunde, 223 (2006), pp. 808-812. [in german] doi: 10.1055/s-2006-926861 [28] p. pavlásek, m. chytil, j. rybář, j. palenčář, s. ďuriš, development of new calibration standard for noncontact tonometers, 2018 international congress on image and signal processing biomedical engineering and informatics proc., beijing, china, 13 – 15 october 2018, 8633148. doi: 10.1109/cisp-bmei.2018.8633148 [29] j. rybář, m. chytil, s.
ďuriš, b. hučko, f. maukš, p. pavlásek, use of suitable materials such as artificial cornea on eye model for calibration of optical tonometers, aip conference proceedings vol. 2029, zakopane, poland, 4 – 6 june 2018, 020067. doi: 10.1063/1.5066529 [30] p. pavlásek, j. rybář, s. ďuriš, b. hučko, j. palenčář, m.
chytil, developments in non-contact eye tonometer calibration, 2019 ieee international instrumentation and measurement technology conference proc., auckland, new zealand, 20 – 23 may 2019, 8827028. doi: 10.1109/i2mtc.2019.8827028 [31] b. hučko, s. l. ferková, s. ďuriš, j. rybář, p. pavlásek, glaucoma vs. biomechanical properties of cornea, strojnícky časopis – journal of mechanical engineering, 69 (2019), pp. 111-116. doi: 10.2478/scjme-2019-0021 [32] p. pavlásek, j. rybář, s. ďuriš, b. hučko, j. palenčář, m. chytil, metrology in eye pressure measurements, 2020 ieee sensors applications symposium proc., kuala lumpur, malaysia, 9 – 11 march 2020, 9220014. doi: 10.1109/sas48726.2020.9220014 [33] b. hučko, ĺ. kučera, s. ďuriš, p. pavlásek, j. rybář, j. hodál, modelling of cornea applanation when measuring eye pressure, lecture notes in mechanical engineering, icmd-2018 (2020), pp. 287-294. doi: 10.1007/978-3-030-33146-7_33 [34] j. schreyögg, m. bäumler, r. busse, balancing adoption and affordability of medical devices in europe, health policy, 92 (2009), pp. 218-224. doi: 10.1016/j.healthpol.2009.03.016 [35] m. do céu ferreira, the role of metrology in the field of medical devices, international journal of metrology and quality engineering, 2 (2011), pp. 135-140. doi: 10.1051/ijmqe/2011101 [36] m. do céu ferreira, a. matos, r. p. leal, evaluation of the role of metrological traceability in health care: a comparison study by statistical approach, accreditation and quality assurance, 20 (2015), pp. 457-464. doi: 10.1007/s00769-015-1149-9 [37] s. terzic, v. jusufovic, a. nadarevic-vodencarcevic, m. asceric, a. pilavdzic, m. halilbasic, a. terzic, is prevention of glaucoma possible in bosnia and herzegovina?, medical archive, 70 (2016), pp. 140-141. doi: 10.5455/medarh.2016.70.140-141 [38] a. bošnjaković, z. džemić, legal metrology: medical devices, ifmbe proceedings 62, cmbebih-2017 (2017), pp. 583-288. doi: 10.1007/978-981-10-4166-2_88 [39] b. karaböce, h. o. durmuş, e.
cetin, the importance of metrology in medicine, ifmbe proceedings 73, cmbebih-2019 (2019), pp. 443-450. doi: 10.1007/978-3-030-17971-7_67 [40] b. karaböce, challenges for medical metrology, ieee instrumentation and measurement magazine, 23 (2020), pp. 48-55. doi: 10.1109/mim.2020.9126071 [41] a. bandjević, l. g. pokvić, z. džemić, f. bečić, risks of emergency use authorizations for medical products during outbreak situations: a covid-19 case study, biomedical engineering online, 19 (2020), 75. doi: 10.1186/s12938-020-00820-0

an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 67 - 72

acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 67

an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems

sala surekha1, md. zia ur rahman1, aimé lay-ekuakille2

1 department of electronics and communication engineering, k l university, koneru lakshmaiah education foundation, green fields, vaddeswaram, guntur-522502, a.p., india
2 department of innovation engineering, university of salento, lecce, italy

section: research paper
keywords: adaptive learning; beam forming; cognitive radio; direction of arrival; spectrum sensing
citation: sala surekha, md.
zia ur rahman, aimé lay-ekuakille, an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems, acta imeko, vol. 10, no. 4, article 13, december 2021, identifier: imeko-acta-10 (2021)-04-13
section editor: francesco lamonaca, university of calabria, italy
received may 28, 2021; in final form december 1, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: md. zia ur rahman, e-mail: mdzr55@gmail.com

1. introduction

the use of telecommunication systems has grown rapidly over the past few decades, which has led to an increase in frequency-spectrum usage. because the frequency spectrum is scarce, licensed and unlicensed bands are utilized inefficiently. to avoid interference, secondary users must always be aware of whether a primary user is absent or present in a particular frequency band. the secondary-user direction of arrival (doa) is further considered in a setting with multiple-input single-output (miso) transmission, one receiving antenna for the primary user, and a feedback-based adaptive frequency algorithm. a cognitive radio network based on non-orthogonal multiple access [1] is used for secure beamforming to avoid interference in multiple-input multiple-output networks. in sensor arrays, estimating the number of channels is a problem that is solved by considering the direction of arrival in the spectrum sensing method. the wideband spectrum sensing channel [2] is divided into two sub-channels; each sub-channel is connected to a sensor for processing, and the doa is then estimated.
for multi-band signals, two spectrum scenarios are considered from sub-nyquist samples. in the first scenario, a spectrum sensing method is examined and doa estimation is used to address frequency-spectrum recovery problems by proposing a uniform linear array (ula) [3] with a sensor at the receiver, which is connected to the equivalent circuit of the analog front-end channel of a modulated wideband converter (mwc). in the second case, generalized likelihood ratio antenna beamforming [4] is used for efficient, low-complexity spectrum sensing. a localization technique depending on direction-of-arrival measurements for estimating primary users in cognitive radio networks is investigated in [5]. in [6], [7] new spectrum sensing techniques based on beamforming are proposed. null steering and joint beam-based resource allocation [8] are used for spectrum sensing in femtocell networks. transmitter localization is performed using sector power measurements for every sensor; the cramer-rao bound (crb) [9], [10] for sector power estimation using doa is derived, and an analytical expression for the mean square error is obtained. the main objective of this paper is to estimate the doa at various sensors for sensing [11], [12] the vacant spectrum and thereby facilitate channel allocation to secondary users. here, we make use of an

abstract: in cognitive radio systems, estimating the primary-user direction of arrival (doa) is one of the key issues. in order to increase the probability of detection, multiple sensor antennas are used and analysed by a subspace-based technique. in this work, we consider a wideband spectrum with sub-channels, each sub-channel facilitated with a sensor for the estimation of the doa. in a practical spectrum sensing process, an interference component is also encountered. to suppress this interference level at the output of the receiver, we use an adaptive learning algorithm known as the normalised least absolute mean deviation (nlamd) algorithm.
further, to achieve better performance, a bias compensator function is applied in the weight coefficient updating process. using this hybrid realization, the vacant spectrum can be sensed based on doa estimation, and the number of vacant locations in each channel can be identified using a maximum likelihood approach. in order to test under diversified conditions, different threshold parameters 0.1, 0.5 and 1 are analysed. adaptive learning algorithm based on a normalized least absolute mean deviation (nlamd) strategy. in section 2, doa estimation with an adaptive learning algorithm is discussed; in section 3 simulation results are discussed. the proposed realizations are also suitable for the development of medical telemetry networks as well as of smart cities and smart hospitals.

2. doa estimation based spectrum sensing

in cognitive radios, spectrum sensing is the most used method because it overcomes the low spectrum utilization problems of primary users. various spectrum sensing algorithms based on narrowband methods are used to solve a binary hypothesis test, which is applied to every sub-channel. to assess the primary user's existence, each sub-channel is assessed by the spectrum sensing algorithm. the received signal in each sub-channel is expressed [13] as

$$y_k = \begin{cases} w_k, & h_0 \\ a r_k + w_k, & h_1 \end{cases} \qquad (1)$$

where $a r_k$ is the received primary-user signal, $w_k$ is additive white gaussian noise, and $h_0$ and $h_1$ are the hypotheses on the primary user's existence in each sub-channel. multiple hypothesis tests are employed in the spectrum sensing algorithm, and the primary-user signal is detected by taking all sub-channels into consideration. by analysing the received signals of all sub-channels, it is observed that noise is present in the output signal. to discriminate this noise from primary-user signals, the doa is estimated.
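the two-hypothesis model in equation (1) can be illustrated with a toy monte-carlo sketch. a simple energy statistic is used here purely for illustration; the constant-amplitude signal model, the sample counts and the use of an energy detector (rather than the paper's doa-based scheme) are assumptions of this sketch:

```python
import random

def energy_statistic(samples):
    """average energy (1/n) * sum(y_k^2) of the received samples"""
    return sum(y * y for y in samples) / len(samples)

def received(a, noise_std, n, rng):
    """n samples of y_k = a*r_k + w_k with r_k = 1; a = 0 corresponds to h0"""
    return [a * 1.0 + rng.gauss(0.0, noise_std) for _ in range(n)]

rng = random.Random(0)
t_h0 = energy_statistic(received(0.0, 1.0, 2000, rng))  # primary user absent
t_h1 = energy_statistic(received(1.0, 1.0, 2000, rng))  # primary user present
```

under $h_1$ the statistic concentrates near $a^2 + \sigma_w^2$ and under $h_0$ near $\sigma_w^2$, so a threshold between the two values separates the hypotheses.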
because the received signal at the sensor nodes in array processing resembles a cognitive radio sub-channel, doa estimation is incorporated into spectrum sensing to avoid these problems; its block diagram is shown in figure 1. doa estimation is used to obtain exact antenna information and to avoid interference between primary and secondary users; further, an adaptive learning process called the normalised least absolute mean deviation (nlamd) algorithm is applied to reduce the noise level at the output. using the adaptive filter, the desired response of the input signal is calculated as

$$d_k = s_k^{T} u_0 + o_k \qquad (2)$$

where $s_k$ is the input signal, $u_0$ is the unknown weight vector with $T$ taps and $o_k$ is the output noise at time index $k$. the error with respect to the desired signal is

$$e_k = d_k - s_k^{T} u_k \qquad (3)$$

here, $u_k = [u_{k1}, u_{k2}, \ldots, u_{kT}]^{T}$ is the weight vector of the adaptive filter. the system identification problem is solved by minimising the p-norm cost function

$$J(e_k) = \frac{1}{p}\,\mathrm{E}\big[|e_k|^p\big] = \frac{1}{p}\,\mathrm{E}\big[|d_k - s_k^{T} u_k|^p\big] \qquad (4)$$

where $\mathrm{E}[\cdot]$ is the statistical expectation operator and $p>0$. replacing $\mathrm{E}[|e_k|^p]$ with the instantaneous value $|e_k|^p$, the gradient with respect to $u_k$ is

$$\frac{\partial J(e_k)}{\partial u_k} = -|e_k|^{p-1}\operatorname{sign}(e_k)\, s_k \qquad (5)$$

using the gradient descent algorithm, the weight update is

$$u_{k+1} = u_k + \omega\,|e_k|^{p-1}\operatorname{sign}(e_k)\, s_k \qquad (6)$$

where $\omega$ is a step size selected to balance the convergence rate against the mean square error, and sign denotes the sign function. to improve the steady-state and convergence rates, the update is normalised as in the least mean p-power algorithm:

$$u_{k+1} = u_k + \omega\,\frac{|e_k|^{p-1}\operatorname{sign}(e_k)\, s_k}{\|s_k\|_p^{p} + \vartheta} \qquad (7)$$

here $\|\cdot\|_p$ denotes the p-norm, and the small positive value $\vartheta$ keeps the denominator from vanishing. by selecting $p = 1$, we obtain the nlamd algorithm:

$$u_{k+1} = u_k + \omega\,\frac{\operatorname{sign}(e_k)\, s_k}{\|s_k\|_1 + \vartheta}\,.$$
(8)

in sparse models, the l1 norm is used for relaxation in least absolute shrinkage and selection operator algorithms, and it is employed in various adaptive filter algorithms. by using the l1 norm, the weight equation of the nlamd algorithm is updated to minimise the cost function

$$J_d(e_k) = \frac{|d_k - s_k^{T} u_k|}{\|s_k\|_1 + \vartheta} + \gamma\,\|u_k\|_1 \qquad (9)$$

where $\gamma$ is a parameter adopted to balance the estimation error against the sparsity. the updated equation for the sparse nlamd algorithm, obtained by applying the gradient descent method to the cost function (9), is

$$u_{k+1} = u_k + \omega\,\frac{\operatorname{sign}(e_k)\, s_k}{\|s_k\|_1 + \vartheta} - \sigma \operatorname{sign}(u_k) \qquad (10)$$

here, $\sigma = \omega\gamma$ is a regularisation parameter. the nlamd algorithm with a sparse system and l1 norm is denoted the za-nlamd algorithm. in bias-compensated systems [14], a noisy-input system is considered for the nlamd algorithm. the noisy input vector of the system is defined as

$$\bar{s}_k = s_k + o_{in,k} \qquad (11)$$

where the input noise vector is $o_{in,k} = [o_{in,1k}, o_{in,2k}, \ldots, o_{in,Mk}]^{T}$ with elements $o_{in,lk}$, $l \in [1, M]$; its variance is denoted $\sigma_{in}^2$ and is estimated from available prior information. to recover from the biased estimation problem of the nlamd update (10), a bias-compensation vector $b_k$ is introduced:

$$u_{k+1} = u_k + \omega\,\frac{\operatorname{sign}(\hat{e}_k)\, \hat{s}_k}{\|\hat{s}_k\|_1 + \vartheta} - \sigma \operatorname{sign}(u_k) + b_k \qquad (12)$$

from this equation, the bias-compensation vector is obtained as [15]

$$b_k = \omega\,\sigma_{o_{in}}^{2} \sqrt{\frac{2}{\pi\,\sigma_{\hat{e}|\hat{s},k}^{2}}}\;\frac{u_k}{\|\hat{s}_k\|_1 + \vartheta} \qquad (13)$$

the noisy-input variance $\sigma_{o_{in},k}^{2}$ and the quantities $\sigma_{\hat{e}|\hat{s},k}^{2}$ and $\sigma_{u_k}^{2}$ are estimated recursively as

$$\sigma_{o_{in},k}^{2} = \frac{M\,\sigma_{\hat{e}|\hat{s},k}^{2}}{M\,\sigma_{u_k}^{2} + \hat{s}_k^{T}\hat{s}_k} \qquad (14)$$

$$\sigma_{\hat{e}|\hat{s},k}^{2} = \aleph\,\sigma_{\hat{e}|\hat{s},k-1}^{2} + (1-\aleph)\,\hat{e}_k^{2} \qquad (15)$$

$$\sigma_{u_k}^{2} = \aleph\,\sigma_{u_{k-1}}^{2} + (1-\aleph)\,\frac{1}{M}\,u_k^{T} u_k$$
(16)

substituting equation (13) into (12), we obtain the final bias-compensated nlamd (bc-nlamd) adaptive learning update

$$u_{k+1} = \left(1 + \omega\,\sqrt{\frac{2}{\pi\,\sigma_{\hat{e}|\hat{s},k}^{2}}}\;\frac{\sigma_{o_{in}}^{2}}{\|\hat{s}_k\|_1 + \vartheta}\right) u_k + \omega\,\frac{\operatorname{sign}(\hat{e}_k)\,\hat{s}_k}{\|\hat{s}_k\|_1 + \vartheta} - \sigma \operatorname{sign}(u_k) \qquad (17)$$

using this weight recursion, the noise in the received signal is minimised and an accurate direction of arrival is estimated. the bc-nlamd algorithm accurately estimates the doa and helps in finding the vacant spectrum; the flowchart of the proposed adaptive learning algorithm is shown in figure 2.

3. results and discussion

this section presents the experimental results used to evaluate the performance of the proposed bias-compensated adaptive learning algorithm, compared with the absolute mean deviation (amd) and normalised absolute mean deviation (namd) methods under output gaussian noise. the input and output noises are generated using zero-mean white gaussian noise and a β-stable distribution, respectively; the characteristic function of the latter is

$$f_t = \exp\!\big\{\,\mathrm{j}\delta t - \gamma^{\beta}|t|^{\beta}\big[1 + \mathrm{j}\tau \operatorname{sign}(t)\, Q_{t,\beta}\big]\big\} \qquad (18)$$

where

$$Q_{t,\beta} = \begin{cases} \tan\dfrac{\beta \pi}{2}, & \beta \neq 1 \\[4pt] \dfrac{2}{\pi}\log|t|, & \beta = 1 \end{cases} \qquad (19)$$

with characteristic exponent $0 < \beta \le 2$, skewness $-1 \le \tau \le 1$, scale parameter $0 < \gamma < \infty$ and location parameter $-\infty < \delta < \infty$.

figure 1. doa estimation block diagram for spectrum sensing.
figure 2. flow chart of spectrum sensing doa estimation using the bc-nlamd algorithm.

occupied sub-channel locations are estimated using the spectrum sensing method after choosing q sub-channels. in cognitive radio applications, the correlation between channels is identified using spectrum sensing. in our framework, the direction of arrival is used to identify the spectrum location via the proposed bc-nlamd adaptive learning process.
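the recursion (17), together with the variance trackers (15)-(16), can be sketched in a few lines of python. this is a simplified illustration only: the input-noise variance is passed in as a known constant instead of being estimated through equation (14), and the step size, threshold and zero-attractor values are arbitrary demo choices:

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def bc_nlamd_step(u, s_hat, d, state, omega=0.05, vartheta=1e-6,
                  sigma=1e-4, aleph=0.95, sigma2_in=0.0):
    """one bias-compensated nlamd iteration in the spirit of eq. (17);
    `state` carries the ewma variance trackers of eqs. (15)-(16)."""
    m = len(u)
    e = d - sum(ui * si for ui, si in zip(u, s_hat))      # a-priori error
    state['e2'] = aleph * state['e2'] + (1 - aleph) * e * e
    state['u2'] = aleph * state['u2'] + (1 - aleph) * sum(x * x for x in u) / m
    norm1 = sum(abs(x) for x in s_hat) + vartheta         # ||s_hat||_1 + vartheta
    # the (1 + ...) bias-compensation factor of eq. (17); reduces to 1 when sigma2_in = 0
    gain = 1.0 + omega * sigma2_in * math.sqrt(2.0 / (math.pi * state['e2'] + 1e-12)) / norm1
    u_next = [gain * ui + omega * sgn(e) * si / norm1 - sigma * sgn(ui)
              for ui, si in zip(u, s_hat)]
    return u_next, e
```

with sigma2_in = 0 the recursion reduces to the zero-attracting nlamd update of equation (10).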
the output signal is considered with various combinations: one white signal with one doa, one white signal with three doas, and two white signals with one doa and with three doas. the performance of the adaptive algorithm is studied in terms of convergence rate, beam pattern and number of active taps. in mobile environments, more than one multipath is considered; each multipath has a different gain, with amplitude and phase components.

case 1: one white signal with one doa. one signal with one path is considered; it arrives at the base station at a 60-degree angle with amplitude 0.5 and is propagated at the different threshold values 0.1, 0.5 and 1. for the different threshold control values, the delay and steady-state error are calculated as discussed in section 2. a threshold value of 0.1 improves the convergence rate of the proposed bc-nlamd algorithm compared to the lms [16] algorithm, and for threshold values of 0.5 and 1 the steady state is reached faster than with the basic lms algorithm. the improvement in convergence rate is identified from the taps of the adaptive filter: for narrowband theoretical values, the threshold requires only one tap, but at the cost of delay and convergence-rate problems. hence, we evaluated the beam pattern of the nlamd algorithm in matlab for doa estimation; it steers the main beam in the 60-degree direction with a beam strength of two, because the signal power is reduced by a factor of 0.5. for every simulation, the convergence rate is given as the number of samples required to reach steady state, as shown in table 1.

case 2: one white signal with 3 doas. these simulations study multipath effects in smart antenna systems. multipath arrivals from three different directions, -20, 30 and 60 degrees, are considered at the base station. at the antenna system, the multipaths are separated by one sampling period 1/fc, and their corresponding gains are also introduced, with amplitudes as shown in figure 3.
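the steered beam in case 1 follows from the array factor of a uniform linear array. a small sketch (assuming half-wavelength element spacing and an 8-element array, neither of which is specified in the text) evaluates the pattern of a weight vector matched to a 60-degree arrival:

```python
import cmath, math

def array_factor(w, theta_deg):
    """|sum_m w_m * exp(j*pi*m*sin(theta))| for a half-wavelength-spaced ula"""
    st = math.sin(math.radians(theta_deg))
    return abs(sum(wm * cmath.exp(1j * math.pi * m * st)
                   for m, wm in enumerate(w)))

m_elems = 8
# conjugate steering vector pointing the main lobe at 60 degrees
w60 = [cmath.exp(-1j * math.pi * m * math.sin(math.radians(60.0))) / m_elems
       for m in range(m_elems)]
```

at 60 degrees the terms add coherently (unit gain), while off-axis angles give smaller gains, reproducing a main lobe steered to the desired direction.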
for the proposed method, three different weight vectors, one per multipath, are used for spectrum sensing. the convergence for a white signal with three doas is better for the proposed bias-compensated nlamd algorithm than for the amd algorithm, as shown in figure 4. with the proposed algorithm, the antenna system shows a beam pattern similar to that of the basic amd algorithm and is able to steer beams in multiple directions with nulls in the interference directions; for each beam, the gain is inversely proportional to the corresponding multipath gain of the antenna.

case 3: two white signals with one doa. two different signals are transmitted with one doa each; the effect is the same as sending two multipaths of one signal separated by at least one sample period, because the two signals are uncorrelated, with amplitudes of 0.5 and 1 at threshold values of 0.1, 0.5 and 1. the second signal shows a better convergence rate than the first; for the smaller amplitude, the filter needs longer to adapt its taps before the signal is estimated. a narrow threshold makes it possible to adapt a smaller number of taps, improving the performance of the proposed nlamd algorithm; the corresponding error and beam patterns are shown in figure 5 and figure 6, respectively.

case 4: two white signals with three doas. two input sequences are considered, each simulated with three multipath components. at the base station, the second and third multipath signals arrive after the first multipath signals from various directions. for each signal, only two sets of multipaths are considered, and at the base station the second and third multipaths share the same weight vector. the convergence rate is compared with the basic lad algorithm. four beam patterns are considered following shannon theory, the third multipath pattern having two main lobes in the directions of the second and third multipaths; they are shown in figure 7 and figure 8, respectively.
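the steady-state and delay figures reported in tables 1-3 are sample counts; a small helper of the kind one might use to extract them from an error trace (the tolerance and hold length are arbitrary assumptions of this sketch, not values from the paper):

```python
def samples_to_steady_state(errors, tol=0.05, hold=20):
    """first index at which |error| has stayed below tol for `hold`
    consecutive samples; None if the trace never settles"""
    run = 0
    for k, e in enumerate(errors):
        run = run + 1 if abs(e) < tol else 0
        if run >= hold:
            return k - hold + 1
    return None
```

applied to a decaying error trace, it returns the index where the error first enters (and remains within) the tolerance band.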
the convergence rates of the first and second signal in the three-doa case are shown in table 2 and table 3, respectively, in terms of the number of samples needed to reach steady state. the tables show that, for three doas, the proposed algorithm converges faster for the second signal than for the first. in figure 7, only two multipath signals are visible because the 2nd and 3rd signals share the same weight vector. the main aim of the proposed bc-nlamd algorithm is to detect vacant frequency spectrum in cognitive radio antenna systems. the nlamd algorithm is used for its low computational complexity,

figure 3. bc-nlamd beam pattern for one white signal with doas.
figure 4. error for received signal using the bc-nlamd algorithm with a threshold value of 0.1.

table 1. convergence rate for one white signal with one doa (samples to steady state and delay).

algorithm | steady state | delay
lamd [17] | 65 | 0
bbnlms [18] | 50 | 0
ffa [19] | 35 | 0
pid [20] | 25 | 8
enlms [21] | 15 | 12
bcnlamd for 0.1 | 40 | 0
bcnlamd for 0.5 | 45 | 7
bcnlamd for 1.0 | 55 | 14
however, system performance is improved at a cost of increased computational complexity that will need increased number of taps adaptation for narrow threshold point in proposed algorithm and it reduces error at output signals. beam patterns are obtained with according to expectations of shannon theorem, and further proposed algorithm with active taps steers beams in desired signal direction. spatial filtering is particularly used in wireless communication systems. hence by proposed algorithm performance of system is increased with active taps weights in updated equation so that improves frequency utilization for cognitive radio systems. 4. conclusion in this paper, the vacant spectrum is sensed by using doa measurement in wireless communication systems. interferences occurred with cognitive radio systems are avoided by considering doa estimation for spectrum sensing. in wireless communications, antenna system are new developing technologies with new adaptive beam forming algorithms, it will provide high frequency spectrum then it improves quality of service of cognitive radio systems. further to reduce noise signals from received signal proposed a bias compensated nlamd algorithm. using this adaptive learning algorithm, performance of cognitive radio-based antenna beam streams and convergence rate is improved. by using sign regressor function in weight update equation of proposed algorithm computational complexity is reduced. performance of bcnlamd algorithm in presence of multipath users and multipath effects are analysed using matlab simulations and hence convergence rate is improved due to active taps used in adaptive learning algorithm leads to better spectrum efficiency in cognitive radios. figure 5. error signal for bcnlamd algorithm for 0.1 value. figure 6. bcnlamd beam pattern for two white signals with one doa. table 2. convergence rate of first signal, doa3. 
algorithm | steady state | delay
lamd [17] | 55 | 0
bbnlms [18] | 60 | 20
ffa [19] | 70 | 45
pid [20] | 80 | 65
enlms [21] | 90 | 70
bcnlamd for 0.1 | 35 | 60
bcnlamd for 0.5 | 55 | 40
bcnlamd for 1.0 | 125 | 95

table 3. convergence rate for the second signal, three doas (samples to steady state and delay).

algorithm | steady state | delay
lamd [17] | 45 | 0
bbnlms [18] | 55 | 0
ffa [19] | 62 | 20
pid [20] | 70 | 40
enlms [21] | 85 | 55
bcnlamd for 0.1 | 25 | 35
bcnlamd for 0.5 | 45 | 25
bcnlamd for 1.0 | 95 | 65

figure 7. received error signal for the bc-nlamd algorithm.
figure 8. bc-nlamd beam pattern for two white signals with 3 doas.

references
[1] h. s. m. antony, t. lakshmanan, secure beamforming in 5g-based cognitive radio network, symmetry, vol. 11, no. 10, october 2019, p. 1260. doi: 10.3390/sym11101260
[2] amir mahram, mahrokh g. shayesteh, blind wideband spectrum sensing in cognitive radio networks based on direction of arrival estimation model and generalised autoregressive conditional heteroscedasticity noise modelling, iet communications, vol. 8, no. 18, 2014, pp. 3271-3279. doi: 10.1049/iet-com.2014.0162
[3] s. stein ioushua, o. yair, d. cohen, y. c. eldar, cascade: compressed carrier and doa estimation, ieee transactions on signal processing, vol. 65, no. 10, may 2017, pp. 2645-2658. doi: 10.1109/tsp.2017.2664054
[4] a. h. hussein, h. s. fouda, h. h. abdullah, a. a. m. khalaf, a highly efficient spectrum sensing approach based on antenna arrays beamforming, ieee access, vol. 8, 2020, pp. 25184-25197. doi: 10.1109/access.2020.2969778
[5] j. wang, j. chen, d. cabric, cramer-rao bounds for joint rss/doa-based primary-user localization in cognitive radio networks, ieee transactions on wireless communications, vol. 12, no. 3, march 2013, pp. 1363-1375. doi: 10.1109/twc.2013.012513.120966
[6] h. s. fouda, a. h. hussein, m. a.
attia, efficient glrt/doa spectrum sensing algorithm for single primary user detection in cognitive radio systems, international journal of electronics and communications, 2018. doi: 10.1016/j.aeue.2018.03.012 [7] s. elaraby, h. y. soliman, h. m. abdel-atty, m. a. mohamed, joint 2d-doa and carrier frequency estimation technique using nonlinear kalman filters for cognitive radio, ieee access, vol. 5, 2017, pp. 25097-25109. doi: 10.1109/access.2017.2768221 a. salman, i. m. qureshi, s. saleem, s. saeed, b. r. alyaei, novel sensing and joint beam and null steering-based resource allocation for cross-tier interference mitigation in cognitive femtocell networks, wireless networks, vol. 24, no. 6, february 2017, pp. 2205-2219. doi: 10.1007/s11276-017-1465-6 [8] j. werner, j. wang, a. hakkarainen, d. cabric, m. valkama, performance and cramer-rao bounds for doa/rss estimation and transmitter localization using sectorized antennas, ieee transactions on vehicular technology, vol. 65, no. 5, may 2016, pp. 3255-3270. doi: 10.1109/tvt.2015.2445317 [9] a. lay-ekuakille, p. vergallo, d. saracino, a. trotta, optimizing and post processing of a smart beamformer for obstacle retrieval, ieee sensors journal, vol. 12, no. 5, 2012, pp. 1294-1299. doi: 10.1109/jsen.2011.2169782 [10] m. a. hussain ansari, c. l. law, grating lobe suppression of multicycle ir-uwb collaborative radar sensor in wireless sensor network system, ieee sensors letters, vol. 4, no. 1, january 2020, art no. 7000404, pp. 1-4. doi: 10.1109/lsens.2020.2964588 [11] w. lu, b. deng, q. fang, x. wen, s. peng, intelligent reflecting surface-enhanced target detection in mimo radar, ieee sensors letters, vol. 5, no. 2, february 2021, art no. 7000304, pp. 1-4. doi: 10.1109/lsens.2021.3052753 [12] s. surekha, m. z. ur rahman, a. lay-ekuakille, a. pietrosanto, m. a.
ugwiri, energy detection for spectrum sensing in medical telemetry networks using modified nlms algorithm, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 2020, pp. 1-5. doi: 10.1109/i2mtc43012.2020.9129107
[13] wentao ma, ning li, yuanhao li, jiandong duan, badong chen, sparse normalized least mean absolute deviation algorithm based on unbiasedness criterion for system identification with noisy input, ieee access, vol. 4, 2016, pp. 1-9. doi: 10.1109/access.2018.2800278
[14] s. m. jung, p. park, stabilization of a bias-compensated normalized least-mean-square algorithm for noisy inputs, ieee transactions on signal processing, vol. 65, no. 11, 1 june 2017, pp. 2949-2961. doi: 10.1109/tsp.2017.2675865
[15] m. o. bin saeed, a. zerguine, an incremental variable step-size lms algorithm for adaptive networks, ieee transactions on circuits and systems ii: express briefs, vol. 67, no. 10, pp. 2264-2268, october 2020. doi: 10.1109/tcsii.2019.2953199
[16] v. c. ramasami, spatial adaptive interference rejection: lms and smi algorithms, report, university of kansas, april 2001.
[17] v. a. kumar, g. v. s. karthik, a low complex adaptive algorithm for antenna beam steering, 2011 international conference on signal processing, communication, computing and networking technologies, july 2011, pp. 317-321. doi: 10.1109/icsccn.2011.6024567
[18] m. l. m. lakshmi, k. rajkamal, s. v. a. v. prasad, amplitude only linear array synthesis with desired nulls using evolutionary computing technique, aces journal, vol. 31, no. 11, november 2016, pp. 1357-1361. doi: 10.1109/wispnet.2017.8299890
[19] p. k. mvemba, a. lay-ekuakille, s. kidiamboko, an embedded beamformer for a pid-based trajectory sensing for an autonomous vehicle, metrology and measurement systems, vol. 25, no. 3, 2018, pp. 561-575. doi: 10.24425/123891
[20] k. aravind rao, k.
sai raj, rohan kumar jain, implementation of adaptive beam steering for phased array antennas using enlms algorithm, journal of critical reviews, vol. 7, no. 9, 2020. doi: 10.31838/jcr.07.09.10

dose reduction potential in dual-energy subtraction chest radiography based on the relationship between spatial-resolution property and segmentation accuracy of the tumor area

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-8
acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1

shu onodera1, yongbum lee2, tomoyoshi kawabata1
1 department of radiology division of medical technology, tohoku university hospital, sendai, miyagi, japan
2 graduate school of health sciences, niigata university, niigata, japan

section: research paper
keywords: chest radiography; x-ray; u-net; deep learning; flat panel detector
citation: shu onodera, yongbum lee, tomoyoshi kawabata, dose reduction potential in dual-energy subtraction chest radiography based on the relationship
between spatial-resolution property and segmentation accuracy of the tumor area, acta imeko, vol. 11, no. 2, article 28, june 2022, identifier: imeko-acta-11 (2022)-02-28
section editor: francesco lamonaca, university of calabria, italy
received september 30, 2021; in final form february 23, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: shu onodera, e-mail: onodera@rad.hosp.tohoku.ac.jp

1. introduction
chest radiography is the most basic diagnostic imaging procedure for lung diseases. however, the number of examinations performed is enormous, including standing-position imaging, which is usually performed during medical examinations, and bedside imaging for critically ill patients [1]. compared with computed tomography (ct) examination, which provides three-dimensional information, the amount of exposure in radiography is very low (ct: 10 msv, chest radiography: 0.1 msv) [2]-[5], and its importance in terms of convenience of examination cannot be overstated. usually, a high voltage of approximately 120 kv is applied during chest radiography to emphasize the contrast of the lung field rather than that of the ribs [6]. nevertheless, the shadows of the ribs remain on the image, making it difficult to detect the shadow of soft tissue that overlaps that of the ribs. to address this problem, an energy subtraction process [7] has been developed in which two types of image data with different radiographic energy characteristics are obtained with a single exposure, the bone shadows are removed through weighted subtraction of the respective images, and the image of the soft tissue alone is extracted (hereinafter referred to as soft tissue imaging).
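The weighted subtraction at the heart of soft tissue imaging can be illustrated with a short sketch. This is a generic log-domain formulation with an arbitrary weight `w` and illustrative function names; it is not the vendor's actual processing chain:

```python
import numpy as np

def soft_tissue_image(low_kv, high_kv, w=0.6):
    """Schematic dual-energy bone cancellation.

    Subtracting a weighted low-energy image from the high-energy image
    in the log domain suppresses the bone signal; `w` would be chosen so
    that bone contrast cancels (0.6 here is an arbitrary illustration).
    """
    # clip to avoid log(0) on dead pixels, then work in the log domain
    log_low = np.log(np.clip(low_kv, 1, None).astype(float))
    log_high = np.log(np.clip(high_kv, 1, None).astype(float))
    return log_high - w * log_low
```

With simulated log-domain soft-tissue and bone contributions in which the bone term of the high-energy image is `w` times that of the low-energy image, the bone term cancels exactly and only soft-tissue contrast remains.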
once bone shadows are removed, it becomes easier to detect tumours in soft tissue images.

abstract: we investigated the relationship between the spatial-resolution property of soft tissue images and the lesion detection ability using u-net. we aimed to explore the possibility of dose reduction during energy subtraction chest radiography. the correlation between the spatial-resolution property of each dose image and the segmentation accuracy of the tumor area in the four regions where the tumor was placed was evaluated using linear regression analysis. the spatial-resolution property was determined by task-based evaluation, and the task-based modulation transfer function (ttf) was computed as its index. ttfs of the reference dose image and the 75 % dose image showed almost the same frequency characteristics regardless of the location of the tumor, and the dice coefficient was also high. when the tumor was located in the right supraclavicular region and under 50 % dose, the frequency characteristics were significantly reduced, and the dice coefficient was also low. our results showed a close relationship between the spatial-resolution property and the segmentation accuracy of the tumor area using deep learning in dual-energy subtraction chest radiography. in conclusion, a dose reduction of approximately 25 % compared to the conventional method can be achieved. the limitations are the shape of the simulated mass and the use of a chest phantom.

however, because the image quality in chest radiography considerably varies for different parts due to factors such as the amount of radiation reaching the detector and the amount of scatter radiation, detectability may differ depending on the location of the tumour. in particular, the scatter radiation generated from the clavicle and scapula is believed to significantly affect the upper lobe of the lung, which is a common site of adenocarcinoma [8]. the spatial-resolution property of the image is also a very important factor in tumour detection [9]. the spatial-resolution property is a measure of the sharpness of an image and is an important characteristic that determines the detectability of lesions in x-ray images. chest radiography images are generally subjected to several image-processing techniques to improve image quality, and these processing tools lead to a nonlinear behaviour that depends on image quality, which is different for different parts [10]. thus, the quality of the soft tissue image also shows nonlinear behaviour, and task-based evaluation in a measurement environment that reflects clinical conditions is necessary to determine the spatial-resolution property. the use of computer-aided diagnosis (cad) in diagnostic imaging has increased [11], [12]. earlier, image interpretation was performed by radiologists, based on their cultivated experience. however, with the recent increase in the use of radiography and the consequent increase in the number of images, cad was introduced to reduce the burden on radiologists. cad based on deep learning has been attracting attention recently, and it may soon be possible to detect a tumour even in a low-dose image with high noise. in previous reports on energy subtraction-treated chest radiographs, visual evaluations of images acquired by computed radiography (cr) systems have been reported [13]-[17]. the purpose of this study was to investigate the relationship between the spatial-resolution property of soft tissue images obtained by the flat panel detector (fpd) system and the lesion detection ability based on deep learning and to explore the possibility of dose reduction during energy subtraction chest radiography.

2. materials and methods
2.1.
image acquisition
an acrylic cylindrical simulated tumour with a diameter of 20 mm and a thickness of 3 mm was placed in four regions on the chest phantom (right supraclavicular, left middle lung, right lower lung, and mediastinum), as shown in figure 1. bone structures such as the clavicle, shoulder blade, and ribs, as well as soft tissues such as the mediastinum and pulmonary vascularity, are present in the chest phantom. the single-exposure dual-energy subtraction system calneo dual [18], [19] (fujifilm medical co., ltd., tokyo, japan; pixel size 0.15 mm) was used in this study. the fpd implemented in this system consists of two stacked scintillators. normal-energy images were collected in the first layer (cesium iodide scintillator), and the second layer (gadolinium sulfide scintillator) collected high-energy images transmitted through the first layer. table 1 lists the imaging conditions used. the source-image distance was fixed at 180 cm, the field size was 43.2 cm, the image depth was 12 bits, and the tube voltage was 115 kv. three types of imaging doses were used: a standard dose of 1.6 ma s, which was then reduced by 25 % to 1.25 ma s and then reduced by 50 % to 0.8 ma s; 100 images of each type were acquired.

2.2. calculation of the spatial-resolution property
chest radiography images are generally subjected to several image-processing techniques to improve image quality; frequency processing and dynamic-range compression are typical examples. however, these processing tools lead to a nonlinear behaviour that depends on image quality, which is different for different parts. therefore, in this study, the spatial-resolution property was determined by task-based evaluation, and the task-based modulation transfer function (ttf) was computed as its index [20]. the ttf calculation process is shown in figure 2.
the edge spread function (esf) for the cylinder was obtained by averaging the profiles that cross the edge of the cylinder, measured from the centre in the radial direction. next, the ttf was calculated using the fourier transform of the line spread function obtained by differentiating the esf. one of the factors to be considered when determining the ttf of a soft tissue image is the signal-to-noise ratio (snr) of the image. because images with a low snr create large errors in the calculation results, in this study, the image without acrylic was subtracted from the image with acrylic, and an image with a high snr was created through the additive average of 100 such images, which was then used to calculate the ttf (figure 3).

figure 1. phantom image and placement of acrylic cylinder.
figure 2. ttf calculation process.

table 1. imaging conditions.
source-image distance: 180 cm
field size: 43.2 cm
tube voltage: 115 kv
image depth: 12 bit
dose: 1.6 ma s (reference), 1.2 ma s (25 % down), 0.8 ma s (50 % down)

2.3. building the deep learning environment
cad using deep learning has been an area of active research in recent years and has a wide range of applications in medical imaging, such as lesion detection, area extraction, and image noise reduction. image segmentation refers to the process of dividing an image into regions corresponding to each object. because the target areas in medical imaging are organs or lesions, the positional information must be specified in the original input image at the time of segmentation. u-net [21] is a typical example of a deep convolutional neural network for image segmentation. the present study deals with the detection of lung tumours using u-net in soft tissue images. figure 4 shows the structure of the u-net used in this study.
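The ESF-to-TTF chain described above (average edge profiles, differentiate to get the LSF, Fourier-transform and normalise) can be sketched in a few lines. The function name, the Hanning taper, and the synthetic edge below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ttf_from_esf(esf, pixel_pitch_mm=0.15):
    """Task-based MTF from an averaged edge-spread function (ESF).

    `esf` is a 1-D array: the mean of many radial profiles crossing the
    acrylic cylinder's edge. 0.15 mm matches the detector pixel size
    stated in the paper; the windowing step is a common refinement, not
    necessarily part of the original processing.
    """
    # LSF: numerical derivative of the ESF
    lsf = np.gradient(esf)
    # taper to suppress truncation artefacts in the FFT
    lsf = lsf * np.hanning(lsf.size)
    # TTF: modulus of the Fourier transform, normalised to 1 at f = 0
    spectrum = np.abs(np.fft.rfft(lsf))
    ttf = spectrum / spectrum[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, ttf
```

Applied to a smooth synthetic edge, the result is 1 at zero frequency and falls off with increasing spatial frequency, as expected of an MTF-like curve.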
the usage environment of u-net in this research is as follows: os: windows 10; framework: python 3.7, tensorflow, keras; cpu: core i7-10750h; memory: 16 gb. the relu and sigmoid functions were used as activation functions, cross entropy as the loss function, and adam as the learning optimization algorithm.

2.4. data set for deep learning
the acquired soft tissue image (window width: 8500, window level: 8100, 14 bits) was cropped into 128 × 128 pixels centred around the tumour and converted to png format (window width: 255, window level: 128, 8 bits). fifty standard-dose images were input to u-net as training data and 50 reduced-dose images as evaluation data, and learning was conducted by setting the number of epochs to 30. the teaching data for the soft tissue images containing the tumour were created by binarizing the image into the tumour area and other areas (figure 5).

figure 3. creating the ttf calculation image.
figure 4. the structure of the u-net in this study.

2.5. evaluation of the segmented tumour area
the dice coefficient [22] was calculated as the degree of similarity between the output image and the teacher image to evaluate the extraction accuracy of the tumour region using u-net. the dice coefficient is defined by the following formula:

dice(a, b) = 2 |a ∩ b| / (|a| + |b|) . (1)

here, a denotes the tumour region in the teaching image (region with a digital value of 255), and b denotes the tumour region in the output image (region with a digital value of 255).

2.6. evaluation of the correlation between spatial-resolution property and segmentation accuracy of tumor area
in this study, the correlation between the spatial-resolution property of each dose image and the segmentation accuracy of the tumour area in the four regions where the tumour was placed was evaluated using linear regression analysis.
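Equation (1) can be evaluated directly on the binarized teaching and output masks. A minimal NumPy sketch (the function and array names are illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(teacher, output):
    """Dice similarity of equation (1) on two binarized masks.

    `teacher` and `output` are 2-D uint8 arrays in which the tumour
    region has the digital value 255, as in the paper's teaching and
    u-net output images.
    """
    a = teacher == 255
    b = output == 255
    intersection = np.logical_and(a, b).sum()
    # 2|A ∩ B| / (|A| + |B|)
    return 2.0 * intersection / (a.sum() + b.sum())
```

A mask compared with itself gives 1.0; a 9-pixel region fully contained in a 16-pixel region gives 2·9/(16+9) = 0.72.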
a scatter plot was created by treating the ttf and dice coefficient of each dose image from 0.2 to 1.2 cycle/mm (intervals of 0.2 cycle/mm) as variables. the dice coefficient for the reference dose image was set to 1. 3. results figure 6 shows the ttf results for each condition. no difference was observed between the ttfs of the reference dose image and the 75 % dose image in the supraclavicular region, where the contrast was low due to the influence of scattered radiation from the thoracic spine, clavicle, and scapula; however, the similarity decreased significantly in the ttf of 50 % dose images. in contrast, in the middle and lower lung regions where the effect of scattered radiation was small and the contrast was high compared to the supraclavicular region, the ttfs were generally high and the difference in values between doses was small. in the mediastinum, ttfs were low as in the supraclavicular region because of the low contrast due to the scattered radiation from the heart and sternum, but the decrease was not as high as that in the supraclavicular region; in comparison, the ttf in the supraclavicular region was the lowest among all the other regions. table 2 shows the average values of the dice coefficients of the 50 datasets for each condition. the dice coefficient between the segmented tumour area and the teaching data in the 75 % dose image showed a generally high value of approximately 0.96, regardless of the location of the tumour. furthermore, the ttf of the 75 % dose image showed a value similar to that of the reference dose image regardless of the location of the tumour. in contrast, the dice coefficient in the 50 % dose image was as low as 0.937 when the tumour was located in the supraclavicular region. likewise, the ttf of the 50 % dose image in which the tumour was located in the supraclavicular region showed a lower value compared to the reference dose image. figure 7 shows the actual tumour area segmented by u-net. 
in the 75 % dose condition, the segmented images were highly similar to the teaching data, regardless of the location of the tumour. however, in the 50 % dose condition and when the tumour was located in the right supraclavicular region, the segmented region was slightly larger compared with the teaching data. figure 8 to figure 11 show the correlation between ttf and dice coefficient in soft tissue images. a positive correlation was observed between the ttf and dice coefficient of every dose image at all spatial frequencies in the right supraclavicular and the right lower lung regions and between the frequencies of 0.2 to 0.8 cycle/mm in the mediastinum section. in contrast, no correlation was observed between the ttf and dice coefficients at any of the spatial frequencies in the left middle lung region.

figure 5. creation of teaching data.
figure 6. ttfs for each condition.

table 2. dice coefficients for each of the conditions.
ma s    right supraclavicular   left middle lung   right lower lung   mediastinum
1.25    0.960                   0.960              0.959              0.971
0.8     0.937                   0.969              0.963              0.967

4. discussion
in the case of the simulated tumour located in the right supraclavicular region and under 50 % dose, both the ttf and dice coefficients showed significantly low values. one reason for this could be that the contrast of the tumour was reduced by scattered radiation mainly from the clavicle and scapula due to the complicated bone structure of the supraclavicular region. a second reason could be that the tumour area could not be segmented accurately because of increased image noise, since the amount of radiation reaching the detector was smaller than that reaching other parts. in the case in which the simulated tumour was located in the mediastinum region, the value of ttf was not very different from that when the tumour was in the middle and lower lung regions, and the dice coefficient also showed a similar value. in the mediastinum region, the amount of radiation reaching the detector was less than that of the middle and lower lung regions, and the amount of scattered radiation from the sternum and heart was also large. therefore, under the 50 % dose condition, the ttf and dice coefficients were expected to be as low as in the right supraclavicular region. however, as the tumour in this area had fewer pulmonary blood vessels around it than other sites, the structure was relatively simple and the tumour area could be segmented accurately (figure 12). the results of this study show that there is a high degree of correlation between the spatial resolution of the soft tissue image and the segmentation accuracy of the tumour area using deep learning in the supraclavicular, lower lung, and mediastinum regions. in the 75 % dose images, the ttf was high regardless of the tumour location, and the dice coefficient was also high. in contrast, in the 50 % dose images, when the tumour was present in the supraclavicular region, the ttf was significantly reduced, and the dice coefficient was also low.

figure 7. mass region segmentation using u-net for each of the conditions.
figure 8. correlation between ttf and dice coefficient (right supraclavicular).
figure 9. correlation between ttf and dice coefficient (left middle lung).
(legend for figure 8 to figure 11: ● 50 % dose, ▲ 75 % dose, ■ reference)
in other words, if the radiation dose is reduced to 50 % of the conventional radiation condition, tumours that develop in the supraclavicular region may not be segmented accurately due to a decrease in ttf. no correlation was confirmed between ttf and the dice coefficient in the middle lung area. this could be because there was no difference in the ttfs of the dose images between 0.2 and 0.6 cycle/mm, and between 0.8 and 1.2 cycle/mm there was no difference in the ttfs of the 75 % dose image and the reference dose image. among the four sites examined in this study, the highest amount of radiation reached the detector from the middle lung area, and the amount of scattered radiation from the surroundings was also small. therefore, no correlation could be confirmed between the ttf and the dice coefficient in the middle lung region, and the dice coefficients of all dose images showed a high value of approximately 0.96. the discussion above suggests that, with single-exposure dual-energy subtraction chest radiography by the fpd system, it may be possible to reduce the dose by approximately 25 % compared to the conventional method. figure 13 shows the detective quantum efficiency (dqe) under the rqa9 radiation quality for the cr system, which was manufactured by the same company as the calneo dual system used in this study [23], [24]. because lung tumors are the targets of this study, we focused on the value at the spatial frequency of 1 cycle/mm [25]. the dqe value at 1 cycle/mm is approximately 0.5 for the calneo dual and about 0.2 for the cr system, so the detective quantum efficiency of the calneo dual is approximately 2.5 times higher. a system with an excellent dqe has a high degree of freedom in adjusting the balance between sharpness and graininess through image processing [26]. therefore, selecting parameters with good spatial-resolution properties for multifrequency processing in single-exposure dual-energy subtraction chest radiography using fpd could lead to further dose reduction.

figure 10. correlation between ttf and dice coefficient (right lower lung).
figure 11. correlation between ttf and dice coefficient (mediastinum).
figure 12. pulmonary vessels around the mass in the mediastinum.
figure 13. dqe for calneo dual and cr system with rqa9 spectra.

as a limitation of this study, we would first mention the structure of the simulated tumors. in this study, an acrylic material with a simple cylindrical structure was used as the simulated tumor. actual lesions with increased malignancy, such as spiculated lesions, often have more complex structures, and in such cases, the results may differ. moreover, in this study, all measurements from beginning to end were performed using a phantom, and the effects of heartbeat, which is a major problem in actual clinical practice [27], have not been considered. however, to reduce the effects of heartbeat, the imaging time was shortened to the largest extent possible, and measurements were performed in a very short interval of approximately 10 ms, as is done in clinical practice; hence, we hope that the results will not be greatly affected.

5. conclusions
in this study, we clarified the relationship between the spatial resolution of single-exposure dual-energy subtraction chest radiography using the fpd system and the segmentation accuracy of the tumour area using deep learning.
the ttfs of the reference dose image and the 75 % dose image showed almost the same frequency characteristics regardless of the location of the tumour, and the dice coefficient also showed a high value. when the tumour was located in the right supraclavicular region and under 50 % dose, the frequency characteristics were significantly reduced, and the dice coefficient was also low. therefore, a close relationship between the spatial-resolution property and the segmentation accuracy of the tumour area was confirmed using deep learning in single-exposure dual-energy subtraction chest radiography using the fpd system, and it may be possible to achieve a dose reduction of approximately 25 % compared to the conventional method.

references
[1] unscear, medical radiation exposures, sources and effects of ionizing radiation, unscear 2008 report, new york: united nations; 2010, annex a.
[2] l. j. m. kroft, l. van der velden, i. h. girón, j. j. h. roelofs, a. de roos, j. geleijns, added value of ultra-low-dose computed tomography, dose equivalent to chest x-ray radiography, for diagnosing chest pathology, j. thorac imaging, 34 (2019), pp. 179-186. doi: 10.1097/rti.0000000000000404
[3] r. ward, w. d. carrol, p. cunningham, s. a. ho, m. jones, w. lenney, d. thompson, f. j. gilchrist, radiation dose from common radiological investigations and cumulative exposure in children with cystic fibrosis: an observational study from a single uk centre, bmj open, 7 (2017), pp. 1-5. doi: 10.1136/bmjopen-2017-017548
[4] s. singh, m. k. kalra, r. d. ali khawaja, a. padle, s. pourjabbar, d. lira, j. a. o. shepard, s. r. digumarthy, radiation dose optimization and thoracic computed tomography, radiol. clin. north am., 52 (2014), pp. 1-15. doi: 10.1016/j.rcl.2013.08.004
[5] international commission on radiological protection, the 2007 recommendations of the international commission on radiological protection, icrp publication 103, ann. icrp 37 (2-4).
[6] o. w. hamer, c. b. sirlin, m.
strotzer, i. borisch, n. zorger, s. feuerbach, m. volk, chest radiography with a flat-panel detector: image quality with dose reduction after copper filtration, radiology, 237 (2005), pp. 691-700. doi: 10.1148/radiol.2372041738
[7] m. fukao, k. kawamoto, h. matsuzawa, o. honda, t. iwaki, t. doi, optimization of dual-energy subtraction chest radiography by use of a direct-conversion flat-panel detector system, radiol phys technol, 8 (2015), pp. 46-52. doi: 10.1007/s12194-014-0285-y
[8] k. honda, y. matsui, h. imai, regional distribution of lung cancer, haigan, 23 (1983), pp. 11-21.
[9] y. fujimura, h. nishiyama, t. masumoto, s. kono, y. kitagawa, t. ikeda, t. furukawa, t. ishida, investigation of reduction of exposure dose in digital mammography: relationship between exposure dose and image processing, nihon hoshasen gijutsu gakkai zasshi, 64 (2008), pp. 259-267. doi: 10.6009/jjrt.64.259
[10] k. kishimoto, e. ariga, r. ishigaki, m. imai, k. kawamoto, k. kobayashi, m. sawada, k. noto, m. nakamae, r. higashide, study of appropriate dosing in consideration of image quality and patient dose on the digital radiography, nihon hoshasen gijutsu gakkai zasshi, 67 (2011), pp. 1381-1397. doi: 10.6009/jjrt.67.1381
[11] k. doi, current status and future potential of computer-aided diagnosis in medical imaging, br j radiol, 78 (2005), spec no 1, s3-s19. doi: 10.1259/bjr/82933343
[12] h. fujita, present status of mammography cad system, med imaging technol, 1 (2003), pp. 27-33.
[13] s. kido, j. ikezoe, h. naito, j. arisawa, s. tamura, t. kozuka, w. ito, k. shimura, h. kato, clinical evaluation of pulmonary nodules with single-exposure dual-energy subtraction chest radiography with an iterative noise-reduction algorithm, radiology, 194 (1995), pp. 407-412. doi: 10.1148/radiology.194.2.7824718
[14] s. kido, k. kuriyama, n. hosomi, e. inoue, c. kuroda, t.
horai, low-cost soft-copy display accuracy in the detection of pulmonary nodules by single-exposure dual-energy subtraction: comparison with hard-copy viewing, j digit imaging, 2000, pp. 33-37. doi: 10.1007/bf03168338
[15] j. r. wilkie, m. l. giger, m. r. chinander, t. j. vokes, r. m. nishikawa, m. d. carlin, investigation of physical image quality indices of a bone densitometry system, med phys, 31 (2004), pp. 873-881. doi: 10.1118/1.1650528
[16] s. kido, h. nakamura, w. ito, k. shimura, h. kato, computerized detection of pulmonary nodules by single-exposure dual-energy computed radiography of the chest (part 1), eur j radiol, 44 (2002), pp. 198-204. doi: 10.1016/s0720-048x(02)00268-1
[17] s. kido, k. kuriyama, c. kuroda, h. nakamura, w. ito, k. shimura, h. kato, detection of simulated pulmonary nodules by single-exposure dual-energy computed radiography of the chest: effect of a computer-aided diagnosis system (part 2), eur j radiol, 44 (2002), pp. 205-209. doi: 10.1016/s0720-048x(02)00269-3
[18] l. shi, m. lu, n. r. bennett, e. shapiro, j. zhang, r. colbeth, j. s. lack, a. s. wang, characterization and potential applications of a dual-layer flat-panel detector, med phys, 47 (2020), epub 2020 may 18, pp. 3332-3343. doi: 10.1002/mp.14211
[19] m. lu, a. wang, e. shapiro, a. shiroma, j. zhang, j. steiger, j. s. lack, dual energy imaging with a dual-layer flat panel detector, spie med imaging, 10948: physics of medical imaging, san diego, united states, 2019. doi: 10.1117/12.2513499
[20] s. richard, d. b. husarik, g. yadava, s. n. murphy, e. samei, towards task-based assessment of ct performance: system and object mtf across different reconstruction algorithms, med phys, 39 (2012), pp. 4115-4122.
doi: 10.1118/1.4725171
[21] o. ronneberger, p. fischer, t. brox, u-net: convolutional networks for biomedical image segmentation, lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), 2015, vol. 9351, pp. 234-241.
[22] b. sahiner, a. pezeshk, l. m. hadjiiski, x. wang, k. drukker, k. h. cha, r. m. summers, m. l. giger, deep learning in medical imaging and radiation therapy, med phys, 46 (2019), epub 2018 nov 20, e1-e36. doi: 10.1002/mp.13264
[23] iec 62220-1, medical electrical equipment - characteristics of digital x-ray imaging devices - part 1: determination of detective quantum efficiency, international electrotechnical commission, 2003.
[24] iec 62220-1-2, medical electrical equipment - characteristics of digital x-ray imaging devices - part 1-2: determination of detective quantum efficiency - detectors used in mammography, international electrotechnical commission, 2007.
[25] t. yokoi, t. takata, k. ichikawa, investigation of image quality identification utilizing physical image quality measurement in direct- and indirect-type flat panel detectors and computed radiography, nihon hoshasen gijutsu gakkai zasshi, 67 (2011), pp. 1415-1425. doi: 10.6009/jjrt.67.1415
[26] a. r. cowen, a. g. davies, m. u.
sivananthan, the design and imaging characteristics of dynamic, solid-state, flat-panel x-ray image detectors for digital fluoroscopy and fluorography, clin radiol, 63(2008), 1073-1085. doi: 10.1016/j.crad.2008.06.002 [27] european commission, chest. (lungs and heart) pa and lateral projections. european guidelines on quality criteria for diagnostic radiographic image. eur. luxembourg: cec; 1996:12:16260 en. https://doi.org/10.1002/mp.13264 https://doi.org/10.6009/jjrt.67.1415 https://doi.org/10.1016/j.crad.2008.06.002 a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults acta imeko issn: 2221-870x march 2021, volume 10, number 1, 257 264 acta imeko | www.imeko.org march 2021 | volume 10 | number 1 | 257 a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults maria grazia d'urso1, valerio manzari2, barbara marana3 1 department of engineering and applied sciences, university of bergamo, bergamo, italy 2 department of civil and mechanical engineering, university of cassino and southern lazio, cassino, italy 3 department of engineering and applied sciences, university of bergamo, bergamo, italy section: research paper keywords: masonry vault; laser-scanning; thrust network analysis; point cloud; geometric configuration; structural analysis; equilibrium of nodes citation: maria grazia d'urso, valerio manzari, barbara marana, a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults, acta imeko, vol. 10, no. 
1, article 34, march 2021, identifier: imeko-acta-10 (2021)-01-34

editor: ioan tudosa, university of sannio, italy

received january 2, 2021; in final form february 15, 2021; published march 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the italian ministry of education, university and research (miur)

corresponding author: maria grazia d'urso, e-mail: mariagrazia.durso@unibg.it

1. introduction

in this paper, we present a review of the integration of terrestrial laser scanning (tls) point clouds acquired via the most frequently employed survey techniques, as well as an innovative method for studying historical masonry vaults. the study of historical buildings continues to face significant difficulties related to computational effort, the scarcity of input data, and the limited realism of the attendant methods. studies oriented toward the conservation and restoration of historical structures exploit structural analysis as a means of better understanding the genuine structural features of the building, in view of characterising its present condition and the causes of the existing damage, determining the actual structural safety with respect to a variety of factors (e.g. gravity, soil settlements, wind, and earthquakes), and determining the necessary remedial measures [1]-[3]. historical structures are often characterised by a highly complex geometry composed of various straight or curved members, combining curved 1d members (arches, flying arches) with both 2d (vaults, domes) and 3d members (fillings, etc.). in fact, the geometry is one of the most crucial aspects of investigation, given the complex combination of comparatively slender members with far larger members (e.g. massive piers, walls, buttresses, foundations).
as such, the investigation of the geometry is perhaps one of the greatest challenges faced by analysts.

abstract

terrestrial laser-scanning (tls) is well suited to surveying the geometry of monumental complexes, often realised with highly irregular materials and forms. this paper addresses various issues related to the acquisition of point clouds via tls and their elaboration aimed at developing structural models of masonry vaults. this structural system, which exists in numerous artifacts and historical buildings, has the advantages of good static and functional behaviour, reduced weight, good requisites of insulation, and aesthetic quality. specifically, using tls, we create a geometric model of the ancient masonry church s. maria della libera in aquino, largely characterised by naves featuring cross vaults and previously used as a case study in the paper entitled 'terrestrial laser-scanning point-clouds for modeling masonry vaults', presented at the 2019 imeko tc-4 international conference on metrology for archaeology and cultural heritage. the results of the tls survey are used as input for a structural analysis based on the thrust network analysis, a recent methodology for modelling masonry vaults as a discrete network of forces in equilibrium with gravitational loads. it is demonstrated that the proposed approach is both effective and robust, not only in assessing the safety conditions of existing masonry vaults, whose actual geometry significantly influences the safety level, but also in designing new ones.
historical structures may have experienced (and continue to experience) various phenomena of a very different nature, including gravity forces, earthquakes, and environmental effects (thermal effects, chemical or physical attack), as well as various anthropogenic actions such as architectural alterations, intentional destruction, and inadequate restorations. many of these actions also need to be characterised in time: some are cyclic and repetitive (accumulating significant effects in the long term), others develop gradually over extremely long periods of time, and others still are associated with long return periods. in many cases, they may be influenced by historical contingency and uncertain (or at least insufficiently known) historical facts. the existing general alterations may significantly affect the response of the structure to be modelled and, hence, the realism and accuracy of the prediction of the actual performance and capacity. damage encompasses aspects such as mechanical cracking, material decay (due to chemical or physical attack), and a variety of other phenomena affecting the original capacity of the materials and structural members. the history of a building is an essential aspect of it, and this must be taken into account and integrated within the model. the following effects linked to history may have had an impact on both the structural response and the existing damage: the construction process, any architectural alterations and additions, destruction due to conflicts (e.g. wars) or natural disasters (earthquakes, floods, fires), and various long-term decay or damage-inducing phenomena. in fact, the history as a whole constitutes a crucial source of knowledge. in numerous cases, the historical performance of the building can be examined to reach conclusions on its structural performance and strength.
for example, the performance exhibited during past earthquakes can be considered in order to improve the understanding of the antiseismic capacity. in fact, the history of the building constitutes a unique experience occurring on a real scale of space and time. as such, knowledge of the historical performance can compensate for the aforementioned data insufficiency [4]. the most frequently employed survey techniques for capturing the geometry of buildings, also applied in the field of cultural heritage preservation, include both terrestrial and remote photogrammetry as well as tls. approaches based on the acquisition of images aimed at 3d modelling have recently become the subject of significant studies and research in various areas [5]-[7]. tls is a non-contact and non-intrusive technique that allows the shape of objects to be digitally acquired in a rapid and accurate way. the attendant research provides several examples of 3d point clouds acquired for detailed structural digital models, a particularly relevant issue in the case of masonry structures, where geometry plays a crucial role in the comparative degree of safety [8]. nowadays, starting from an accurate 3d model derived from laser-scanning measurements, it is possible to implement a building information modelling (bim) approach that allows the entire project to be managed in a consistent and optimal manner. in fact, laser-scan techniques such as tls, which encompass scan-to-bim among other processes, provide a valuable prerequisite for bim modelling, since they supply geometric and spatial data that can be acquired, organised, and managed to satisfy the required project scale. a single database ensures the quality of the results due to the direct link among the shapes, the information, and the project documentation [9], [10]. specifically, 3d modelling becomes important when these models are adopted for structural engineering purposes [11].
this is the case, for example, for masonry structures and vaults, the structural safety assessment of which involves either meshing them as a collection of finite elements, as in the traditional finite element method (fem), or modelling them as a set of nodes connected by branches, as in the more recent thrust network analysis (tna) approach [8]. given the recent emergence of this approach, we first briefly illustrate its theoretical background and basic assumptions, largely to emphasise its flexibility and how the geometric data required as input naturally match the output of tls surveys. once the theoretical background and the state of the art of the tna method have been illustrated in section 2, the basics of the tna are dealt with in section 3. section 4 describes the case study of the medieval church of s. maria della libera in aquino, for which the safety assessment of the cross vaults has been carried out. the related results are illustrated and commented on in section 5. finally, the conclusions are summarised in section 6.

2. theoretical background

nowadays, fem analysis can be regarded as the most effective numerical technique for structural analysis since, unlike traditional static analysis, it allows for i) providing a 3d model of geometrically complex structures, ii) managing the characteristic parameters of the materials employed in the model, and iii) performing different analyses (linear, non-linear, dynamical, etc.) on the same geometry. however, in the case of historical and monumental masonry structures, fem analysis is not the best option, since it is difficult to ascertain the characteristics of the materials and the effects induced by the interventions previously performed on the structure. an effective alternative to fem is the tna approach, a fairly recent methodology that is briefly described in the sequel [12]-[17].
this method can be regarded as an automated and computerised variant of mery's method, which is used for hand calculations of masonry arches (figure 1). specifically, tna models masonry vaults as a discrete network of branches subjected only to compressive forces in equilibrium with the gravitational loads. originally devised by o'dwyer [18] and later developed by block et al. [8], this method represents one of the first rational approaches to the stability of masonry buildings and stems from the analogy between the equilibrated shape of masonry arches and that of tensile suspended cables. this analogy, known as the 'catenary principle', likens an arch to a long chain that is held at its ends and allowed to hang (figure 2).

figure 1. mery's method: internal forces by means of a funicular polygon.

heyman [19] combined this principle with the limit theorems of plasticity, specifically the static theorem, to evaluate the safety of masonry structures, predicting the ultimate mechanism of arches or 3d framed structures. an extension to domes and vaults was then proposed by o'dwyer [18], fictitiously deconstructing the structure into discrete equilibrated arches, which entails seeking networks of forces inside the structure according to what has been denominated tna. a pictorial description of this idea is given in figure 3, with reference to a cross vault exhibiting a comparatively simple static behaviour: the two diagonal arcs provide the bearing structure that distributes the loads onto the four pillars at the vertexes. the four columns support the four ribs of a barrel vault as a succession of increasingly smaller arcs from the external perimeter towards the centre.
each arch transmits its thrust to the diagonal arcs, which are thus loaded by the combination of the forces they sustain.

3. basics of thrust network analysis

according to the so-called safe theorem of limit analysis, in the form established by heyman for masonry structures, the limit equilibrium of masonry vaults can be assessed by seeking a network of thrusts, i.e. purely compressive forces that are fully contained within the thickness of the vault and hence do not induce the tensile stresses that masonry is incapable of withstanding. as such, the problem of identifying the maximum loads that a masonry vault can safely sustain reduces to identifying a specific set of points, or nodes, within the vault such that the applied loads are in equilibrium with the forces at each point, these being internal to the vault, purely compressive, and directed along the fictitious branches connecting pairs of nodes. the geometric position of these nodes is determined via an optimisation algorithm that enforces the condition whereby the initially unknown coordinates, as well as the branches connecting them, are contained within the vault thickness (figure 4). since the original paper [20] can be referred to for further details, here we detail only the simplest case of vertical loads, supplementing the original formulation with a simple example that will help the reader grasp the details of the procedure (see figure 5 and figure 6). the set of nodes and the related branches defines a specific network, from now on referred to as a thrust network, described by \(n_n\) nodes and \(n_b\) branches that connect specific pairs of nodes. the n-th node of the network is defined by its position \((x_n, y_n, z_n)\) in a 3d cartesian reference system, where z is the vertical direction.
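a minimal sketch of how such a thrust network could be represented in code (container and attribute names are my own assumptions for illustration, not the authors' tna implementation):

```python
from dataclasses import dataclass

@dataclass
class ThrustNetwork:
    """Discrete network of nodes and branches used in TNA.

    xyz      -- coordinates (x, y, z) of each node, with z vertical.
    branches -- pairs (n, m) of node indices connected by a branch.
    internal -- indices of the n_i internal nodes; the remaining
                n_r nodes are restrained (support) nodes.
    """
    xyz: list
    branches: list
    internal: set

    @property
    def n_nodes(self):
        # n_n = n_i + n_r
        return len(self.xyz)

    def incident(self, n):
        """Indices of the branches in the set B_n converging at node n."""
        return [b for b, (i, j) in enumerate(self.branches) if n in (i, j)]
```

a usage example: for a triangle of three nodes and three branches, `incident(0)` returns the two branches meeting at node 0.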
the external force concentrated at each node can be written as

\( f^{(n)} = \big( f_x^{(n)}, f_y^{(n)}, f_z^{(n)} \big) \)  (1)

while the thrust vector of the generic branch is denoted by

\( t^{(b)} = \big( t_x^{(b)}, t_y^{(b)}, t_z^{(b)} \big) . \)  (2)

the set of nodes is split into \(n_i\) internal nodes and \(n_r\) restrained (or external) nodes, at each of which only one external branch converges, so that \(n_n = n_i + n_r\); the external branches thus model the support reactions. the unknowns of the problem are the coordinates of the nodes and the thrusts within each branch. in fact, only the vertical coordinates of the nodes are sought, since the horizontal coordinates are assigned by the designer by projecting the vault onto the plane and defining a regular grid of points. clearly, the choice of the grid spacing entails a trade-off between the number of points and the quality of the analysis. the coordinates \(z_n\) of the nodes and the branch thrusts \(t^{(b)}\) are evaluated by enforcing equilibrium at the internal and external nodes via two different strategies, detailed separately here for the horizontal and vertical directions.

figure 2. the catenary principle.
figure 3. plan and axonometric views of a masonry vault.
figure 4. tna modelling of the vault.
figure 5. equilibrium of nodes.

3.1 horizontal equilibrium of nodes

denoting by \(B_n\) the set of branches converging at node n, the horizontal equilibrium of the n-th node is expressed by

\( \sum_{b \in B_n} t_x^{(b)} + f_x^{(n)} = 0 , \qquad \sum_{b \in B_n} t_y^{(b)} + f_y^{(n)} = 0 \)  (3)

in terms of the horizontal components \( f_x^{(n)}, f_y^{(n)} \) of the external loads and the thrust components \( t_x^{(b)}, t_y^{(b)} \) relative to the branches b connected to the node.
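the horizontal equilibrium condition just stated can be checked numerically once each branch thrust is resolved along the branch direction towards node n. a minimal sketch (function and variable names are my own, and the sign convention of compressive thrust pointing towards node n is assumed; this is not the authors' code):

```python
import math

def horizontal_residual(n, xyz, branches, t_h, f):
    """Residual of the horizontal equilibrium, eq. (3), at node n.

    xyz      -- node coordinates (x, y, z); branches -- (i, j) index pairs;
    t_h[b]   -- horizontal thrust magnitude in branch b (compression > 0);
    f[n]     -- external load (fx, fy, fz) at node n.
    Each branch contributes t_h[b] * (x_n - x_m) / l_h along x (and
    analogously along y), with m the far node of the branch.
    """
    rx, ry = f[n][0], f[n][1]
    for b, (i, j) in enumerate(branches):
        if n not in (i, j):
            continue
        m = j if i == n else i                      # far node m(b)
        lh = math.hypot(xyz[n][0] - xyz[m][0], xyz[n][1] - xyz[m][1])
        rx += t_h[b] * (xyz[n][0] - xyz[m][0]) / lh
        ry += t_h[b] * (xyz[n][1] - xyz[m][1]) / lh
    return rx, ry
```

for a node placed symmetrically between two opposite branches carrying equal horizontal thrusts and no horizontal load, the residual vanishes, as eq. (3) requires.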
indicating by n and m(b) the indices of the nodes connected by the generic branch \( b \in B_n \), we denote by

\( t_h^{(b)} = \sqrt{ t_x^{(b)\,2} + t_y^{(b)\,2} } \)  (4)

the horizontal component of the thrust, and by

\( l_h^{(b)} = \sqrt{ \big(x_n - x_{m(b)}\big)^2 + \big(y_n - y_{m(b)}\big)^2 } \)  (5)

the length of the generic branch projected onto the horizontal plane (see figure 5). hence, we can evaluate

\( \dfrac{t_x^{(b)}}{t_h^{(b)}} = \dfrac{x_n - x_{m(b)}}{l_h^{(b)}} \)  (6)

and

\( \dfrac{t_y^{(b)}}{t_h^{(b)}} = \dfrac{y_n - y_{m(b)}}{l_h^{(b)}} \)  (7)

which can be incorporated into eq. (3) to get

\( \sum_{b \in B_n} \dfrac{x_n - x_{m(b)}}{l_h^{(b)}} \, t_h^{(b)} + f_x^{(n)} = 0 \)  (8)

\( \sum_{b \in B_n} \dfrac{y_n - y_{m(b)}}{l_h^{(b)}} \, t_h^{(b)} + f_y^{(n)} = 0 . \)  (9)

there is a large number of networks that are in equilibrium with a given set of external forces and, at the same time, are contained within the vault thickness. to address all of them in a comprehensive way, it is important to express the horizontal components of thrust \( t_h^{(b)} \) as the product of a factor \( \lambda \) and a reference thrust value \( \hat t_h^{(b)} \), both of which are left unspecified at the moment. accordingly, for the generic b-th branch, we can set \( t_h^{(b)} = \lambda \hat t_h^{(b)} = \frac{1}{r} \hat t_h^{(b)} \), and eqs. (8)-(9) thus become

\( \sum_{b \in B_n} \left[ \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} x_n - \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} x_{m(b)} \right] + r f_x^{(n)} = 0 \)  (10)

\( \sum_{b \in B_n} \left[ \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} y_n - \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} y_{m(b)} \right] + r f_y^{(n)} = 0 \)  (11)

where the ratios \( \hat t_h^{(b)} / l_h^{(b)} \) represent the reference thrust densities of the network branches.

3.2 vertical equilibrium of nodes

recalling that \( t_z^{(b)} \) and \( f_z^{(n)} \) are, respectively, the vertical component of the b-th branch thrust converging at node n and the nodal load, the vertical equilibrium of a generic node can be written as

\( \sum_{b \in B_n} t_z^{(b)} + f_z^{(n)} = 0 . \)  (12)

now, recalling that, if compressive, the thrust \( t^{(b)} \) is oriented towards node n, we have

\( t_z^{(b)} = \dfrac{t_h^{(b)}}{l_h^{(b)}} \big(z_n - z_{m(b)}\big) = \lambda \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} \big(z_n - z_{m(b)}\big) = \dfrac{1}{r} \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} \big(z_n - z_{m(b)}\big) \)  (13)

where \( t_h^{(b)} = \lambda \hat t_h^{(b)} = \frac{1}{r} \hat t_h^{(b)} \) has been used. accordingly, eq.
(13) can be rewritten as

\( \sum_{b \in B_n} \dfrac{z_n - z_{m(b)}}{l_h^{(b)}} \, \hat t_h^{(b)} + r f_z^{(n)} = 0 \)  (14)

or, equivalently, as

\( \sum_{b \in B_n} \left[ \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} z_n - \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} z_{m(b)} \right] + r f_z^{(n)} = 0 . \)  (15)

the previous condition is used to evaluate the unknown nodal heights \( z_n \), the coefficients of which are expressed by means of the reference thrust densities. the physical meaning of the parameter \( \lambda = \frac{1}{r} \) is exemplified by the three-hinged arch shown in figure 6, for which equilibrium needs to be written only for the central node.

figure 6. illustrative example of the vertical equilibrium of a node.

given that \( z_{m(b)} = 0 \), eq. (15) can be simplified to \( \frac{\hat t_h^{(b)}}{l_h^{(b)}} z_n + r f_z^{(n)} = 0 \) or, equivalently, given that \( f_z^{(n)} < 0 \) since it is directed downwards, to

\( \big| f_z^{(n)} \big| \, r = \dfrac{\hat t_h^{(b)}}{l_h^{(b)}} \, z_n . \)  (16)

given that \( \hat t_h^{(b)} \) and \( l_h^{(b)} \) are both positive, it can be inferred that a greater value of r is associated with a greater value of \( z_n \) and vice versa. this, in turn, implies that we are seeking networks with the uppermost vertical coordinates of all nodes and, hence, the minimum thrust.

4. case study

the present case study concerns the laser-scanning survey of the santa maria della libera church in the municipality of aquino and the attendant structural analysis for the static verification of the church's cross vaults [20]. the church, which dates from 1000-1100 a.d. and is characterised by a pure romanic-benedictine style, was built with the typical 'local soft' travertine, fragmentary material from the remains of the roman buildings surrounding the area where it was erected. the amazing and austere interior, with dimensions of 17 × 38 m, consists of three aisles divided by square pillars, with three semi-circular apses and an imposing triumphal arch, also resting on pillars culminating in fragments of roman cornice that act as capitals, which leads to the transept (figure 7, figure 8).
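before turning to the survey itself, note that the vertical equilibrium conditions of eq. (15), one per internal node, form a linear system in the unknown heights z_n once r and the reference thrust densities are fixed. a minimal sketch of that solve (my own illustrative implementation, with support heights prescribed directly instead of through external branches; not the authors' tna code):

```python
import numpy as np

def solve_heights(n_nodes, branches, q, internal, z_fixed, f_z, r):
    """Solve eq. (15): for every internal node n,
       sum_b q[b] * (z_n - z_m(b)) + r * f_z[n] = 0,
    where q[b] = t_hat_h^(b) / l_h^(b) is the reference thrust density
    of branch b. Restrained nodes get their height from z_fixed.
    """
    A = np.zeros((n_nodes, n_nodes))
    rhs = np.zeros(n_nodes)
    for b, (i, j) in enumerate(branches):
        # branch b contributes q[b]*(z_i - z_j) at node i, the opposite at j
        A[i, i] += q[b]; A[i, j] -= q[b]
        A[j, j] += q[b]; A[j, i] -= q[b]
    for n in range(n_nodes):
        if n in internal:
            rhs[n] = -r * f_z[n]
        else:                      # restrained node: enforce z_n = z_fixed[n]
            A[n, :] = 0.0
            A[n, n] = 1.0
            rhs[n] = z_fixed[n]
    return np.linalg.solve(A, rhs)
```

for the three-hinged arch of figure 6 (two branches of unit thrust density, supports at height zero, central load f_z = -2 and r = 1), this yields a central node height of 1.0; doubling r doubles the height, mirroring the discussion of eq. (16).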
meanwhile, the main altar, consisting of a roman marble sarcophagus, is placed in the centre; the centre aisle has a wooden roof, while the side aisles feature various cross vaults. the campaign of measurements focused on the interior and exterior of the masonry structure, whose geometry and external projections made the survey particularly complex and cumbersome. in fact, the vaults in the interior aisles of the church have been the subject of various detailed studies [21]-[27]. first, an accurate topographical survey of the historic artifact's site was carried out using the topcon gls-2000 laser scanner station (figure 9). during a four-hour period, 16 scans were performed at different station points in order to obtain an extremely high density of scan points: approximately five million points with coordinates measured with millimetre accuracy [20]. the survey was divided into several phases after careful planning of the campaign and the identification of the station points. the design of the survey included maps from google maps with on-site identification, cloud-capture scans of specific points detected by a laser beam with a 360° horizontal and 270° vertical range of action, scanning alignment in pairs, global alignment, filtering, modelling, and editing in terms of the subroutine that the tna code was connected to (figure 10).

figure 7. image of the central nave.
figure 8. internal plan.
figure 9. image of the lateral left nave with a view of the masonry cross vaults.

point cloud models obtained using a survey method incorporating high-precision laser-scanning instruments have certain limitations from a computational point of view. in fact, it is not possible to directly associate physical behaviour with the model derived from a point cloud in structural software.
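concretely, the raw material behind these processing steps is simply a list of xyz triplets, one per scan point. a minimal sketch of reading such an export for further processing (the file layout, function names, and sanity check are my own assumptions, not the authors' workflow):

```python
def load_xyz(path):
    """Load a list of (x, y, z) tuples from a whitespace-separated text export."""
    points = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 3:          # skip blank or malformed lines
                points.append(tuple(float(v) for v in parts[:3]))
    return points

def bounding_box(points):
    """Axis-aligned bounding box, a first sanity check on a survey export."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

checking the bounding box against the known overall dimensions of the building (here, roughly 17 × 38 m in plan) is a quick way to catch geo-referencing or unit errors before any re-topology step.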
in fact, the transition from a point cloud to a polygonal surface model can be defined as a 're-topology' operation, since both the quantitative and qualitative information of the point model are translated and adapted to obtain a triangular or quadrangular mesh that better represents the polygonal surface. meanwhile, the structural analysis software uses an algorithm that requires the geometric dimensions of the masonry object as input data. the laser-scanning survey assigns a triplet of xyz coordinates to each point, with the coordinates initially relative before becoming absolute through a geo-referencing operation. in order to extract the coordinates of the points, the cloudcompare software offered both the possibility of querying a single point and of selecting a set of n points and exporting the list in .txt format. once the data had been extracted and processed, they were exported to an excel table. in fact, masonry elements are typically non-regular and non-continuous, do not have a homogeneous surface, and are made of masonry ashlars that are not perfectly squared. accordingly, the geometric data obtained from the survey needed to be appropriately smoothed to achieve more regular surfaces, i.e. surfaces with no kinks, unrealistic holes, or superposition patches. in order to resolve this problem and obtain coordinates providing correct geometric data for the structural algorithm, an interpolating function was built. specifically, the following polynomial was chosen to interpolate, in the best way possible, the curve drawn through the coordinates of the surveyed points:

\( y = 0.0236 x^6 + 0.023 x^5 + 0.0677 x^4 - 0.0597 x^3 - 0.3024 x^2 + 0.1172 x + 12.871 . \)  (17)

the regression curve represented a good fit of the points, as demonstrated by the value r² = 0.9974, i.e. very close to one.

5. results

the application of the tna method, illustrated here for a single cross vault of the roof, was extended to the static and structural verification of all the masonry vaults existing in the s. maria della libera church in aquino. each of the church's cross vaults in the aisles has a square base with a side length of 4 m, a height of 2 m, and a thickness of 0.45 m, and is made of soft travertine with a specific weight of 2.72 t/m³ (figure 11). the application of the tna method to a single cross vault provided, as can be seen from figure 12 and figure 13, the minimum and maximum thrust values of each of the 389 branches of the cross vaults, as well as the minimum and maximum heights of each of the 222 nodes of the roof associated with the maximum and minimum thrust values. the maximum node height characterising the deepest limit configuration, i.e. that associated with the minimum thrust, was 2.48 m. conversely, the shallowest limit configuration, associated with the maximum thrust, had a minimum node height of 1.38 m.

figure 10. flowchart of the scan-to-bim process.
figure 11. medium surface of the vault geometry obtained via surveying of the intrados and discrete measurements of the vault thickness.
figure 12. distribution of maximum and minimum thrusts within the vault.
figure 13. thrust distribution in a rib.

6. conclusions

the tls survey carried out on the monumental complex of the church of santa maria della libera in aquino provided the point cloud required to perform structural modelling of the church's masonry vaults using the tna approach. the geometric and geo-referenced 3d model obtained by processing the laser-scanning measurements was built on a coherent geometric basis, which takes into account the methodological complexities of the detected object (figure 14 and figure 15).
the paper demonstrated how the interdisciplinarity between a geometric model, built with the innovative techniques typical of the geomatic-type survey, and a structural model can represent useful support for the structural verification of the safety and conservation of complex structures, such as those typically pertaining to the field of monumental heritage. in future research within this reference context, hbim models will be addressed, as part of a semi-automated method that allows for switching from a point cloud to an advanced 3d model with the capacity to contain all the geometrical and mechanical characteristics of the built object [28]-[34]. moreover, fem analyses, based on recently developed strategies related to masonry modelling [35]-[37], will be investigated in order to assess the outcomes of the tna.

acknowledgments

this work has been carried out under the gamher project: geomatics data acquisition and management for landscape and built heritage in a european perspective, prin: progetti di ricerca di rilevante interesse nazionale - bando 2015, prot. 2015hjls7e. gamher, url: https://site.unibo.it/gamher/en.

references

[1] r. quattrini, f. clementi, a. lucidi, s. giannetti, a. santoni, from tls to fe analysis: points cloud exploitation for structural behaviour definition. the san ciriaco's bell tower, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w15, 27th cipa int. symp. documenting the past for a better future, ávila, spain, 1-5 september 2019, pp. 957-964. doi: 10.5194/isprs-archives-xlii-2-w15-957-2019

[2] t. ramon herrero-tejedor, f. arques soler, s. lopez-cuervo medina, m. r. de la o cabrera, j. l. martìn romero, documenting a cultural landscape using point-cloud 3d models obtained with geomatic integration techniques. the case of the el encín atomic garden, madrid (spain), plos one 15(6), 24 june 2020, e0235169, 16 pp. doi: 10.1371/journal.pone.0235169

[3] p. roca, m.
cervera, g. gariup, l. pela, structural analysis of masonry historical constructions. classical and advanced approaches, archives of computational methods in engineering, springer, cham, 2010, pp. 299-325. doi: 10.1007/s11831-010-9046-1

[4] g. roca, f. lopez-almansa, j. miquel, a. hanganu, limit analysis of reinforced masonry vaults, engineering structures 29 (2007) 3, pp. 431-439. doi: 10.1016/j.engstruc.2006.05.009

[5] g. bitelli, c. balletti, r. brumana, l. barazzetti, m. g. d'urso, f. rinaudo, g. tucci, the gamher research project for metric documentation of cultural heritage: current developments, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w11, proc. of 2019 geores and 2nd international conference of geomatics and restoration, 8-10 may 2019, milan, italy, pp. 239-246. doi: 10.5194/isprs-archives-xlii-2-w11-239-2019

[6] g. bitelli, c. balletti, r. brumana, l. barazzetti, m. g. d'urso, f. rinaudo, g. tucci, metric documentation of cultural heritage: research directions from the italian gamher project, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w5, 2017, pp. 83-89. doi: 10.5194/isprs-archives-xlii-2-w5-83-2017

[7] g. bitelli, g. castellazzi, a. m. d'altri, s. de miranda, a. lambertini, i. selvaggi, automated voxel model from point clouds for structural analysis of cultural heritage, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xli-b5, xxiii isprs congress, prague, czech republic, 12-19 july 2016. doi: 10.5194/isprsarchives-xli-b5-191-2016

[8] a. georgoupolos, ch. ioannidis, 3d visualization by integration of multi-source data for monument geometric recording, in: recording, modeling and visualization of cultural heritage, baltsavias et al. (editors), taylor & francis group, international workshop, ascona, 2005. isbn 0 415 39208 x

[9] c. brito, n. alves, l. magalhães, m.
guevara, bim mixed reality tool for the inspection of heritage building, isprs ann. photogramm. remote sens. spatial inf. sci., iv-2/w6, pp. 25-29. doi: 10.5194/isprs-annals-iv-2-w6-25-2019

[10] s. logothetis, a. delinasiou, e. stylianidis, building information modelling for cultural heritage: a review, isprs annals of the photogrammetry, vol. ii-5/w3, 25th int. cipa symposium, taipei, taiwan, 31 august - 4 september 2015, pp. 177-183. doi: 10.5194/isprsannals-ii-5-w3-177-2015

figure 14. 3d digital model of an external side of the s. maria della libera church.
figure 15. export sections from cloud

[11] a. georgoupolos, d. delikaraoglou, ch. ioannidis, e. lambrou, g. pantazis, using geodetic and laser scanner measurements for measuring and monitoring the structural damage of a post byzantine church, 8th int. symp. on conservation of monuments in the mediterranean basin, monument damage hazards and rehabilitation technologies, patras, greece, 31 may - 2 june 2010.

[12] f. marmo, l. rosati, reformulation and extension of the thrust network analysis, comp. & struct. 182 (2017), pp. 104-118. doi: 10.1016/j.compstruc.2016.11.016

[13] f. marmo, d. masi, l. rosati, thrust network analysis of masonry helical staircases, int. j. of arch. her. 12(5) (2018), pp. 828-848. doi: 10.1080/15583058.2017.1419313

[14] f. marmo, d. masi, d. mase, l. rosati, thrust network analysis of masonry vaults, int. j. of masonry res.
Measurements of Helium Permeation in Zerodur Glass Used for the Realisation of the Quantum Pascal

Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2

Ardita Kurtishaj1, Ibrahim Hameli1, Arber Zeqiraj2, Sefer Avdiaj1
1 Department of Physics, University of Prishtina "Hasan Prishtina", 10000 Prishtinë, Kosovo
2 Department of Materials and Metallurgy, University of Mitrovica "Isa Boletini", 40000 Mitrovicë, Kosovo

Section: Research Paper
Keywords: permeation; helium; diffusion; vacuum; metrology
Citation: Ardita Kurtishaj, Ibrahim Hameli, Arber Zeqiraj, Sefer Avdiaj, Measurements of helium permeation in Zerodur glass used for the realisation of quantum pascal, Acta IMEKO, vol. 11, no. 2, article 27, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-27
Section Editor: Sabrina Grassini, Politecnico di Torino, Italy
Received August 3, 2021; in final form March 1, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Sefer Avdiaj, e-mail: sefer.avdiaj@uni-pr.edu

Abstract
In the new optical pressure standard, ultra-low expansion (ULE) glass cavities were proposed to measure helium refractivity for a new realisation of the unit of pressure, the pascal. However, the use of this type of material causes some difficulties; one of the main problems of ULE glass is its pumping effect for helium. Therefore, Zerodur glass was proposed instead of ULE as the cavity material. This proposal was made by the vacuum metrology team of the Physikalisch-Technische Bundesanstalt (PTB) within the QuantumPascal project. In order to calculate the flow of helium gas through Zerodur glass, one has to know the permeation constant K. Moreover, modelling the time dependence of the flow also requires knowledge of the diffusion constant D. The relation between them is K = S · D, where S is the solubility of helium in glass. In our research work we measured the permeation of helium gas in Zerodur. Measurements were performed in the temperature range 80 °C – 120 °C. Based on our results, we consider that Zerodur has the potential to be used as cavity material for the new quantum standard of pressure.

1. Introduction
Pressure is traditionally defined as force per unit area. Therefore, to realise the unit of pressure, the pascal, the most obvious method is to apply a known force to a known surface. Essentially, this is how pressure has been measured since the 1640s, when Evangelista Torricelli invented the mercury barometer [1]. Nonetheless, since gases in the vacuum range do not exert large forces, it becomes more convenient to formulate the pascal as an amount of energy per unit volume [2]. Consequently, at low pressures the pascal is realised through the ideal gas law, using an optical measurement of the gas density [3]. One method serving this purpose relies on Fabry–Pérot optical cavities to measure the refractivity of the gas being used [4], [5]. Cavities made of ultra-low expansion (ULE) glass were initially proposed to measure helium refractivity for the new realisation of the pascal [1], [4], [6]. However, the use of this material has shown some difficulties, as reported in references [2] and [7]; one of these is the permeation of helium into the cavity material. For this reason, the 18SIB04 QuantumPascal EMPIR project "Towards quantum-based realisations of the pascal" proposed testing Zerodur as a potential cavity material. To estimate whether Zerodur is more suitable than ULE glass, different studies are being made, such as the one reported in reference [8]. This evaluation requires modelling the gas transport dynamics in the material, which in turn requires knowledge of the diffusion and permeability coefficients. Therefore, as collaborators in the above-mentioned project, we studied the permeation of helium into Zerodur. The measurements were performed in the temperature range 27 °C – 120 °C; determined values of the helium permeability of the Zerodur sample are given for the range 80 °C – 120 °C.

2. Materials and Methods
The vacuum system in which the measurements were made consists of two separate volumes (a high-pressure volume V1 and a low-pressure volume V2 + V3, as shown in Figure 1) that share the sample wall. Each of these volumes has its own vacuum pump, vacuum gauge and valves. In this setup, gas diffuses through the thin sample wall directly into the chamber in which the quadrupole mass spectrometer was mounted. The vacuum chamber is made of stainless steel; the valves are KF valves with stainless-steel bodies and elastomer seals, and the pumps used in this work are Pfeiffer turbopumps. With this vacuum system we investigated the diffusion of helium in a Zerodur sample. The sample (a square plate with a thickness of 0.2 cm and an area of 2.27 cm²) was received from the Physikalisch-Technische Bundesanstalt in Berlin (charge number "105080201"). Aluminium joints, ISO-KF flanges and elastomer O-rings were used to mount the sample in the system. The O-rings, placed on both sides of the sample, were used to prevent He permeation around it, while the KF flanges were placed next to the O-rings and then tightened with the aluminium joints. To regulate and control the temperature we used an HTC-5500/5500Pro temperature control unit and heating tapes.
The sample was wrapped several times with heating tapes, which were in turn wrapped with aluminium foil. This insulation provided temperature regulation to about ± 1 °C. Temperature changes were recorded using a thermocouple connected to LabVIEW software. Before running measurements with the investigated sample, a leak test was performed to make sure there was no He leakage through the elastomer parts of the system. During the measurements the following procedure was followed: the high-pressure chamber and the vacuum chamber were evacuated with two turbomolecular pumps. When the vacuum chamber had been pumped down to 10⁻⁴ Pa, helium gas was admitted to the high-pressure side of the mounted sample at a pressure of 1.32·10⁵ Pa and a temperature of 27.1 °C. Data on the He partial pressure and the chemical composition of the gas species in the vacuum system were obtained with Pfeiffer's PrismaPro QMG 250 F1 quadrupole mass spectrometer (QMS). Data recording and analysis were done using the QMS software PV MassSpec. The QMS was factory calibrated; nonetheless, its calibration had expired by the time we conducted the study. To investigate helium diffusion through the Zerodur sample we used two main methods. The first method aimed to determine the permeation, diffusion and solubility coefficients by recording the He partial-pressure increase in the vacuum chamber versus time until a steady state is achieved. After the steady state is reached, the permeated amount of substance versus time is a straight line. The intercept of this straight line with the time axis can be used to calculate the diffusion coefficient D [9]. The permeation coefficient K can be determined from the helium gas flow in the steady state, the known thickness and area of the sample, and the known pressure in the high-pressure chamber, as described in reference [9]. The relation between the diffusion and permeation coefficients is K = S · D, where S is the solubility of He in glass.
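The quantities of this first method have standard closed forms (see Crank [11] and the procedure of reference [9]): for a plane sheet of thickness l, the time-axis intercept τ of the steady-state line gives D = l²/(6 τ); the steady-state flow q gives K = q·l/(A·Δp); and the solubility follows from S = K/D. A minimal sketch of these relations (the time lag and flow below are illustrative assumptions, since no steady state was actually reached in this study; only the sample geometry and feed pressure are taken from the paper):

```python
def diffusion_from_time_lag(thickness, time_lag):
    """Time-lag method for a plane sheet: D = l^2 / (6 * tau)."""
    return thickness ** 2 / (6.0 * time_lag)

def permeation_from_steady_flow(flow, thickness, area, delta_p):
    """Steady-state permeation: K = q * l / (A * delta_p)."""
    return flow * thickness / (area * delta_p)

def solubility(permeation_k, diffusion_d):
    """From K = S * D it follows that S = K / D."""
    return permeation_k / diffusion_d

# Sample geometry (l = 0.2 cm, A = 2.27 cm^2) and feed pressure
# (1.32e5 Pa) are from the paper; the time lag (tau = 5e6 s) and
# flow (q = 3.4e-9) are hypothetical example values.
D = diffusion_from_time_lag(0.2, 5.0e6)
K = permeation_from_steady_flow(3.4e-9, 0.2, 2.27, 1.32e5)
S = solubility(K, D)
```

The resulting units of D, K and S follow whatever unit system the inputs are expressed in; keeping them consistent is left to the user.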
Therefore, with the known diffusion and permeation coefficients, the solubility coefficient can be calculated as well. The second method is a modification of the so-called accumulation method, which is described in detail below.

Following the two methods, our study lasted 70 days. From the first to the twelfth day the sample was kept at a temperature of 27 °C. For the next nineteen days the temperature was raised to 50 °C, for the following fourteen consecutive days it was set to 80 °C, then for the next nine days to 110 °C, the next seven days to 115 °C and the last nine days to 120 °C. To reduce the noise in the recorded signal, the dwell time was changed from 32 ms to 1024 ms on the 49th day of the study. Also, due to the reduction of the He pressure in the high-pressure volume, on the 66th day the pressure was increased from 8.87·10⁴ Pa to 1.31·10⁵ Pa. On the 69th day the He gas was pumped out of the high-pressure volume in order to record the resulting change in the He signal.

Figure 1. Schematic representation of the vacuum system used to study the permeability of helium gas into the Zerodur material.

3. Results and Discussion
Modified accumulation method
With the previously described procedure, the He signal was recorded after its permeation through the Zerodur sample. The graphs presented in Figure 2 display some of the measurements of this signal (representing the partial pressure of He, in Pa) at different times and temperatures during the study. As can be noticed from the graphs in Figure 2, even with the increased dwell time the recorded He signal was still very low. This may be because the He permeability of the Zerodur material is so small that it cannot be resolved with our experimental scheme. Therefore, we also conducted measurements with a modified version of the accumulation method [10].
To record measurements with this method, the valve separating the pump from the vacuum system (valve 3 in Figure 1) is closed. This allows the helium gas that has permeated the sample to accumulate and gives the QMS a longer time to record the signal before the gas is pumped away. While keeping valve 3 closed, the total pressure is monitored with the full-range gauge (gauge 5 in Figure 1); as soon as the pressure reaches 10⁻² Pa, the valve is opened to prevent damage to the QMS filament. This procedure was repeated several times, at different times and temperatures. In parallel, we conducted measurements with the same method while keeping valve 2 closed (the valve separating volume V2 from volume V3 in Figure 1). This prevents the gas that permeates the Zerodur sample from reaching the vacuum chamber in which the QMS is located, and it allows us to compare the signals recorded in the two states: if the signal drops after closing valve 2, it indicates that the recorded helium signal is indeed the one permeating the sample. Since no significant helium signal was recorded up to a temperature of 80 °C, the results of the measurements conducted with the accumulation method are presented for the temperatures of 80 °C, 110 °C, 115 °C and 120 °C. An example of the recorded measurements is shown in Figure 3: the first three measurements represent the helium signal recorded by the accumulation method with valve 2 open, while the last three show the signal with valve 2 closed. By comparing the measurements recorded with valve 2 open and closed, the helium gas flow can be determined. With the known gas flow, the permeability coefficient can be determined using the known mathematics of the problem, which has been discussed in many articles and books, such as [11], [12] and [13].
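The comparison just described reduces to simple bookkeeping: in a closed volume V the throughput is q = V·dp/dt, and the permeation contribution is the difference between the pressure-rise slopes recorded with valve 2 open and valve 2 closed. A short sketch of that calculation (the volume and the pressure records below are hypothetical example values, not the paper's data):

```python
def pressure_rise_slope(times_s, pressures_pa):
    """Least-squares slope dp/dt (Pa/s) of a pressure-rise record."""
    n = len(times_s)
    mt = sum(times_s) / n
    mp = sum(pressures_pa) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times_s, pressures_pa))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

def accumulation_flow(volume_m3, slope_open, slope_closed):
    """Throughput q = V * dp/dt (Pa*m^3/s) attributable to permeation,
    taken as the difference between valve-2-open and valve-2-closed runs."""
    return volume_m3 * (slope_open - slope_closed)

# Hypothetical records: accumulation with valve 2 open vs. closed.
t = [0.0, 60.0, 120.0, 180.0]
q = accumulation_flow(1.5e-3,
                      pressure_rise_slope(t, [1e-4, 4e-4, 7e-4, 1e-3]),
                      pressure_rise_slope(t, [1e-4, 2e-4, 3e-4, 4e-4]))
```

Fitting a slope rather than taking a two-point difference averages out the read-to-read noise of the QMS signal.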
Here we refer to reference [9], where the gas flow q through a plane sample with pressure difference Δp, cross-sectional area A and thickness l is given by

q = K A (p − p₀) / l ,   (1)

where K is the permeability coefficient, p is the helium pressure in the high-pressure volume and p₀ is the helium partial pressure in the low-pressure volume. As the low-pressure side of our sample faces a high vacuum, the partial pressure of helium on this side was assumed to be zero. The volume of the low-pressure part of the system was determined by the gas expansion method [14]. The sample area and thickness were determined geometrically. Values of the permeability coefficient for different temperatures and at different times during the study are listed in Table 1.

Figure 2. Recorded He signal at different times and temperatures during the study. p(CDG) indicates the He pressure in the high-pressure volume, measured with the capacitance diaphragm gauge (CDG). In addition to the date of each measurement, the start time of its registration and the dwell time are marked as well.

4. Conclusions
After 70 days of studying the helium permeability of the Zerodur material, the stationary state of the diffusion process was not reached. This is because the helium signal is so weak that it cannot be fully recorded with our vacuum system. As it was not possible to analyse the stationary state of helium diffusion in Zerodur, the time constant was not recorded, and consequently the helium diffusion coefficient in Zerodur was not determined. On the other hand, with the accumulation method we determined the helium permeability coefficient of Zerodur at different temperatures. To our knowledge, there are no data in the literature concerning the helium permeability of Zerodur.
From the obtained results we estimate that the value of this coefficient is quite low and that it does not change much with increasing temperature. For this reason, and because the time constant of this material is observed to be long, we conclude that the Zerodur material has the potential to be used as cavity material for the quantum standard of pressure measurement. Overcoming other problems regarding the performance of refractometers is among the objectives of the QuantumPascal EMPIR project.

Acknowledgement
The research equipment in our laboratory was financed by the US Embassy in Kosovo and the Ministry of Education, Science and Innovation of Kosovo. We also thank Prof. Lars Westerberg from Uppsala University for providing some of the vacuum components used in this research.

References
[1] J. Hendricks, Quantum for pressure, Nat. Phys., vol. 14, no. 1, pp. 100–100, 2018. DOI: 10.1038/nphys4338
[2] K. Jousten, J. Hendricks, D. Barker, K. Douglas, S. Eckel, P. Egan, J. Fedchak, J. Flügge, Ch. Gaiser, D. Olson, J. Ricker, T. Rubin, W. Sabuga, J. Scherschligt, R. Schödel, U. Sterr, J. Stone, G. Strouse, Perspectives for a new realization of the pascal by optical methods, Metrologia, vol. 54, no. 6, 2017, pp. S146–S161. DOI: 10.1088/1681-7575/aa8a4d
[3] K. Jousten, A unit for nothing, Nat. Phys., vol. 15, no. 6, 2019, p. 618. DOI: 10.1038/s41567-019-0530-8
[4] P. F. Egan, J. A. Stone, J. H. Hendricks, J. E. Ricker, G. E. Scace, G. F. Strouse, Performance of a dual Fabry–Perot cavity refractometer, Opt. Lett., vol. 40, no. 17, 2015, p. 3945. DOI: 10.1364/OL.40.003945
[5] Z. Silvestri, D. Bentouati, P. Otal, J. P. Wallerand, Towards an improved helium-based refractometer for pressure measurements, Acta IMEKO, vol. 9, 2020, no. 5, pp. 305-309. DOI: 10.21014/acta_imeko.v9i5.989
[6] J. Scherschligt, J. A. Fedchak, Z. Ahmed, D. S. Barker, K. Douglass, S. Eckel, E. Hanson, J.
Hendricks, N. Klimov, T. Purdy, J. Ricker, Ro. Singh, J. Stone, Quantum-based vacuum metrology at NIST, arXiv, 2018, pp. 1-49. DOI: 10.1116/1.5033568
[7] S. Avdiaj, Y. Yang, K. Jousten, T. Rubin, Note: Diffusion constant and solubility of helium in ULE glass at 23 °C, J. Chem. Phys., vol. 148, no. 11, 2018, pp. 3–5. DOI: 10.1063/1.5019015
[8] J. Zakrisson, I. Silander, C. Forssén, Z. Silvestri, D. Mari, S. Pasqualin, A. Kussicke, P. Asbahr, T. Rubin, O. Axner, Simulation of pressure-induced cavity deformation – the 18SIB04 QuantumPascal EMPIR project, Acta IMEKO, vol. 9, 2020, no. 5, pp. 281-286. DOI: 10.21014/acta_imeko.v9i5.985
[9] B. Sebok, M. Schülke, F. Réti, G. Kiss, Diffusivity, permeability and solubility of H2, Ar, N2, and CO2 in poly(tetrafluoroethylene) between room temperature and 180 °C, Polym. Test., vol. 49, 2016, pp. 66-72. DOI: 10.1016/j.polymertesting.2015.10.016
[10] Technical specification ISO/TS — procedures to measure and report, vol. 2018, 2018.
[11] J. Crank, The Mathematics of Diffusion, Oxford University Press, London and New York, 1975.
[12] V. O. Altemose, Helium diffusion through glass, J. Appl. Phys., vol. 32, no. 7, 1961, pp. 1309-1316. DOI: 10.1063/1.1736226
[13] F. J. Norton, Helium diffusion through glass, J. Am. Ceram. Soc., vol. 36, no. 3, 1953, pp. 90-96. DOI: 10.1111/j.1151-2916.1953.tb12843.x
[14] S. Avdiaj, J. Setina, B. Erjavec, Volume determination of vacuum vessels by gas expansion method, MAPAN J. Metrol. Soc. India, vol. 30, no. 3, 2015, pp. 175-178. DOI: 10.1007/s12647-015-0137-1

Table 1. Determined values of the permeability coefficient of helium gas in the Zerodur sample, for different temperatures.
Date and time of commencement of data registration | Temperature T in °C | Permeation coefficient K in cm²/s
31.08.2020 12:51 | 80 | 1.12·10⁻¹⁵
05.09.2020 18:24 | 110 | 1.30·10⁻¹⁴
08.09.2020 17:20 | 110 | 1.59·10⁻¹⁴
09.09.2020 17:18 | 115 | 2.09·10⁻¹⁴
12.09.2020 15:27 | 115 | 5.20·10⁻¹⁴
19.09.2020 15:54 | 120 | 6.47·10⁻¹⁴

Figure 3. An example of the data collected with the accumulation method at a temperature of 110 °C. The first three measurements represent the helium signal recorded by the accumulation method with valve 2 open, while the last three show the helium signal recorded with valve 2 closed.

Evaluation on Effect of Alkaline Activator on Compaction Properties of Red Mud Stabilised by Ground Granulated Blast Slag

Acta IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1

Sarath Chandra1, Sankranthi Krishnaiah2
1 Department of Civil Engineering, Jawaharlal Nehru Technological University Anantapur, Anantapur, India, and Department of Civil Engineering, CHRIST (Deemed to be University), Bangalore, Karnataka-560029, India
2 Department of Civil Engineering, Jawaharlal Nehru Technological University Anantapur, Anantapur, Andhra Pradesh-515002, India

Section: Research Paper
Keywords: red mud; ground granulated blast slag; alkaline activator; evaluating
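One common way to summarise permeation-versus-temperature data such as Table 1 is an Arrhenius fit, K = K₀·exp(−Ea/(R·T)), a standard description of gas permeation in glasses (see e.g. Altemose [12]). This is offered as an illustrative post-processing sketch applied to the Table 1 values, not an analysis performed in the paper:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

# (temperature in °C, permeation coefficient K in cm^2/s) from Table 1
table1 = [(80, 1.12e-15), (110, 1.30e-14), (110, 1.59e-14),
          (115, 2.09e-14), (115, 5.20e-14), (120, 6.47e-14)]

def arrhenius_fit(data):
    """Least-squares fit of ln K versus 1/T; returns (K0, Ea in J/mol)."""
    xs = [1.0 / (t_c + 273.15) for t_c, _ in data]
    ys = [math.log(k) for _, k in data]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope * R

k0, ea = arrhenius_fit(table1)  # ea comes out on the order of 1e5 J/mol
```

With only six points over 40 °C the fitted activation energy is a rough estimate; it merely summarises the trend visible in the table.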
compaction properties; assessing compaction energy
Citation: Sarath Chandra, Sankranthi Krishnaiah, Evaluation on effect of alkaline activator on compaction properties of red mud stabilised by ground granulated blast slag, Acta IMEKO, vol. 11, no. 1, article 22, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-22
Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received November 20, 2021; in final form February 20, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Sarath Chandra, e-mail: chandrasarath011@gmail.com

Abstract
Any industrial waste has the potential to be used as a civil engineering material given an effective and appropriate waste-management system. Red mud (RM) and ground granulated blast slag (GGBS) are among the industrial wastes produced by the aluminium and steel industries respectively. Utilising waste materials alone is not effective without a suitable stabiliser, which motivates the use of an alkaline activator to satisfy the requirements of a building material. This paper evaluates measurements assessing the effect of an alkaline activator on the compaction properties of GGBS-stabilised RM. Different ratios of NaOH to Na2SiO3 were used as the alkaline activator, with 10, 20 and 30 percent replacement of RM by GGBS, and the compaction properties were measured using a mini compaction apparatus. Standard and modified Proctor compaction tests conducted on the various combinations of RM and GGBS produced compaction curves showing large variations in maximum dry density and optimum moisture content with the GGBS percentage and the NaOH-to-Na2SiO3 ratio; these variations are measured and analysed. Further, the influence of compaction energy on the density characteristics of these trials was assessed for better understanding.

1. Introduction
It is very important to develop new methods of civil engineering construction, rather than relying on traditional ones, in order to control the consumption of virgin resources. It is equally essential to handle the industrial waste generated both by these new construction methods and by the unpredictable growth of industries around the globe. The best way to balance both sides of this worldwide issue is to utilise industrial waste materials in the construction industry through suitable, environmentally friendly methods. With the revolution in sustainable construction techniques, enormous research has been carried out over the last two decades on the utilisation of various industrial materials, such as fly ash, ground granulated blast slag, pond ash, red mud, iron ore tailings and foundry sand, in various areas of civil engineering construction: the manufacture of bricks and paver blocks [1], [2], as subgrade and sub-base material in road construction [3], [4], as embankment and backfill material [5], [6], as a soil stabilisation technique [7], [8] and for vegetation [9].

Along with industry, construction activity has also increased tremendously with advances in technology, and the utilisation of raw materials has grown exponentially. This directly gives rise to another important waste stream, construction and demolition waste [10]. It is important to introduce the latest technologies, such as optical metrology involving digital, vision and video systems, in order to understand the exact behaviour of various materials. The use of advanced technologies helps to reduce pollution and offers new, accurate solutions for measuring and assessing problems in civil engineering [11].

Like many industrial waste materials, red mud (RM) is a highly alkaline (pH ranging from 10.5 to 13) industrial residue produced during the extraction of aluminium from bauxite ore [12]. RM is generally discharged as a slurry containing up to 40 % water into collection ponds situated next to aluminium plants [13]. The highly alkaline nature and very fine particle size of RM, along with its aluminium traces, pose a threat to the environment. It is also difficult to store RM for long periods: open exposure to air may cause air pollution, and leachate may pollute groundwater in the absence of proper liners. Annually, more than 4 million tons of red mud are produced in India. Moreover, its disposal requires large areas of land and considerable expense for proper waste management. The lack of an appropriate waste-management and storage system for RM has adverse effects on the environment, and in the past has cost both many lives and much property [14]. These negative aspects of RM argue for its use in civil engineering construction, where a bulk material can be consumed at minimal cost. On the other hand, RM shows properties similar to those of clayey and sandy soils, which is a good indication of its suitability for civil engineering applications such as the construction of embankments, landfills and the different layers of road construction. However, the application of such waste materials always depends on their density characteristics, which implies measuring the compaction characteristics, i.e. finding the optimum moisture content and maximum dry density of the material.
Measurement of compaction characteristics is very important for any foreign material used in geotechnical applications, at all stages at which the standard specifications of the relevant codes must be satisfied; in this way, measurement technology plays an important role in such real-time applications. When a load is applied to any soil or waste material, the volume changes because air is expelled from the voids. The change in volume depends on the number of voids in the material, the amount of air filling those voids and the magnitude of the applied load or pressure. The standard and modified compaction tests show how the dry density of any type of material differs with the applied load. The maximum dry density and optimum moisture content obtained from compaction tests form the basis for determining many strength parameters of a waste material at the various construction stages. Many geotechnical parameters depend on the amount of water added to the material, since moisture largely controls the behaviour of soils, particularly fine-grained soils. Hence, in-depth knowledge of the compaction characteristics of RM is important for its various applications as a geomaterial. Very limited work has been carried out in the past on measuring the compaction characteristics of RM stabilised with other waste materials in India. An attempt was therefore made to determine the compaction characteristics of RM combined with GGBS and alkaline activators using a mini compaction apparatus. This prototype allows a better study of the load and material behaviour, with good control of the loads applied [15].
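A compaction test yields pairs of water content and dry density, and the OMC and MDD are read off the peak of the resulting curve, where each dry density follows from the bulk density via the standard relation ρ_d = ρ_bulk / (1 + w). A minimal sketch of this reduction, using hypothetical data points (not values from this study), fits a parabola through the three points bracketing the measured maximum:

```python
def dry_density(bulk_density, water_content_pct):
    # Standard relation: rho_d = rho_bulk / (1 + w)
    return bulk_density / (1.0 + water_content_pct / 100.0)

def compaction_peak(points):
    """Estimate (OMC, MDD) from (water content %, dry density) pairs by
    fitting a parabola through the three points around the maximum."""
    points = sorted(points)
    i = max(range(1, len(points) - 1), key=lambda k: points[k][1])
    (w1, r1), (w2, r2), (w3, r3) = points[i - 1:i + 2]
    # Quadratic coefficients via Lagrange interpolation
    a = (r1 / ((w1 - w2) * (w1 - w3)) + r2 / ((w2 - w1) * (w2 - w3))
         + r3 / ((w3 - w1) * (w3 - w2)))
    b = -(r1 * (w2 + w3) / ((w1 - w2) * (w1 - w3))
          + r2 * (w1 + w3) / ((w2 - w1) * (w2 - w3))
          + r3 * (w1 + w2) / ((w3 - w1) * (w3 - w2)))
    c = (r1 * w2 * w3 / ((w1 - w2) * (w1 - w3))
         + r2 * w1 * w3 / ((w2 - w1) * (w2 - w3))
         + r3 * w1 * w2 / ((w3 - w1) * (w3 - w2)))
    omc = -b / (2.0 * a)            # vertex abscissa = optimum moisture content
    mdd = c - b * b / (4.0 * a)     # vertex ordinate = maximum dry density
    return omc, mdd

# Hypothetical compaction curve, for illustration only
curve = [(20.0, 1.48), (25.0, 1.55), (30.0, 1.59), (35.0, 1.57), (40.0, 1.50)]
omc, mdd = compaction_peak(curve)
```

In practice the peak is often read graphically; the local parabola fit shown here is just one simple way to refine the estimate between measured points.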
RM was replaced with 10 %, 20 % and 30 % of GGBS by dry weight; GGBS is commonly used as a stabilising material in many civil engineering applications because of its ability to increase strength and bearing capacity, as well as its good compaction behaviour [16]. NaOH and Na2SiO3 were used as alkaline activators, having proved effective stabilisers for GGBS-mixed materials in the past [17]. Alkaline activators in various proportions have been applied to different industrial waste materials to increase their strength properties substantially and allow their effective use in construction [18]-[20]. To understand the effect of alkaline activators on the compaction characteristics of RM, a series of standard and modified Proctor compaction tests was performed on various combinations of RM, GGBS, NaOH and Na2SiO3.

2. Experimental investigations

RM for this study was collected from the waste disposal pond of Hindustan Aluminium Corporation (HINDALCO), Belgaum, Karnataka, in the southern part of India. GGBS was procured from JSW Cement Limited; it is produced as a by-product of the blast furnaces used in iron making. Figure 1(a) and Figure 1(b) show the original images of oven-dried red mud and GGBS, and Figure 1(c) and Figure 1(d) present scanning electron microscopy (SEM) images of red mud and GGBS, respectively, taken to understand the microstructure of these materials for their use as geomaterials in this study. The SEM image of red mud shows scattered particles, whereas the SEM image of GGBS shows smooth, sharp edges, indicating that better compaction can be attained by combining the two materials with the appropriate amount of water and energy.
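The replacement levels above translate directly into dry batch masses. As a minimal sketch (using the roughly 300 g sample quantity the mini compaction apparatus requires, per the apparatus description; treating that figure as the mixing batch size here is an assumption):

```python
def batch_masses(total_dry_g, ggbs_pct):
    """Dry masses of RM and GGBS when ggbs_pct % of the total dry
    weight is replaced by GGBS."""
    ggbs = total_dry_g * ggbs_pct / 100.0
    return total_dry_g - ggbs, ggbs

rm_g, ggbs_g = batch_masses(300.0, 20)  # e.g. the 80 % RM + 20 % GGBS mix
```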
The microstructure of these two materials supports the present study of the compaction characteristics of RM with GGBS and added water; experiments were also performed using an alkaline activator to achieve a better density of the material upon drying. The sodium hydroxide (NaOH) pellets used in this study were purchased from Prince Chemical, Bangalore, Karnataka, India, at 98 % purity. Based on many research findings and prioritising economy and strength, the molarity of the sodium hydroxide solution was fixed at 8 M throughout the research. Accordingly, to prepare an 8 M sodium hydroxide solution, 320 g of sodium hydroxide pellets must be dissolved per litre of water (8 mol/L × 40 g/mol = 320 g, where 40 g/mol is the approximate molecular weight of NaOH). The sodium silicate (Na2SiO3) solution was purchased from Para Fine Chemical Industries, Bangalore, Karnataka, India, and contains Na2O = 14.78 %, SiO2 = 29.46 % and water = 55.97 % by mass. Before determining the compaction characteristics of alkaline-activated, GGBS-stabilised RM, the physical and geotechnical properties of RM and GGBS were determined in order to understand and classify the materials and better interpret the compaction properties. According to ASTM D854, the specific gravities of RM and GGBS were determined using a pycnometer; the results are presented in Table 1. The average of three trials was taken for all tests to maintain accuracy of the results. Compaction is a very important geotechnical property, yielding the maximum dry density and the optimum moisture content, so it is also important to understand both the physical and the geotechnical properties of the waste material used in this research as a prerequisite. The properties show that RM can be used effectively in subgrades in place of virgin soil.
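The molarity arithmetic above can be captured in a few lines. Note that the purity correction (dividing by the 98 % pellet purity) is an added refinement not stated in the paper, which uses the nominal 320 g figure:

```python
def naoh_pellets_per_litre(molarity, molar_mass_g_mol=40.0, purity=1.0):
    """Grams of NaOH pellets to dissolve per litre of solution."""
    return molarity * molar_mass_g_mol / purity

nominal = naoh_pellets_per_litre(8.0)                  # 8 mol/L x 40 g/mol
corrected = naoh_pellets_per_litre(8.0, purity=0.98)   # allowing for 98 % purity
```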
According to ASTM D4318, D422, D1140 and D2487, the liquid limit, plastic limit, particle size distribution and soil classification were determined for both RM and GGBS and are presented in Table 1 [21]-[25]. According to the Unified Soil Classification System, the RM was classified as silt of low plasticity and the GGBS as silt of low compressibility. In this study [26], [27], the compaction characteristics of all the combinations were determined using the mini compaction apparatus shown in Figure 2. This method of compaction is only suitable for materials with particle sizes below 2 mm, and it was adopted here because both RM and GGBS meet this criterion. The unique advantage of this apparatus is the small amount of material needed for testing, about 300 g instead of the 3 kg of soil generally used for the standard Proctor compaction test. The energy delivered by a given number of blows can easily be calculated with the mini compaction apparatus, whereas it is difficult to determine accurately the energy applied per number of blows with the standard or modified Proctor apparatus. Better accuracy of results is also observed, because less material is used and less energy is applied. This method of compaction saves both the time and the effort of the researcher compared with the traditional method, and the compaction characteristics can be tracked more conveniently as the number of blows is varied. The compacted samples, after trimming, can be used directly to determine the strength parameters of the material. Both standard and modified compaction tests can be performed with this mini compaction apparatus.
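The energy-per-blow bookkeeping this apparatus makes convenient follows the usual Proctor relation E = (blows per layer × layers × rammer weight × drop height) / mould volume. A sketch using the mould dimensions quoted for this apparatus (3.81 cm internal diameter, 8 cm height); the rammer mass, drop height and layer count below are placeholders, not the study's actual values:

```python
import math

def compaction_energy_kj_per_m3(blows_per_layer, layers, rammer_kg, drop_m,
                                dia_m=0.0381, height_m=0.08):
    """Proctor-type compaction energy per unit volume:
    E = (blows x layers x rammer weight x drop height) / mould volume."""
    mould_volume = math.pi / 4.0 * dia_m ** 2 * height_m     # m^3
    energy_j = blows_per_layer * layers * rammer_kg * 9.81 * drop_m
    return energy_j / mould_volume / 1000.0                  # kJ/m^3

# 12 blows per layer on 3 layers with a hypothetical 1 kg rammer and 0.16 m drop
e = compaction_energy_kj_per_m3(12, 3, 1.0, 0.16)
```

Doubling the blow count doubles the energy, which is why the blow counts in Table 4 map linearly (within the same rammer setup) onto the tabulated energies.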
The dimensions of this apparatus are small: the internal diameter of the cylinder is 3.81 cm and its height is 8 cm. It has been used by many researchers in the past for convenient compaction testing with less material, labour and time [28].

3. Results and discussion

Table 3 shows the optimum moisture content (OMC) and maximum dry density (MDD) obtained for the various combinations, each value being the average of three trials of the SP and MP tests. With distilled water, the OMC and MDD of RM are 34.39 % and 1.59 g/cc, respectively, whereas for GGBS they are 25.63 % and 1.62 g/cc. RM gives more reproducible values in the MP tests than in the SP tests, with an OMC of 28.65 % and an MDD of 1.64 g/cc. This gives an initial idea of the material behaviour as the applied load changes, and of the accuracy of the OMC and MDD. The results confirm that with increasing compaction energy the dry density of RM increases and the water requirement decreases appreciably, in agreement with past research [12]. An increase of up to 5 % in MDD and a difference of up to 31 % in OMC are observed between the standard and modified Proctor compaction tests, owing to the heavier rammer used in modified compaction. The standard test represents lightweight compaction by light tampers and rammers, while the modified test represents rollers and other heavy compactors. The OMC and MDD presented in Table 3 for the various combinations show the influence of the alkaline activators on the GGBS-stabilised RM. The GGBS replacement in the first three combinations decreases the OMC and increases the MDD in both the SP and MP tests, which indicates the sensitivity of GGBS-stabilised RM to water. Adding GGBS beyond 30 % shows no further significant effect on the compaction characteristics of RM, as confirmed by past studies, so the replacement was limited to 30 % [29]; moreover, the OMC and MDD values of RGD2 and RGD3 are almost identical, confirming that a further increment of GGBS would not be beneficial in terms of stabilisation or cost. The increase in MDD with the addition of GGBS to RM may be due to the reduction of the clay fraction in RM, which reduces the resistance to particle movement during compaction. The addition of NaOH and Na2SiO3 decreases the OMC and increases the MDD in all cases of GGBS-stabilised RM.

Table 1. Physical and geotechnical properties of RM and GGBS.

S. No. | Property | RM | GGBS
1 | Specific gravity | 2.95 | 2.81
2 | Consistency limits: liquid limit (%) / plastic limit (%) / plasticity index | 43 / 38 / 5 | 32 / NP / NP
3 | Percentage fractions (%): sand / silt / clay | 3 / 75 / 22 | 75 / 24 / 1.0
4 | USCS classification | ML | ML

Figure 1. (a) Original image of red mud; (b) original image of GGBS; (c) SEM image of red mud; (d) SEM image of GGBS.

Table 2. Combinations and nomenclature used in the research work.

Sl. No. | Combination | Mixing agent | Nomenclature
1 | 90 % RM + 10 % GGBS | 100 % distilled water | RGD1
2 | 80 % RM + 20 % GGBS | 100 % distilled water | RGD2
3 | 70 % RM + 30 % GGBS | 100 % distilled water | RGD3
4 | 90 % RM + 10 % GGBS | 100 % NaOH | RGA1
5 | 80 % RM + 20 % GGBS | 100 % NaOH | RGA2
6 | 70 % RM + 30 % GGBS | 100 % NaOH | RGA3
7 | 90 % RM + 10 % GGBS | 90 % NaOH + 10 % Na2SiO3 | RGA4
8 | 80 % RM + 20 % GGBS | 90 % NaOH + 10 % Na2SiO3 | RGA5
9 | 70 % RM + 30 % GGBS | 90 % NaOH + 10 % Na2SiO3 | RGA6
10 | 90 % RM + 10 % GGBS | 80 % NaOH + 20 % Na2SiO3 | RGA7
11 | 80 % RM + 20 % GGBS | 80 % NaOH + 20 % Na2SiO3 | RGA8
12 | 70 % RM + 30 % GGBS | 80 % NaOH + 20 % Na2SiO3 | RGA9
13 | 90 % RM + 10 % GGBS | 50 % NaOH + 50 % Na2SiO3 | RGA10
14 | 80 % RM + 20 % GGBS | 50 % NaOH + 50 % Na2SiO3 | RGA11
15 | 70 % RM + 30 % GGBS | 50 % NaOH + 50 % Na2SiO3 | RGA12
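The fifteen mixes of Table 2 form a regular grid of three GGBS replacement levels by five mixing agents, so the design matrix can be generated programmatically. A small sketch (the naming rule is inferred from the table):

```python
ggbs_levels = [10, 20, 30]
activators = ["100 % distilled water", "100 % NaOH",
              "90 % NaOH + 10 % Na2SiO3", "80 % NaOH + 20 % Na2SiO3",
              "50 % NaOH + 50 % Na2SiO3"]

mixes = []
for j, activator in enumerate(activators):
    for i, ggbs in enumerate(ggbs_levels):
        # RGD1-3 for the distilled-water mixes, RGA1-12 for the activated ones
        name = f"RGD{i + 1}" if j == 0 else f"RGA{(j - 1) * 3 + i + 1}"
        mixes.append((f"{100 - ggbs} % RM + {ggbs} % GGBS", activator, name))
```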
The highest MDD, 1.91 g/cc, was observed for combination RGA12: an increase of up to 20 % over virgin RM in the SP test, with the same percentage increase in MDD also observed in the MP test. The OMC of RGA12 was reduced by up to 50 % with respect to virgin RM in both the SP and MP tests. An almost identical pattern of change in the OMC and MDD percentages was observed for all combinations in both the SP and MP tests. With the addition of NaOH alone, the OMC remained the same while the MDD decreased compared with water, which shows that NaOH alone does not improve the dry density of GGBS-stabilised RM and that silicates must be added to achieve a higher dry density in all the combinations. A trend of increasing MDD and decreasing OMC was observed as silicates were added to NaOH in different ratios. Both the SP and MP tests confirm that the effect of the alkaline activator depends on the percentage of GGBS added to the RM. The change in OMC and MDD with the NaOH/Na2SiO3 ratio was very small for a given GGBS replacement level. Combinations RGA10, RGA11 and RGA12 show a very good increase in MDD values, indicating that the 50:50 ratio of NaOH to Na2SiO3 gives the most effective results for 10 %, 20 % and 30 % replacement by GGBS. In all the combinations, the effect of the alkaline activators depends largely on the amount of GGBS added to the RM, which may be due to the reaction of minerals present in the GGBS with the NaOH and Na2SiO3. According to IRC SP:20-2002, a minimum MDD of 1.46 g/cc is required for any material used as embankment fill or in road construction [30]. In this research work, all the trials exceed the minimum MDD required by the IRC specifications, which shows that alkaline-activated, GGBS-stabilised RM satisfies the requirements for use in embankments, subject to further evaluation of its strength properties.
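The percentage figures quoted above can be spot-checked directly from the virgin-RM values and the RGA12 row of Table 3; a quick sketch:

```python
def pct_change(baseline, value):
    # Relative change in percent; positive means an increase.
    return (value - baseline) / baseline * 100.0

# Virgin RM: SP test OMC 34.39 %, MDD 1.59 g/cc; MP test OMC 28.65 %.
# RGA12:     SP test OMC 21.62 %, MDD 1.91 g/cc; MP test OMC 15.01 %.
mdd_gain_sp = pct_change(1.59, 1.91)     # MDD increase, SP test
omc_drop_sp = pct_change(34.39, 21.62)   # OMC reduction, SP test
omc_drop_mp = pct_change(28.65, 15.01)   # OMC reduction, MP test
```

The SP-test MDD gain works out to about 20 %, matching the stated figure; the OMC reduction is about 37 % in the SP test and approaches the quoted "up to 50 %" (about 48 %) in the MP test.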
The effect of compaction energy, expressed as the number of blows, on the moisture content and dry density of untreated RM has been studied by the research community in the past. It is confirmed that increasing the compaction energy decreases the moisture content and increases the dry density. In this study, an attempt was made to evaluate the effect of compaction energy on GGBS-stabilised RM, replacing 10 %, 20 % and 30 % of RM with GGBS and using distilled water. Based on the previous study, the numbers of blows used in this work were 12, 15, 18, 22, 25, 28, 33, 45 and 56, and the tests were conducted with the mini compaction apparatus. Table 4 shows the effect of the compaction energy, converted from the number of blows, on the GGBS-stabilised RM.

Figure 2. Mini compaction test apparatus.

Table 3. OMC and MDD of alkaline-activated, GGBS-stabilised red mud for both SP and MP tests.

Sl. No. | Combination | SP test OMC (%) | SP test MDD (g/cc) | MP test OMC (%) | MP test MDD (g/cc)
1 | RGD1 | 30.65 | 1.60 | 25.14 | 1.65
2 | RGD2 | 29.64 | 1.61 | 24.22 | 1.67
3 | RGD3 | 27.88 | 1.62 | 22.10 | 1.68
4 | RGA1 | 30.68 | 1.53 | 25.15 | 1.58
5 | RGA2 | 27.38 | 1.51 | 22.55 | 1.59
6 | RGA3 | 26.05 | 1.57 | 21.10 | 1.62
7 | RGA4 | 28.04 | 1.65 | 21.11 | 1.71
8 | RGA5 | 26.54 | 1.68 | 20.24 | 1.72
9 | RGA6 | 25.91 | 1.71 | 19.34 | 1.72
10 | RGA7 | 27.90 | 1.57 | 20.90 | 1.64
11 | RGA8 | 26.40 | 1.68 | 18.60 | 1.73
12 | RGA9 | 25.68 | 1.72 | 17.61 | 1.77
13 | RGA10 | 26.56 | 1.78 | 19.36 | 1.82
14 | RGA11 | 24.80 | 1.88 | 17.90 | 1.92
15 | RGA12 | 21.62 | 1.91 | 15.01 | 1.99

Table 4. Effect of compaction energy on the density-water relationship of GGBS-stabilised RM.
Number of blows | Compaction energy (kJ/m3) | RGD1 MC (%) | RGD1 DD (g/cc) | RGD2 MC (%) | RGD2 DD (g/cc) | RGD3 MC (%) | RGD3 DD (g/cc)
12 | 285 | 31.66 | 1.59 | 31.01 | 1.60 | 29.10 | 1.60
15 | 356 | 31.30 | 1.59 | 29.90 | 1.61 | 28.22 | 1.61
18 | 427 | 30.59 | 1.59 | 30.11 | 1.60 | 28.45 | 1.61
22 | 522 | 30.41 | 1.60 | 29.90 | 1.61 | 27.58 | 1.62
25 | 594 | 30.65 | 1.60 | 29.64 | 1.61 | 27.88 | 1.62
29 | 689 | 29.11 | 1.61 | 28.45 | 1.62 | 26.66 | 1.62
33 | 783 | 28.34 | 1.63 | 27.44 | 1.64 | 24.59 | 1.64
45 | 1068 | 26.40 | 1.64 | 25.81 | 1.66 | 23.56 | 1.66
56 | 2595 | 25.14 | 1.65 | 24.22 | 1.67 | 22.10 | 1.68

The table shows that the dry density increases with the GGBS content and with increasing compaction energy. This confirms that the compaction energy has a significant impact on the moisture content (MC) and dry density (DD) of all the GGBS-stabilised RM mixes, and that effective compaction is what achieves a better dry density for any stabilised waste material.

4. Conclusions

In the current study, detailed compaction tests were performed on virgin RM, GGBS-stabilised RM and alkaline-activated, GGBS-stabilised RM samples. From the results, it is concluded that the MP tests show better compaction characteristics than the SP tests, which highlights the effect of compaction energy in increasing the density of the samples through closer packing of the fine particles present in RM and GGBS-stabilised RM. GGBS acts as a good stabiliser for RM, providing densities that satisfy the IRC specifications for the construction of embankments and other fill layers. An increase in MDD and a decrease in OMC were observed with increasing GGBS percentage in all the trials. The results show that the alkaline activator alone has minimal influence on the density of RM; rather, its influence depends strongly on the amount of GGBS added to the RM, in both the SP and MP tests.
The outcome of this research work emphasises that waste materials can be utilised effectively when stabilised with other suitable industrial by-products or waste materials and readily available alkaline activators. Further, the strength properties and leachate characteristics of these combinations can be studied in the future to broaden their utilisation in various civil engineering applications.

Acknowledgement

The authors acknowledge the School of Engineering and Technology, CHRIST University, for providing all the laboratory facilities needed to perform this study.

References

[1] S. Abbas, M. A. Saleem, S. M. S. Kazmi, M. J. Munir, Production of sustainable clay bricks using waste fly ash: mechanical and durability properties, J. Build. Eng., 14 (2017), pp. 7-14. DOI: 10.1016/j.jobe.2017.09.008
[2] F. A. Kuranchie, S. K. Shukla, D. Habibi, Utilisation of iron ore mine tailings for the production of geopolymer bricks, Int. J. Mining, Reclam. Environ., 30(2) (2016), pp. 92-114. DOI: 10.1080/17480930.2014.993834
[3] F. Noorbasha, M. Manasa, R. T. Gouthami, S. Sruthi, D. H. Priya, N. Prashanth, M. Z. Ur Rahman, FPGA implementation of cryptographic systems for symmetric encryption, Journal of Theoretical and Applied Information Technology, 95(9) (2017), pp. 2038-2045. DOI: 10.1155/2021/6610655
[4] E. Mukiza, L. L. Zhang, X. Liu, N. Zhang, Utilization of red mud in road base and subgrade materials: a review, Resour. Conserv. Recycl., 141 (2019), pp. 187-199. DOI: 10.1016/j.resconrec.2018.10.031
[5] P. V. V. Kishore, A. S. C. S. Sastry, Z. Ur Rahman, Double technique for improving ultrasound medical images, Journal of Medical Imaging and Health Informatics, 6(3) (2016), pp. 667-675. DOI: 10.1166/jmihi.2016.1743
[6] S. S. Mirza, M. Z. Ur Rahman, Efficient adaptive filtering techniques for thoracic electrical bio-impedance analysis in health care systems, Journal of Medical Imaging and Health Informatics, 7(6) (2017), pp. 1126-1138. DOI: 10.1166/jmihi.2017.2211
[7] J. Prabakar, N. Dendorkar, R. K. Morchhale, Influence of fly ash on strength behavior of typical soils, Constr. Build. Mater., 18(4) (2004), pp. 263-267. DOI: 10.1016/j.conbuildmat.2003.11.003
[8] R. A. Shaik, D. R. K. Reddy, Noise cancellation in ECG signals using normalized sign-sign LMS algorithm, 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), IEEE (2009), pp. 288-292. DOI: 10.1109/isspit.2009.5407510
[9] R. J. Haynes, Reclamation and revegetation of fly ash disposal sites - challenges and research needs, J. Environ. Manage., 90(1) (2009), pp. 43-53. DOI: 10.1016/j.jenvman.2008.07.003
[10] M. N. Salman, P. T. Rao, Novel logarithmic reference free adaptive signal enhancers for ECG analysis of wireless cardiac care monitoring systems, IEEE Access, 6 (2018), pp. 46382-46395. DOI: 10.1109/access.2018.2866303
[11] L. Martins, A. Ribeiro, M. C. Almeida, J. A. Sousa, Bringing optical metrology to testing and inspection activities in civil engineering, Acta IMEKO, 10(3) (2021), pp. 3-16. DOI: 10.21014/acta_imeko.v10i3.1059
[12] N. Gangadhara Reddy, B. Hanumantha Rao, Evaluation of the compaction characteristics of untreated and treated red mud, Geotech. Spec. Publ., 272 (2016), pp. 23-32. DOI: 10.1061/9780784480151.003
[13] S. K. Rout, T. Sahoo, S. K. Das, Design of tailing dam using red mud, Cent. Eur. J. Eng., 3(2) (2013), pp. 316-328. DOI: 10.2478/s13531-012-0056-7
[14] W. M. Mayes, I. T. Burke, H. I. Gomes, Á. D. Anton, M. Molnár, V. Feigl, É. Ujaczki, Advances in understanding environmental risks of red mud after the Ajka spill, Hungary, J. Sustain. Metall., 2(4) (2016), pp. 332-343. DOI: 10.1007/s40831-016-0050-z
[15] S. M. Osman, R. Kumme, H. M. El-Hakeem, F. Loffler, E. H. Hasan, R. M. Rashad, F. Kouta, Multi capacity load cell prototype, Acta IMEKO, 5(4) (2016), pp. 64-69. DOI: 10.21014/acta_imeko.v5i3.310
[16] A. K. Pathak, V. Pandey, K. Murari, J. P. Singh, Soil stabilisation using ground granulated blast furnace slag, J. Eng. Res. Appl., 4(2) (2014), pp. 164-171.
[17] Y. Yi, C. Li, S. Liu, Alkali-activated ground-granulated blast furnace slag for stabilization of marine soft clay, J. Mater. Civ. Eng., 27(4) (2015), pp. 1-7. DOI: 10.1061/(asce)mt.1943-5533.0001100
[18] M. Mavroulidou, S. Shah, Alkali-activated slag concrete with paper industry waste, Waste Management and Research, 39(3) (2021), pp. 466-472. DOI: 10.1177/0734242x20983890
[19] S. A. Bernal, E. D. Rodriguez, R. M. de Gutierrez, J. L. Provis, S. Delvasto, Activation of metakaolin/slag blends using alkaline solutions based on chemically modified silica fume and rice husk ash, Waste Biomass Valor., 3 (2012), pp. 99-108. DOI: 10.1007/s12649-011-9093-3
[20] T. Bakharev, J. G. Sanjayan, Y. B. Cheng, Alkali activation of Australian slag, Cem. Concr. Res., 29(1) (1999), pp. 113-120. DOI: 10.1016/s0008-8846(98)00170-7
[21] ASTM D854-14, Standard test methods for specific gravity of soil solids by water pycnometer, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2014).
[22] ASTM D4318-10, Standard test methods for liquid limit, plastic limit and plasticity index of soils, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2010).
[23] ASTM D422-63, Standard test method for particle-size analysis of soils, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8).
[24] ASTM D1140-14, Standard test methods for determining the amount of material finer than the 75-µm (No. 200) sieve in soils by washing, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2014).
[25] ASTM D2487-11, Standard practice for classification of soils for engineering purposes (Unified Soil Classification System), Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2011).
[26] ASTM D698-07, Standard test methods for laboratory compaction characteristics of soil using standard effort, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2007).
[27] ASTM D1557-12, Standard test methods for laboratory compaction characteristics of soil using modified effort, Annual Book of ASTM Standards, ASTM International, West Conshohocken, PA, 4(8) (2012).
[28] A. Sridharan, P. V. Sivapullaiah, Mini compaction test apparatus for fine grained soils, Geotech. Test. J., 28(3) (2005), pp. 240-246. DOI: 10.1520/gtj12542
[29] S. Alam, S. K. Das, B. H. Rao, Strength and durability characteristic of alkali activated GGBS stabilized red mud as geo-material, Constr. Build. Mater., 211 (2019), pp. 932-942. DOI: 10.1016/j.conbuildmat.2019.03.261
[30] IRC (Indian Roads Congress), Guidelines and construction of rural roads, IRC SP 20, New Delhi, India: IRC (2002).

Introductory notes for the Acta IMEKO special issue on the XXIX Italian National Congress on Mechanical and Thermal Measurements

Acta IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, pp. 6-7

Alfredo Cigada1, Roberto Montanini2
1 Dipartimento di Meccanica, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy
2 Dipartimento di Ingegneria, Università degli Studi di Messina, C.da Di Dio, 98166 Villaggio Sant'Agata, Messina, Italy

Section: Editorial

Citation: Alfredo Cigada, Roberto Montanini, Introductory notes for the Acta IMEKO special issue on the XXIX Italian National Congress on Mechanical and Thermal Measurements, Acta IMEKO, vol. 10, no. 4, article 4, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-04

Received December 14, 2021; in final form December 14, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Alfredo Cigada, e-mail: alfredo.cigada@polimi.it

Dear Readers,

There is no doubt that measurements play a fundamental role in our everyday life. Hot topics such as the Internet of Things, Industry 4.0, measurements for health and smart structures cannot exist without a massive and pervasive presence of sensors and, more generally, of measurement systems. Data management and analysis, guided both by the interpretation of numerical models and by data-driven approaches, are therefore research trends of paramount interest for measurements. The reason why mechanical measurements are considered a science of their own has strong roots that are not always fully understood: data quality assessment, a preliminary step to any model, requires skills in a broad range of engineering topics, from sensors to mechanics, to materials and data science, to mention only a few. This wide knowledge makes the cultural background of the expert in mechanical measurements one of the broadest in engineering. All these aspects make the yearly Italian meeting of the "Forum Nazionale delle Misure" a milestone for discussing and keeping up to date with the new trends in measurement, a science that moves very fast. It is also the occasion to engage with colleagues dealing mainly with electrical and electronic measurements, widening horizons and cultural exchange and making the event both a rich review of present activities and a stimulus for the future.
The pandemic created a discontinuity in the meeting's long tradition, which began in 1986: in 2020 the Forum took place online, and the in-person meeting was organised again in September 2021 at Giardini Naxos, a wonderful location close to Messina. As usual, part of the workshop consisted of joint meetings between scientists from the mechanical and thermal measurements and the electrical and electronic measurements academic groups: an interesting cultural discussion of shared topics approached from different viewpoints, followed by the topics more specific to each group, for which the common language of metrology plays a fundamental role in defining measurement quality. This special issue collects a selection of 12 papers presented during the three days of the Forum; the authors were asked to revise their work and prepare an extended version fit for publication in Acta IMEKO, a reference journal for measurement. The remarkable transversality of the mechanical measurements field is witnessed by the significant heterogeneity of the topics covered by the twelve selected works. The paper entitled 'Skin potential response for stress recognition in simulated urban driving' by Zontone et al. addresses the problem of stress arising in car drivers, using machine learning techniques based on skin potential response (SPR) signals recorded from each hand of the test subjects. The results show that the test individuals are less stressed in a situation without traffic, confirming the effectiveness of the proposed minimally invasive system for detecting stress in drivers. In the paper 'Human identification and tracking using ultra-wideband-vision data fusion in unstructured environments', the research group of the University of Trento, led by Prof.
Mariolino De Cecco, faced the problem of cooperation between automated guided vehicles (AGVs) and the operator, addressing two crucial autonomy functions: operator identification and tracking. Using sensor fusion, the authors were able to improve the accuracy and quality of the final tracking, reducing uncertainty. The third paper, authored by Giulietti et al., deals with the continuous monitoring of cement-based structures and infrastructures to optimise their service life and reduce maintenance costs. The proposed approach is based on electrical impedance measurements. Data can be made available in the cloud through a Wi-Fi network or LTE modem, and can therefore be accessed remotely via a user-friendly multi-platform interface. The paper 'Validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations' arose from a collaboration between the research group of the University of L'Aquila and the Fire, Public Rescue and Civil Defense Department of the Italian Ministry of the Interior. The work provides a preliminary contribution to the drafting of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer reliable and effective support to their assessment activities. In the paper 'A comparison between aeroacoustic source mapping techniques for characterization of wind turbine blade models with microphone arrays', G. Battista et al. address the problem of characterising the aeroacoustic noise sources generated by a rotating wind turbine blade, in order to provide useful information for tackling noise reduction.
The paper discusses a series of acoustic mapping strategies that can be exploited in this kind of application, based on laboratory tests carried out in a semi-anechoic room on a single-blade rotor. The research group of the University of Padua, led by Prof. Stefano Debei, presented a paper entitled 'Occupancy grid mapping for rover navigation based on semantic segmentation', dealing with obstacle perception in a planetary environment by means of occupancy grid mapping. To evaluate the metrological performance of the proposed method, the ESA Katwijk Beach Planetary Rover dataset was used. The paper 'Characterization of glue behaviour under thermal and mechanical stress conditions' by Caposciutti et al. explores the behaviour of the glued interface commonly used to fix electronics to box housings as it undergoes daily or seasonal thermal cycles combined with mechanical stress. To carry out the study, the authors prepared parallel-plate capacitors using the glue as the dielectric material. The non-linear behaviour of capacitance versus temperature, as well as the effects of thermal cycles on the glue geometry, were investigated. The research group of the University of Perugia, led by Prof. Gianluca Rossi, presented an optical-flow-based motion compensation algorithm for thermoelastic stress analysis that accounts for the rigid displacements that can occur during loading. The proposed approach measures the displacement field of the specimen directly from the thermal video. The blurring and edge effects produced by the motion were almost completely eliminated, making it possible to measure the stress field accurately, especially in areas around geometrical discontinuities. Thermoelasticity and ArUco markers were also employed by L. Capponi et al. to validate a numerical model of the inspection robot mounted on the new San Giorgio bridge over the Polcevera river in Genova.
an infrared thermoelasticity-based approach was used to measure stress-concentration factors, while aruco fiducial markers were exploited to assess the natural frequencies of the robot inspection structure. a completely different field of application concerns the paper 'doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time fourier transform', which reports the latest results obtained by the research group led by prof. sciuto at the university of roma tre. the paper aims to improve a previously proposed method for doppler flow phantom failure detection by combining the application of empirical mode decomposition (emd), independent component analysis (ica) and short time fourier transform (stft) techniques on pulsed wave (pw) doppler spectrograms. the paper 'comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network' originated from the collaboration between the research groups of mechanical and thermal measurements of the university of messina and of the university of catania. the aim of the work was to compare two different 3d reconstruction techniques, epipolar geometry and digital image correlation, to measure the deformation field of hyperelastic membranes under a plane and equibiaxial stress state. a feed-forward neural network (ffnn) was then used to assess the accuracy of the two experimental approaches, using a laser sensor as a reference. finally, the paper 'development and characterization of a self-powered measurement buoy prototype by means of piezoelectric energy harvester for monitoring activities in a marine environment', written by the research unit of the university of messina, led by prof. roberto montanini, addresses a series of interesting topics, among them measurements for the sea and energy harvesting from sea waves; the paper focuses on the latter with an innovative approach.
we gratefully acknowledge all the authors who have contributed to this special issue, as well as all the reviewers. a special thanks goes to prof. francesco lamonaca, editor-in-chief of acta imeko, for his tireless and patient help, which has made this special issue possible. we are proud of having served as guest editors for this issue, hoping that it will help spread the culture of measurements. alfredo cigada and roberto montanini, guest editors
acta imeko issn: 2221-870x september 2021, volume 10, number 3, 28 - 35
acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 28
a2cm: a new multi-agent algorithm
gabor paczolay1, istvan harmati1
1 budapest university of technology and economics, magyar tudósok körútja 2, 1117 budapest, hungary
section: research paper
keywords: reinforcement learning, multiagent learning
citation: gabor paczolay, istvan harmati, a2cm: a new multi-agent algorithm, acta imeko, vol. 10, no. 3, article 6, september 2021, identifier: imeko-acta10 (2021)-03-06
section editor: bálint kiss, budapest university of technology and economics, hungary
received january 15, 2021; in final form august 13, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: gabor paczolay, e-mail: paczolay.gabor@gmail.com
1. introduction
reinforcement learning is one of the most researched fields within the scope of artificial intelligence. newer algorithms are continually being developed to achieve successful learning in more situations or with fewer samples. in reinforcement learning, a new challenge arises when we take other agents into consideration. this research field is called 'multi-agent learning'.
dealing with other agents – whether they are cooperative, competitive or a mixture of both – brings the learning model closer to a real-world scenario. in real life, no agent acts alone; even random counteractions can be treated as 'counteractions of nature'. in our work, we optimised the synchronous actor–critic algorithm to perform better in cooperative multi-agent scenarios (those in which agents help each other). littman [1] utilised the minimax-q algorithm, a zero-sum multi-agent reinforcement learning algorithm, and applied it to a simplified version of a robotic soccer game. hu and wellman [2] created the nash-q algorithm and used it on a small gridworld example to demonstrate the results. bowling [3] varied the learning rate of the training process to speed it up while ensuring convergence. later, he applied the win or learn fast methodology to an actor–critic algorithm to improve its multi-agent capabilities [4]. reinforcement learning advanced significantly when neural networks gained popularity and convergence was improved. mnih et al. [5] successfully applied deep reinforcement learning to playing atari games by feeding multiple frames at once and utilising experience replay to ensure convergence. later, deep reinforcement learning was applied to multi-agent systems, such as independent multi-agent reinforcement learning. foerster et al. [6] stabilised experience replay for independent q-learning using fingerprints. omidshafiei et al. [7] utilised decentralised hysteretic deep recurrent q-networks for partially observable multi-task multi-agent reinforcement learning problems.
figure 1. markov decision process.
abstract: reinforcement learning is currently one of the most researched fields of artificial intelligence. new algorithms are being developed that use neural networks to compute the selected action, especially for deep reinforcement learning.
one subcategory of reinforcement learning is multi-agent reinforcement learning, in which multiple agents are present in the world. as it involves the simulation of an environment, it can be applied to robotics as well. in our paper, we use our modified version of the advantage actor–critic (a2c) algorithm, which is suitable for multi-agent scenarios. we test this modified algorithm on our testbed, a cooperative–competitive pursuit–evasion environment, and later we address the problem of collision avoidance. multiple advancements have also been made in the field of centralised learning and decentralised execution. foerster et al. [8] created counterfactual multi-agent policy gradients to solve the issue of multi-agent credit assignment. peng et al. [9] created multi-agent bidirectionally-coordinated nets with an actor–critic hierarchy and recurrent neural networks for communication. sunehag et al. [10] utilised value-decomposition networks with common rewards and q-function decomposition. rashid et al. [11] utilised qmix with value function factorisation, q-function decomposition and a feed-forward neural network, achieving better performance than the earlier value-decomposition networks. lowe et al. [12] improved the deep deterministic policy gradient by altering the critic to contain all actions of all agents, thus making the algorithm capable of handling more multi-agent scenarios. shihui et al. [13] improved upon the previous maddpg algorithm, increasing its performance in zero-sum competitive scenarios by utilising a method based on minimax-q learning. casgrain et al. [14] upgraded the deep q-network algorithm utilising methods based on nash equilibria, making it capable of solving multi-agent environments. benchmarks have also been created to analyse the performance of various algorithms in multi-agent environments. vinyals et al.
[15] modified the starcraft ii game to make it a learning environment. samvelyan et al. [16] also pointed to starcraft as a multi-agent benchmark but with a focus on micromanagement. liu et al. [17] introduced a multi-agent soccer environment with continuous simulated physics. bard et al. [18] reached a new frontier with the cooperative hanabi game benchmark. cooperative multi-agent reinforcement learning and the proposed algorithm are usable in many scenarios in robotics. as our algorithm is decentralised, it can be installed into the robots themselves without any central command center. it might be useful in exploration or localisation tasks in which the use of multiple agents would significantly speed up the process. our testbed can be considered a simplified version of a localisation task, as the pursuer robots are trying to approach and measure a non-cooperative moving object. for proper use in robotics, a well-prepared simulation of the robots and the environment is required, in which thousands of episodes can be run for learning. in our work, we modified the already existing advantage actor–critic (a2c) algorithm to make it better suited for multi-agent scenarios by creating a single-critic version of the algorithm. then, we tested this modified a2cm algorithm on our cooperative–competitive pursuit–evasion testbed. in the following section, we explain the theoretical background for our work. then, the experiments themselves and the testbed are introduced. we continue by presenting the results and end with our conclusions and suggestions for future work on the topic.
2. theoretical background
2.1. markov decision processes
a markov decision process is a mathematical framework for modeling decision making, as shown in figure 1. in a markov decision process there are states, selectable actions, transition probabilities and rewards [1]. at each timestep, the process starts at a state s and selects an action a from the available action space.
it gets a corresponding reward r and then finds itself in a state s′ given by the transition probability p(s, s′). a process is said to be markovian if

p(a_t = a | s_t, a_{t-1}, ..., s_0, a_0) = p(a_t = a | s_t), (1)

which means that a state's transition is based only on the previous state and the current action. thus, only the last state and action are considered when deciding on the next state. in a markov decision process, the agents are trying to find a policy that maximises the sum of discounted expected rewards. the standard solution for this uses an iterative search method that searches for a fixed point of the bellman equation:

v(s, π*) = max_a ( r(s, a) + γ ∑_{s'} p(s′ | s, a) v(s′, π*) ). (2)

2.2. reinforcement learning
when the state transition probabilities or the rewards are unknown, the problem of the markov decision process becomes a problem of reinforcement learning. in this group of problems, the agent tries to make a model of the world around itself via trial and error. one type of reinforcement learning is value-based reinforcement learning.
figure 2. the simulation environment. the squares represent the controlled agents, while the circle represents the fleeing enemy. the goal is to catch the enemy by moving horizontally or vertically.
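as a concrete illustration, the fixed-point search behind equation (2) can be sketched as a value-iteration loop; the two-state, two-action mdp below is an invented example, not the paper's testbed:

```python
# value iteration: repeatedly apply
#   v(s) = max_a [ r(s, a) + gamma * sum_s' p(s'|s, a) v(s') ]
# the two-state, two-action mdp below is made up for illustration only.
R = {  # r(s, a)
    (0, 0): 0.0, (0, 1): 1.0,
    (1, 0): 2.0, (1, 1): 0.0,
}
P = {  # p(s' | s, a): deterministic transitions here
    (0, 0): {0: 1.0}, (0, 1): {1: 1.0},
    (1, 0): {0: 1.0}, (1, 1): {1: 1.0},
}
gamma = 0.9

v = {0: 0.0, 1: 0.0}
for _ in range(1000):  # iterate until (numerical) convergence
    v = {s: max(R[(s, a)] + gamma * sum(p * v[s2] for s2, p in P[(s, a)].items())
                for a in (0, 1))
         for s in (0, 1)}
```

since gamma < 1 the bellman operator is a contraction, so the loop converges to the unique fixed point regardless of the initial v.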
algorithm 1: a2cm
initialise model: initialise n+1 hidden and n+1 output (1 value + n action) layers (4 different networks in one model, 1 critic + 3 actors)
input: number of updates, batch size
for number of updates do
  for batch size do
    calculate next actions a based on the previous state
    take actions a, get the terminal state booleans and the new rewards
    store the actions, the terminal state booleans, the calculated values, the rewards and the states
  end for
  calculate returns based on (13)
  calculate advantages based on (12)
  update the critic neural network based on the observed states and the corresponding returns: the loss is the mean squared error between the returns and the calculated values
  update the actor neural networks based on the observed states, the taken actions and the advantages: the loss is the policy loss (weighted sparse categorical cross-entropy) − the entropy loss (cross-entropy over itself)
end for

in value-based reinforcement learning, the agent tries to learn a value function that renders a value to states or to actions taken from states. these values correspond to a reward achieved by reaching a state or taking a specific action from a state. the most commonly used type of value-based reinforcement learning is q-learning [2], in which the so-called q-values are estimated for each of the state–action pairs of the world. these q-values represent the value of choosing a specific action in a state, meaning the highest reward the agent could possibly get by taking that action. the equation for updating the q-value of a state–action pair is:

Q(s, a) ← (1 − α) · Q(s, a) + α · (r + γ · max_{a'} Q(s′, a′)), (3)

where α is the learning rate and γ is the discount for the reward. the agent always selects an action that maximises the q-function for the state that the agent is in. another type of reinforcement learning is policy-based reinforcement learning. in this case, actions are derived as a function of the state itself.
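returning to the value-based case, the tabular q-learning update of equation (3) can be sketched in a few lines; the state and action counts below are illustrative only:

```python
# one tabular q-learning update, following equation (3):
#   Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (r + gamma * max_a' Q(s', a'))
# the state/action sizes are illustrative, not the paper's testbed.
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.1, 0.99

Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(N_ACTIONS)}

def q_update(Q, s, a, r, s_next):
    # bootstrap with the best q-value achievable from the next state
    best_next = max(Q[(s_next, a2)] for a2 in range(N_ACTIONS))
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * (r + GAMMA * best_next)

# a single update on an all-zero table moves Q(0, 1) towards the reward
q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

with the table initialised to zero, this one step yields Q(0, 1) = 0.1 · (1 + 0.99 · 0) = 0.1, illustrating how the learning rate blends the old estimate with the bootstrapped target.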
the most common policy-based reinforcement learning method is the policy gradient approach [19]. in this case, the agent tries to maximise the expected reward by following the policy π_θ, parametrised by θ, based on the total reward for a given trajectory r(τ). thus, the cost function of the parameters θ is the following:

J(θ) = E_{π_θ}[r(τ)]. (4)

the parameters are then tuned based on the gradient of the cost function:

θ_{t+1} = θ_t + α ∇J(θ_t). (5)

the advantages of policy-based methods include the ability to map environments with huge or even continuous action spaces and to solve environments with stochasticity. however, when using these methods, there is also a much greater possibility of getting stuck in a local maximum rather than following the optimal policy. apart from the aforementioned model-free reinforcement learning methods, there is also model-based reinforcement learning. in this case, a model is built (or just tuned) to perform the reinforcement learning. this is more sample-efficient than model-free methods and thus requires fewer samples to perform equally well, but it is very dependent on the particular model. it can be combined with model-free methods to achieve better results, as in [20].

2.3. multi-agent systems and markov games
a matrix game is a stochastic framework in which each player selects an action and gets an immediate reward based on their action and those of the other agents [1]. they are called 'matrix games' because the game can be written as a matrix, with the actions of the first two players indexing the rows and columns of the matrix. unlike markov decision processes, these games have no states. markov games, or stochastic games, are extensions of markov decision processes with multiple agents. they can also be thought of as extensions of matrix games with multiple states. in a markov game, each state has its own payoff matrix for the agents. the next state is determined by the joint actions of the agents.
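before moving on, the policy-gradient update (4)-(5) from section 2.2 can be sketched on a toy two-armed bandit, where one-step trajectories make r(τ) a single reward; everything below is an invented example:

```python
import math
import random

# reinforce-style policy-gradient ascent, as in (4)-(5):
#   theta_{t+1} = theta_t + alpha * grad J(theta),
# with grad J estimated per sample as grad log pi(a) * r.
# a two-armed bandit (one-step trajectories) is a made-up example.
random.seed(0)
theta = [0.0, 0.0]        # softmax logits, one per arm
alpha = 0.1
rewards = [1.0, 0.0]      # arm 0 always pays 1, arm 1 pays nothing

def pi(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [p / s for p in z]

for _ in range(2000):
    p = pi(theta)
    a = 0 if random.random() < p[0] else 1   # sample an action from pi
    r = rewards[a]
    # softmax score function: d log pi(a) / d theta_k = 1[k == a] - p_k
    for k in range(2):
        theta[k] += alpha * ((1.0 if k == a else 0.0) - p[k]) * r
```

after training, the policy concentrates almost all probability on the rewarding arm, which is exactly the gradient-ascent behaviour described by (5).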
a game is markovian if

p(a_i^t = a_i | s^t, a_i^{t-1}, ..., s^0, a_i^0) = p(a_i^t = a_i | s^t), (6)

so the next state depends only on the current state and the current actions taken by all agents.

2.4. deep reinforcement learning
a reinforcement learning algorithm is called 'deep' if it is assisted by a neural network.
figure 3. an example of catching the randomly moving opponent.
figure 4. an example of catching the fleeing opponent.
a neural network is a function approximator built from (sometimes billions of) artificial neurons. an artificial neuron, which is based on the real neurons of the brain, has the following equation:

y = act(∑_i w_i x_i + b), (7)

where x is the input vector, w is the weight vector, b is the bias and act() is the activation function that introduces nonlinearity into an otherwise linear system. the parameters (w and b) are tuned with backpropagation, calculating the partial derivative of the error for all parameters, propagated from the final error back to the input vector. the selection of the activation function is important in deep learning because of vanishing gradients: when many layers are stacked upon each other, the higher layers' gradients become too small during backpropagation, and thus those layers are difficult to train. a basic activation function is the sigmoid (logistic) activation function:

y = 1 / (1 + e^{−x}). (8)

a common activation function in deep learning is the rectified linear unit (relu) [21], whose gradients vanish less and which is therefore easier to train. it has the following equation:

y = x if x > 0, y = 0 if x ≤ 0. (9)

for multi-class classification, another activation function is used: the softmax activation function. when it is used as the last layer, the probabilities of all of the output neurons add up to exactly 1. thus, in reinforcement learning, it is useful as the probability distribution over the possible actions. it has the following equation:

y_i = e^{x_i} / ∑_j e^{x_j} .
(10)

deep reinforcement learning algorithms have several advantages compared to traditional reinforcement learning algorithms. first of all, they are not based on a state table, as the states are approximated (which is much more robust than using linear function approximators). this allows many more states to be mapped and even allows for continuous states. however, these algorithms are more prone to diverging, and thus many optimisations have been developed for deep reinforcement learning algorithms to provide better convergence.

2.5. actor–critic
an actor–critic system combines value-based and policy-based reinforcement learning. in these systems, there are two distinct parametrised networks: the critic, which estimates a value function (as in value-based reinforcement learning), and the actor, which updates the policy network in the direction suggested by the critic (as in policy-based reinforcement learning). actor–critic algorithms follow an approximate policy gradient:

∇_θ J(θ) ≈ E_{π_θ}[∇_θ log π_θ(s, a) Q_w(s, a)],
Δθ = α ∇_θ log π_θ(s, a) Q_w(s, a). (11)

approximating the policy gradient introduces bias into the system. a biased policy gradient may not find the right solution, but if we choose the value function approximation carefully, we can avoid introducing any bias. actor–critic systems generally perform better than regular reinforcement learning algorithms. the critic network ensures that the system does not get stuck in a local maximum; meanwhile, the actor network enables the mapping of environments with huge action spaces and provides better convergence [19].

2.6. the a2c algorithm
a2c stands for synchronous advantage actor–critic. it is a one-environment-at-a-time derivation of the asynchronous advantage actor–critic (a3c) algorithm [22], which processes multiple agent–environments simultaneously. in that algorithm, multiple workers update a global value function, thus exploring the state space effectively. however, the synchronous advantage
however, the synchronous advantage figure 5. the performance of the original a2c algorithm on our benchmark. figure 6. performance of the modified a2c algorithm on our benchmark. figure 7. performance of the original a2c algorithm on our benchmark with collision (with terminating at collision). acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 32 actor–critic provides better performance than the asynchronous model. advantage function is a method that significantly reduces the variance of the policy gradient by subtracting the cumulative reward using a baseline to make smaller gradients; thus, it provides much better convergence than regular q-values. it has the following equation: 𝐴(𝑠, 𝑎) = 𝑄(𝑠, 𝑎) − 𝑉(𝑠) . (12) returns are calculated using the equation: 𝐺𝑡 = 𝑟𝑡 + 𝛾 ∗ 𝑟𝑡+1 ∗ (1 − 𝑇𝑡 ) , (13) where 𝐺 is the return, 𝑟𝑡 is the reward at time t, 𝛾 is the discount factor and 𝑇𝑡 indicates whether the step at time 𝑡 is a terminal state. 3. experiments and results the testbed is a 5 × 5 grid with three cooperating agents (the squares) in three of the four corners of the environment. in the middle, there is a fourth agent (the circle). the former three agents have the objective of catching the fourth agent, which moves randomly. this testbed is analogous to pursuit–evasion (or predator–prey) scenarios that are also significant in robotics. the agents can move in four directions: up, down, left or right. when one of the three agents catches the fourth one, the episode ends. a penalty is introduced to the cooperative agents every timestep; thus, the return of an episode is maximised by ending the episode as soon as possible (i.e. catching the fleeing agent as quickly as possible). each episode must end in 1,000 timesteps to avoid getting stuck. in the modification of the a2c algorithm, we followed the theory of centralised learning and decentralised execution. 
this means that the execution is decentralised, but the learning phase can be assisted by additional information from other agents. in our case, we used the information that the agents are cooperative; thus, they acquire the same rewards (and returns). as noted before, decentralised execution is most helpful in real-world scenarios in which communication difficulties make a centralised task-solving architecture impossible. such scenarios are often encountered in robotics. in our experiment, the many a2c models, each with one actor and one critic, were replaced by one model with one critic and multiple actors. the pseudocode of the algorithm can be seen in algorithm 1. all neural network layers were subclasses of the tensorflow model class, which provides useful functions for training and prediction – even for batch tasks – when only the forward steps of the network are provided. the optimiser was rmsprop, with a learning rate of 7 · 10−3. the value-estimator critic contained a hidden layer with 128 units and relu activation and one output layer with one unit. its loss function was a simple mean squared error between the returns and the calculated values. the actors contained a hidden layer with 128 units and an output layer with four units (the number of actions in the action space). their loss function contained two distinct parts: the policy loss and the entropy loss. the policy loss was a weighted sparse categorical cross-entropy loss, where the weights were given by the advantages. this method increased the convergence of the algorithm. the entropy loss is a method for increasing exploration by encouraging actions that are not in the local minimum. this is very important for tasks with sparse rewards, because the agent does not receive feedback often. this loss was calculated as a cross-entropy over itself, and it was subtracted from the policy loss because it should be maximised, not minimised.
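a minimal sketch of the two-part actor loss just described, assuming the same advantage-weighted cross-entropy and entropy terms (the function name and the coefficient value here are illustrative, not the authors' code):

```python
import math

# actor loss sketch: advantage-weighted sparse categorical cross-entropy
# (policy loss) minus an entropy bonus computed as the cross-entropy of
# the action distribution over itself. names/constants are illustrative.
ENTROPY_COEF = 1e-4

def actor_loss(action_probs, taken_actions, advantages):
    policy_loss = 0.0
    entropy = 0.0
    for probs, a, adv in zip(action_probs, taken_actions, advantages):
        policy_loss += -math.log(probs[a]) * adv          # weighted nll
        entropy += -sum(p * math.log(p) for p in probs if p > 0)
    n = len(action_probs)
    # entropy is subtracted: minimising the loss maximises the entropy
    return policy_loss / n - ENTROPY_COEF * entropy / n
```

minimising this quantity pushes probability mass towards high-advantage actions while the entropy term keeps the distribution from collapsing too early.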
the entropy loss was tuned by a constant, which was taken as 1 · 10−4. episode rewards were stored in a list to which a value of 0 was appended at each episode's end; during an episode, only the last value of the list was incremented by the reward of the given step. for the training, a batch-sized container was created for the actions, rewards, terminal state booleans, state values and observed states. then, a two-level loop was started: the outer one was run for the number of required updates (set by us), while the inner loop was run as many times as the batch size. the state observations, the taken actions (which were selected from a probability distribution based on the actor neural network outputs), the state values, the rewards, the terminal state booleans and the last observed state were stored in the aforementioned containers. next, the returns and advantages were calculated on the batch using the collected data, and then a batch training was performed on those data. there was no need to calculate the gradients themselves due to the use of the keras api. during our experiment, the system was run for 5,000 updates in batches of 128, thus running the environments over a total of 640,000 steps. gamma was taken to be 0.99.
figure 8. performance of the a2cm algorithm on our benchmark with collision (with terminating at collision).
figure 9. number of steps per episode of the original a2c algorithm on our benchmark with collision (without terminating at collision).
figure 10. number of steps per episode of the a2cm algorithm on our benchmark with collision (without terminating at collision).
figure 3 and figure 4 show the ends of some remarkable episodes of catching the opponent. figure 5 and figure 6 show the results of our experiments.
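the batch-collection and update loop described above can be sketched as follows; the environment, actor and critic stubs are invented placeholders rather than the authors' implementation:

```python
import random

# skeleton of the a2cm batch loop described in the text; the env and the
# actor/critic stubs below are stand-ins, not the paper's implementation.
N_UPDATES, BATCH_SIZE, GAMMA = 3, 8, 0.99
random.seed(1)

def env_step(state, actions):
    # toy chain env: advance one cell, -1 reward per step, done at cell 5
    return state + 1, -1.0, state + 1 >= 5

def act(state):
    # stub actors: a random joint action for the three pursuers
    return [random.randrange(4) for _ in range(3)]

def value(state):
    # stub critic: constant value estimate
    return 0.0

state = 0
for _ in range(N_UPDATES):
    states, actions, rewards, terminals, values = [], [], [], [], []
    for _ in range(BATCH_SIZE):                 # collect one batch
        a = act(state)
        next_state, r, done = env_step(state, a)
        states.append(state)
        actions.append(a)
        rewards.append(r)
        terminals.append(done)
        values.append(value(state))
        state = 0 if done else next_state       # reset env at episode end
    # compute returns as in (13), bootstrapping from the last state's value
    G, returns = value(state), [0.0] * BATCH_SIZE
    for t in reversed(range(BATCH_SIZE)):
        G = rewards[t] + GAMMA * G * (1.0 - terminals[t])
        returns[t] = G
    advantages = [g - v for g, v in zip(returns, values)]
    # here: fit the critic to the returns (mse) and the actors to the
    # advantage-weighted policy loss, e.g. via keras train_on_batch
```

note how the terminal-state boolean zeroes the bootstrapped term in (13), so returns never leak across episode boundaries within a batch.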
it is important to note the x-coordinates in figure 5 and figure 6: for the same number of steps, the original algorithm ran for 40,340 episodes, while the modified algorithm managed to complete 82,119 episodes. this means that the a2cm algorithm spent half as many steps in an episode and was able to catch the fleeing opponent in, on average, half of the time required by the agent based on the original algorithm. these figures also show that the original algorithm did not find an optimal solution without diverging later, and even between divergences, the solutions were not as stable. our agent, on the other hand, found a solution with no divergences later and only small divergences after the first half of the episodes. the a2cm algorithm found a solution with which it can catch the opponent in 6 steps, and it maintained this knowledge for 20,000 episodes, with one positive spike where it found the solution to the problem in just 3 steps. the run times are worth considering as well. the regular a2c algorithm took 14,567.45 seconds to run, while the modified one ran for 14,458.28 seconds. it is worth noting that, because almost twice as many episodes were completed, the environment had to be reset twice as often, so the modified algorithm is even faster than the normal one. later, the difference between the algorithms was tested with collision turned on, bringing the problem set even closer to real-world robotics scenarios. in this case, the agents received a penalty if they collided with each other. this makes the environment much harder to learn, as failure will probably only result from chasing the enemy agent. it also makes the training process harder, as the steps leading to success are not as easy to determine; a collision that occurs before the enemy is caught will make similar attempts less likely to be selected as actions.
when considering the training process of the environment with collision detection turned on, it is important to pay attention to the ratio between the negative reward for each step and the negative reward for a collision. the larger the penalty for a collision, the better the agents will evade collisions; otherwise, they will merely be optimised to finish the episode as fast as possible. for this reason, the negative reward for each step was selected as −1,000, and the negative reward for a collision was −150,000,000, providing a ratio that is large enough to encourage the agents to follow a collision-evasion policy. in the first experiment on the environment with collision detection, we set the algorithms such that a collision would terminate the episode. this scenario is analogous to certain scenarios in robotics in which collisions can cause malfunctions in the robots themselves and should be evaded even via high-level control. apart from turning on the collision, all other conditions and parameters of the training process were the same. figure 7 and figure 8 show the cumulated rewards per episode for the original a2c and our a2cm algorithm, respectively. it can be seen that, while neither was able to solve the environment over the timespan of the training, there was a time span of ca. 700 episodes in which our algorithm was able to catch the enemy without colliding. the original algorithm lacked any such longer periods. the training of the original algorithm in this case took 14,173.42 seconds, while the training of the a2cm took 14,659.00 seconds. it is worth noting that the original algorithm completed 1,665 episodes and the a2cm completed 3,723; the different numbers of reinitialisations should be considered when comparing the training times. to make the environment easier to train on, the second experiment with collisions was conducted such that the episodes only terminated if the opponent was caught.
this way, the episodes were longer and always terminated successfully, and they might therefore provide better training information than the setting of the previous experiment. this scenario is analogous to problems in robotics in which the presence of two robots in the same area is discouraged, such as area scanning scenarios or subtasks in which two robots should not scan the same area at once. just as in the previous experiment, all other parameters were left as they were in the training of the system without collision. figure 9 and figure 10 show the number of steps required to finish each episode for the original a2c and the modified a2cm algorithms, respectively, while figure 11 and figure 12 show the cumulated (negative) rewards per episode (higher is better) for the a2c and the a2cm algorithms, respectively. it can be seen that, while the original a2c algorithm did not show any clear sign of successful training, there is some indication of success for the a2cm algorithm. approaching the end of the training process, the number of steps was kept low, and, as per figure 12, collisions were also evaded, with the exception of some episodes. the original algorithm completed 1,177 episodes, while the modified one completed 1,964, which can also be seen as a sign of the superiority of the a2cm algorithm. regarding the training times, the original algorithm was trained for 13,981.22 seconds, while the modified one was trained for 19,519.85 seconds. in this case, it is clear that our algorithm used significantly more training time.
figure 11. rewards per episode of the a2cm algorithm on our benchmark with collision (without terminating at collision).
figure 12. rewards per episode of the original a2c algorithm on our benchmark with collision (without terminating at collision).
4.
conclusion
looking at the previous section, we can conclude that our modification of the original a2c algorithm, the a2cm algorithm, was able to perform much better than the original on our testbed without collision. to some extent, it outperformed the original a2c algorithm even in environments with collision; thus, it can be recommended for tasks in robotics. however, the algorithm has the caveat of being usable only when the agents are fully cooperative and do not have special, predefined roles. there are still many ways to improve upon the current state of our algorithm. one possible improvement would be to introduce a variable learning rate, such as win or learn fast [3], into a deep reinforcement learning algorithm. another possible improvement is to include the fleeing agent in the algorithm so that the algorithm encompasses the full cooperative–competitive nature of the environment. in addition, other activation functions could be tried to check their behaviour; for example, exponential linear units [23] might have better convergence at the price of slightly more training time. the algorithm could be extended using recurrent neural networks so that it could handle partially observable markov decision processes, in which the full state is unknown.
acknowledgement
the research reported in this paper and carried out at the budapest university of technology and economics was supported by the tkp2020, institutional excellence program of the national research development and innovation office in the field of artificial intelligence (bme ie-mi-sc tkp2020). the research was supported by the efop-3.6.2-16-201600014 project, which was financed by the hungarian ministry of human capacities.
references
[1] m. l. littman, markov games as a framework for multi-agent reinforcement learning, proceedings of the eleventh international conference on machine learning, new brunswick, usa, 10 – 13 july 1994, pp. 157-163. doi: 10.1016/b978-1-55860-335-6.50027-1
[2] j. hu, m.
wellman, nash q-learning for general-sum stochastic games, journal of machine learning research 4 (2003), pp. 1039-1069. online [accessed 6 september 2021] https://www.jmlr.org/papers/volume4/hu03a/hu03a.pdf [3] m. bowling, m. veloso, multiagent learning using a variable learning rate, artificial intelligence 136 (2002), pp. 215-250. doi: 10.1016/s0004-3702(02)00121-2 [4] m. h. bowling, m. m. veloso, simultaneous adversarial multirobot learning, ijcai (2003) pp. 699-704. doi: 10.5555/1630659.1630761 [5] v. mnih, k. kavukcuoglu, d. silver, a. graves, i. antonoglou, d. wierstra, m. riedmiller, playing atari with deep reinforcement learning, arxiv (2013), 9 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1312.5602 [6] j. foerster, n. nardelli, g. farquhar, t. afouras, p. h. s. torr, p. kohli, s. whiteson, stabilising experience replay for deep multiagent reinforcement learning, pmlr 70 (2017) pp. 1146-1155. doi: 10.5555/3305381.3305500 [7] s. omidshafiei, j. pazis, c. amato, j. p. how, j. vian, deep decentralized multi-task multi-agent reinforcement learning under partial observability, pmlr 70 (2017) pp. 2681-2690. doi: 10.5555/3305890.3305958 [8] j. foerster, g. farquhar, t. afouras, n. nardelli, s. whiteson, counterfactual multi-agent policy gradients, proceedings of the aaai conference on artificial intelligence, new orleans, usa, 2 – 7 february 2018, pp. 1146-1155, arxiv (2017), 12 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1705.08926 [9] p. peng, y. wen, y. yang, q. yuan, z. tang, h. long, j. wang, multiagent bidirectionally-coordinated nets: emergence of human-level coordination in learning to play starcraft combat games, arxiv (2017), 10 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1703.10069 [10] p. sunehag, g. lever, a. gruslys, w. m. czarnecki, v. zambaldi, m. jaderberg, m. lanctot, n. sonnerat, j. z. leibo, k. tuyls, t.
graepel, value-decomposition networks for cooperative multiagent learning, arxiv (2017), 17 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1706.05296 [11] t. rashid, m. samvelyan, c. s. de witt, g. farquhar, j. foerster, s. whiteson, qmix: monotonic value function factorisation for deep multi-agent reinforcement learning, proceedings of machine learning research, stockholm, sweden, 10 – 15 july 2018, pp. 4295-4304. arxiv (2018), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1803.11485 [12] r. lowe, y. wu, a. tamar, j. harb, p. abbeel, i. mordatch, multiagent actor-critic for mixed cooperative-competitive environments, advances in neural information processing systems 30 (2017), pp. 6379-6390. doi: 10.5555/3295222.3295385 [13] s. li, y. wu, x. cui, h. dong, f. fang, s. russell, robust multiagent reinforcement learning via minimax deep deterministic policy gradient, proceedings of the 33rd aaai conference on artificial intelligence, honolulu, hawaii, usa, 27 january – 1 february 2019, pp. 4213-4220. doi: 10.1609/aaai.v33i01.33014213 [14] p. casgrain, b. ning, s. jaimungal, deep q-learning for nash equilibria: nash-dqn, arxiv (2019), 16 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1904.10554 [15] o. vinyals, t. ewalds, s. bartunov, p. georgiev, a. s. vezhnevets, m. yeo, a. makhzani, h. kättler, j. agapiou, j. schrittwieser, j. quan, s. gaffney, s. petersen, k. simonyan, t. schaul, h. van hasselt, d. silver, t. lillicrap, k. calderone, p. keet, a. brunasso, d. lawrence, a. ekermo, j. repp, r. tsing, starcraft ii: a new challenge for reinforcement learning, arxiv (2017), 20 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1708.04782 [16] m. samvelyan, t. rashid, c. s. de witt, g. farquhar, n. nardelli, t. g. j. rudner, c. hung, p. h. s. torr, j. foerster, s. whiteson, the starcraft multi-agent challenge, arxiv (2019), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1902.04043 [17] s. 
liu, g. lever, j. merel, s. tunyasuvunakool, n. heess, t. graepel, emergent coordination through competition, arxiv (2019), 19 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1902.07151 [18] n. bard, j. n. foerster, s. chandar, n. burch, m. lanctot, h. f. song, e. parisotto, v. dumoulin, s. moitra, e. hughes, i. dunning, s. mourad, h. larochelle, m. g. bellemare, m. bowling, the hanabi challenge: a new frontier for ai research, artificial intelligence 280 (2020), 103216. doi: 10.1016/j.artint.2019.103216 [19] r. s. sutton, d. mcallester, s. singh, y. mansour, policy gradient methods for reinforcement learning with function approximation, proceedings of the 12th international conference on neural information processing systems, denver, usa, 29 november – 4 december 2000, pp. 1057-1063. doi: 10.5555/3009657.3009806 [20] a. nagabandi, g. kahn, r. s. fearing, s. levine, neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning, 2018 ieee international conference on robotics and automation, brisbane, australia, 21 – 26 may 2018, pp. 7559-7566. doi: 10.1109/icra.2018.8463189 [21] a. f.
agarap, deep learning using rectified linear units (relu), arxiv (2018), 7 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1803.08375 [22] v. mnih, a. p. badia, m. mirza, a. graves, t. p. lillicrap, t. harley, d. silver, k. kavukcuoglu, asynchronous methods for deep reinforcement learning, proceedings of machine learning research, new york, usa, 20 – 22 june 2016, pp. 1928-1937. doi: 10.5555/3045390.3045594 [23] d. a. clevert, t. unterthiner, s. hochreiter, fast and accurate deep network learning by exponential linear units (elus), arxiv (2015), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1511.07289 image analysis for the sorting of brick and masonry waste using machine learning methods acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 - 5 elske linß1, jurij walz1, carsten könke1 1 materialforschungs- und -prüfanstalt at the bauhaus-university of weimar (mfpa), coudraystraße 9, 99423 weimar, germany section: research paper keywords: optical sorting of building material; masonry waste; image analysis; classification; machine learning citation: elske linß, jurij walz, carsten könke, image analysis for the sorting of brick and masonry waste using machine learning methods, acta imeko, vol. 12, no.
2, article 15, june 2023, identifier: imeko-acta-12 (2023)-02-15 section editor: eric benoit, université savoie mont blanc, france received july 8, 2022; in final form february 27, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work is supported by the tmwwdg of the free state of thuringia, germany. corresponding author: elske linß, e-mail: elske.linss@mfpa.de 1. introduction and state of the art currently, 20-23 million tons of masonry building materials (mortar and plaster, lightweight concrete, aerated concrete, sand-lime bricks, and bricks) are produced annually in germany. the quantity of bricks produced, including roof tiles, is approx. 10-15 million tons [1]. current guiding strategies and ambitious environmental policy goals increasingly call on manufacturers of mineral building products to introduce material cycles [2], [3]. pure brick aggregates are currently used in sports field construction, in vegetation applications, in road construction as a proportionate component of frost protection and gravel base courses, and in building construction as recycled aggregate for concrete production [1]. pure brick recycled aggregates (figure 1), which are made from low-dense waste and can be obtained from masonry waste, can be returned to brick production as recycled material after being ground again. a prerequisite for this, however, is that no mortar adhesions or other impurities may be present [4]-[6]. based on the investigations in the projects [5], [7], a leaflet on the use of recycled bricks in the brick industry was drafted.
it recommends pure hard-fired material, pure/hard material, low-fired material, and masonry waste as suitable feed material for the production of roof tiles, facing bricks, and vertically perforated bricks. depending on the type of brick, up to 25 wt.-% can be reused [6], [7]. after grinding, masonry waste can also be used as a cement composite material in the cement industry [8]. here it is necessary to be able to distinguish and separate low-fired and high-fired brick types, mortar, concrete, and other components from each other. figure 1. masonry waste. abstract this paper describes different machine learning methods for recognizing and distinguishing brick types in masonry debris. certain types of bricks, such as roof tiles, facing bricks and vertically perforated bricks can be reused and recycled in different ways if it is possible to separate them by optical sorting. the aim of the research was to test different classification methods from machine learning for this task based on high-resolution images. for this purpose, image captures of different bricks were made with an image acquisition system, the data was pre-processed, segmented, significant features selected and different ai methods were applied. a support vector machine (svm), multilayer perceptron (mlp), and k-nearest neighbour (k-nn) classifier were used to classify the images. as a result, a recognition rate of 98 % and higher was achieved for the classification into the three investigated brick classes. optical single-grain sorting methods offer the possibility of differentiating even very similar materials [10], [11], [12]. optical analysis and sorting methods are not yet used on a large scale in the sorting of construction and demolition waste. the optical sorting methods are essentially based on innovative detection routines that can be integrated into corresponding software (figure 2).
the aim of the investigations is the development of a recognition routine for the differentiation of different brick types for single-variety brick and masonry rubble. features from the high-resolution image and spectrum, together with evaluation algorithms from machine learning, will be used for the recognition task. in a first step, the main focus is on particles with a size of 8/16 mm; later, the work will be extended to particle sizes of 2/4 and 4/8 mm. sorting out impurities such as mortar, gypsum and concrete is particularly important. the fundamentals are being created in order to develop more effective sorting processes in the recycling of residual brick masses in the future. this paper investigates the recognition of brick types using rgb images and describes innovative machine learning methods. this provides the basis for the development of optical sorting procedures. in the investigations, recognition routines and algorithms are tested for the correct differentiation and separation of various brick types based on image features. 2. investigations figure 3 gives an overview of the test procedure. 2.1. materials and categories at the mfpa, initial preliminary investigations were carried out to distinguish between brick types on the basis of visual image information. in the investigations, different brick products were distinguished, which can be assigned to the following brick categories: • category ii: roof tiles and facing bricks, • category iii: vertically perforated bricks and • category v: other. first, a sample collection representing a nationwide cross-section of brick varieties and adhering mineral building materials was compiled. the brick samples in the different categories are composed of different new (unused) and recycled (used) bricks without adherents. furthermore, other relevant building materials such as aerated concrete, concrete, mortar, sand-lime bricks, etc. (new and recycled) have also been included as impurities.
the measured material parameters water absorption after 24 h and bulk density according to din 4226-101 for all samples included in the investigations are shown in the diagram (figure 4). a total of 7,765 image recordings of samples were made. category i (high-fired material) is still missing from these investigations, as too little material was available for examination. category iv (brick waste from recycling plant) was also excluded at this point, as the samples were not homogeneous and can therefore belong to different categories. in future works, both categories will be added to the data set. the particle size is 8-16 mm. the brick samples listed in table 1 were divided into three classes, which differ in bulk density. figure 2. principle of optical single-grain sorting method. figure 3. overview of the experimental programme. figure 4. water absorption versus bulk density for all samples. table 1. example images for the investigated three brick categories: category ii (roof tiles and facing bricks): 3,203 objects; category iii (vertically perforated bricks): 2,314 objects; category v (others): 2,248 objects. 2.2. image acquisition and used software a data set of high-resolution images of the brick particles was created. the brick samples were examined under different conditions: on the one hand, three different types of lighting and, on the other hand, different combinations of features were used. this allows the influence of the illumination on the analysis results to be investigated. figure 5 shows the used image acquisition system “qualileo”. it consists of an rgb matrix camera with a 12 mp sensor and a step-less adjustable lighting system (figure 6). 2.3. investigations and results the provided algorithms are analysed on the presented data set with individual parameters, and the average recognition rate (rr) is recorded. the achieved rr and standard deviation (stdev) are used to compare all of the results.
all of the investigations were carried out in the halcon programming environment. the investigated classifiers have different setting parameters. in the k-nn classifier, the number of nearest neighbours k was set to 5. the rbf kernel was used with the svm, and the γ parameter was set to 0.02. in addition, the one-versus-all classification mode was used; with this setting, a multi-class problem is reduced to binary decisions in which each class is compared to all the other classes. for the mlp, a softmax activation function is used, as it performs best at classification tasks with multiple independent classification outputs. in the hidden layer, the number of hidden units is set to 15. in this study, the data set is split into two parts: 80 % and 20 %. the classifiers are trained with the major part of the data set, while they are tested with the smaller fraction of the data. 2.3.1. results before selection and combination of features at the beginning, the typical feature groups were analysed. table 2 shows a summary of these results. the first result was that lighting 2 (half of a ring light that simulates light from the side) always produced significantly worse results, which is why in the following only the results for lighting l1 (full ring light on) and l3 (incident light) are presented. this data set showed that colour features alone are the most important feature group for this recognition task: the recognition rates achieved are always above 95 % for all data sets and classifiers, as shown in table 2. the region features have only little impact on recognition. additionally, the texture features prove more effective, with over 80 % recognition rates for svm and mlp. the k-nn classifier had the lowest recognition results: aside from the colour features, the accuracy of all other feature groups is insufficient. this is because the k-nn algorithm is more vulnerable to redundant features than robust classifiers such as svm and mlp.
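the study itself uses the classifiers provided by halcon, but the k-nn decision rule with the stated settings (k = 5, 80 %/20 % train/test split, recognition rate as the comparison metric) can be illustrated with a minimal, self-contained python sketch. the three-dimensional "colour feature" vectors and class centres below are purely hypothetical stand-ins for the real feature data:

```python
import random
from collections import Counter

def knn_predict(train, query, k=5):
    """classify `query` by majority vote among its k nearest training
    samples (squared euclidean distance), mirroring the k = 5 setting."""
    nearest = sorted(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# purely hypothetical 3-d colour-feature clusters for the three categories
random.seed(0)
def make_samples(centre, label, n=100):
    return [([random.gauss(c, 0.05) for c in centre], label) for _ in range(n)]

data = (make_samples((0.8, 0.3, 0.2), "II")     # roof tiles / facing bricks
        + make_samples((0.6, 0.4, 0.3), "III")  # vertically perforated bricks
        + make_samples((0.4, 0.5, 0.5), "V"))   # others
random.shuffle(data)

split = int(0.8 * len(data))                    # 80 % training, 20 % testing
train, test = data[:split], data[split:]

correct = sum(knn_predict(train, x) == y for x, y in test)
rr = 100.0 * correct / len(test)                # recognition rate in %
print(f"recognition rate: {rr:.2f} %")
```

in the paper, these steps were performed in halcon on the real colour, region, texture and gray-value features rather than on synthetic data.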
this research also found that using the largest possible number of features does not always result in better accuracy, or that the improvement is minor, because many of the features are redundant. besides that, with a large number of features, the classifier's training time increases. a further improvement is to be achieved with feature selection: only the most important features for the study's problem are calculated by this algorithm. 2.3.2. results after selection and combination of features the best results by feature selection are shown in figure 7 and figure 8. figure 5. used image acquisition system qualileo. figure 6. step-less adjustable lighting system on the qualileo. table 2. recognition rates (rr) for different classifiers learned by various numbers and kinds of features for different lightings (l1 = lighting 1, l3 = lighting 3); each entry gives rr in % for l1 / l3: 18 colour features: svm 96.27 / 97.04, mlp 98.30 / 98.23, k-nn 95.91 / 95.24; 32 region features: svm 36.62 / 37.71, mlp 44.98 / 46.33, k-nn 35.26 / 37.52; 195 texture features: svm 86.16 / 83.78, mlp 84.94 / 81.92, k-nn 65.83 / 64.74; 382 gray features: svm 78.38 / 77.48, mlp 74.13 / 85.14, k-nn 42.92 / 44.08; 432 (all) features: svm 80.05 / 79.99, mlp 93.89 / 94.02, k-nn 58.42 / 60.01. figure 7. results of the reached recognition rates for the different brick types using different classifiers on the l1 data set (category ii roof tiles and facing bricks, category iii vertically perforated bricks, category v others). the very high recognition rates for all classifiers are remarkable. also, there is no significant difference in the detection rate in the different lighting settings: both ring light images and incident light images have very similar results. the best result was achieved by the mlp with 98.51 % for lighting 1. with the svm on lighting l3, an average recognition rate of 97.92 % was reached. with an rr of 96.40 %, a slightly lower result is achieved by the k-nn classifier.
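the feature selection step referred to above can be pictured as a greedy forward search: starting from an empty feature vector, the currently most promising feature is added as long as the score improves. the sketch below is a generic illustration with a hypothetical scoring function and hypothetical feature names, not halcon's actual implementation (which additionally re-checks whether an already added feature has become unnecessary):

```python
def greedy_select(features, score):
    """greedy forward selection: repeatedly add the single feature that
    improves the score the most; stop when no candidate helps any more.
    (the pruning step of the halcon variant is omitted for brevity.)"""
    selected, best = [], score([])
    while True:
        gains = {f: score(selected + [f]) for f in features if f not in selected}
        if not gains:
            break
        cand, s = max(gains.items(), key=lambda kv: kv[1])
        if s <= best:
            break  # adding any further feature no longer improves the score
        selected.append(cand)
        best = s
    return selected, best

# hypothetical scoring function: three informative features, the rest add noise
useful = {"hue_mean": 0.40, "sat_mean": 0.30, "gray_entropy": 0.15}
def toy_score(feats):
    return sum(useful.get(f, -0.02) for f in set(feats))

all_features = list(useful) + ["region_area", "tex_energy"]
selected, best = greedy_select(all_features, toy_score)
print(selected, round(best, 2))  # only the three informative features are kept
```

this mirrors the behaviour described for figure 9: the score rises while informative features are added and would fall again if redundant, noisy features were forced in.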
however, the difference is marginal. a look at the individual categories also shows that category iii is recognised with an over 99 % recognition rate by both svm and mlp; k-nn is just below this result. category ii is always recognised equally well by svm and mlp (close to 98 %); here, k-nn is clearly worse in the recognition (about 96 %). category v shows a lower recognition rate between 96 % and 97 % for all classifiers, but again k-nn shows the lowest results of all. compared to the results without feature selection and combination, no large increase in the recognition rates was observed. with svm and k-nn, the detection rate increased by only about 1 %; with mlp, it remained at about the same rate. finally, figure 9 shows the importance of the feature selection procedure for classical classifiers: it is not useful to use as many features as possible for classification, but only the most important ones. this can be seen in the progression of the score: the rr increases with the number of features until it reaches its maximum value; after that, the rr decreases as more features are added, because many of the features are redundant and noisy. as a feature selection method, halcon provides a greedy algorithm: the currently most promising feature is added to the feature vector, and it is then evaluated whether one of the newly added features is unnecessary. in summary, it is important to note that the high recognition rates after feature selection were achieved with a relatively small number of features (table 3). the average rr is over 95 % for all classifiers and both data sets l1 and l3; hence, the different lighting settings had only a small effect on the result of the optimised classifiers. additionally, the standard deviation is low, indicating a stable classification process. this simplifies the future implementation of the application and reduces the required computing effort and time. 3.
conclusion and future work in this work, a method for recognizing different kinds of brick using image processing and machine learning was investigated. a very good differentiation of the selected brick categories roof tiles and facing bricks or vertically perforated bricks could be demonstrated. with an overall recognition rate of about 98 %, the three categories are separated. in the future, the data set will be extended by additional categories, and further classifier methods (also deep learning) will be tested. the algorithms obtained are to be further used for the development of optical sorting methods. acknowledgement the results were developed within the framework of the research group "sensor technology for products and processes" at mfpa weimar, which is funded by the free state of thuringia. the investigations will be continued in an aif-igf research project. we would like to express our sincere thanks for the funding from the thuringian ministry of economics, science and digital society, and the federal ministry of economics and climate protection. the responsibility for the research content lies with the authors. references [1] a. müller, i. martins, recycling of building materials: generation – processing – utilization, springer vieweg (2022). doi: 10.1007/978-3-658-34609-6 [2] bundesverband der deutschen ziegelindustrie e. v., roadmap for a greenhouse gas neutral brick and roof tile industry in germany, futurecamp climate gmbh (2021). online [accessed 26 may 2023] figure 8. results of the reached recognition rates for the different brick types using different classifiers on the l3 data set (category ii roof tiles and facing bricks, category iii vertically perforated bricks, category v others). figure 9. mean recognition rate (rr) of the mlp classifier (on lighting 1 data set) as a function of the number of features. table 3.
mean value of recognition rates (rr) and standard deviation (sd) for all classifiers and different lightings (l1 = lighting 1, l3 = lighting 3); each entry gives rr in %, sd in % and the number of features: svm: l1 (ring light) 97.85, 0.42, 23 features; l3 (incident light) 97.92, 0.42, 23 features; mlp: l1 98.51, 0.16, 20 features; l3 98.37, 0.24, 19 features; k-nn: l1 96.40, 0.54, 18 features; l3 96.05, 0.31, 13 features. https://cerameunie.eu/topics/cerame-unie-sectors/sectors/roadmap-for-a-greenhouse-gas-neutral-brick-roof-tile-industry-in-germany/ [3] mehr recycling von bau- und abbruchabfällen in europa notwendig, online [accessed 3 january 2020] [in german] https://www.recyclingmagazin.de/2016/10/02/352716 [4] m. landmann, a. müller, u. palzer, b. leydolph, leistungsfähigkeit von aufbereitungsverfahren zur rückgewinnung sortenreiner materialfraktionen aus mauerwerk – teil 1 und 2, at mineral processing, heft 03 und heft 04, issn 1434-9302, 55. jahrgang (2014). [in german] [5] aif-igf vorhaben 18889 bg: charakterisierung sortierter ziegel-recycling-materialien anhand physikalischer und chemisch-mineralogischer eigenschaften für die generierung neuer stoffströme, schlussbericht (2019). [in german] [6] s. petereit, ressourceneffizienz ziegel aus alternativen rohstoffen. vortrag zum izf-seminar, essen, germany, 19.-20. september 2019. [in german] [7] s. sabath, charakterisierung sortierter ziegel-rc-materialien, vortrag zum izf-seminar, essen, germany, 19.-20. september 2019. [in german] [8] verein deutscher zementwerke e.v., brechsand als zementhauptbestandteil – leitlinien künftiger anwendung im zement und beton: die potenziale der recyclingbrechsande: von der aufbereitung mineralischer bauabfälle bis zur herstellung ressourcenschonender betone. information betontechnik, 11 (2019). [in german] [9] mvtec software gmbh, halcon.
online [accessed 26 may 2023] https://www.mvtec.com/doc/halcon/1712/de/toc_regions_features.html [10] e. linß, a. karrasch, m. landmann, sorting of mineral construction and demolition waste by near-infrared technology, hiser int. conference, 21-23 june 2017, delft, the netherlands, isbn/ean: 978-94-6186-826-8, pp. 29-32 [11] aif-zim vorhaben fkz zf 4144903gr6, analyseverfahren zur automatisierten qualitätssicherung für rezyklierte gesteinskörnungen auf basis hyperspektraler bildinformationen im vis und nir. schlussbericht, (2018). [in german] [12] e. linß, d. garten, a. karrasch, k. anding, p. kuritcyn, automatisierte sortieranalyse für rezyklierte gesteinskörnungen, tagungsbeitrag, fachtagung recycling r'19, weimar, germany, 25-26 september 2019. [in german] [13] f. rosenblatt, the perceptron: a probabilistic model for information storage and organization in the brain, psychological review, 65 (1958), pp. 386-408. doi: 10.1037/h0042519 human-robot collision predictor for flexible assembly acta imeko issn: 2221-870x september 2021, volume 10, number 3, 72 - 80 imre paniti1,2, jános nacsa1,2, péter kovács1, dávid szűr1 1 elkh sztaki, centre of excellence in production informatics and control, kende street 13–17, 1111 budapest, hungary 2 széchenyi
istván egyetem, egyetem square 1, 9026 győr, hungary section: research paper keywords: collaborative robot; human–robot collaboration; virtual reality; collision prediction citation: imre paniti, jános nacsa, péter kovács, dávid szűr, human-robot collision predictor for flexible assembly, acta imeko, vol. 10, no. 3, article 12, september 2021, identifier: imeko-acta-10 (2021)-03-12 section editor: bálint kiss, budapest university of technology and economics, hungary received february 15, 2021; in final form august 16, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by the tkp2020-nka-14 grant and by the h2020 project epic grant no. 739592. corresponding author: imre paniti, e-mail: imre.paniti@sztaki.hu 1. introduction according to the international federation of robotics 2019 report, the average robot density in the manufacturing industry has grown to a new global record of 113 units per 10,000 employees [1]. although the automation of small- and medium-sized enterprises (smes) is supported within the european union according to the european commission's digital economy and society index report 2019 [2], the share of large enterprises that use industrial or service robots is four times higher than that of smes, and the use of robots varies widely with company size. one of the most commonly asked questions in the semi-robotised industry is how to make production more efficient, which is related to a study [3], where robots in an assembly operation could reduce the idle time of an operator by 85 %. therefore, using collaborative robots (cobots) in a factory for assembly tasks could lead to greater efficiency, which means shorter production times.
this statement can also be useful for the assembly of different products or product families, which requires a set of different fixtures or reconfigurable fixtures, such as those based on the parallel kinematic machine in [4] or the fixed but flexibly useable gripper presented in this article. however, the problem is that despite well-defined task sequences, the changeover from one product to another in a collaborative operation could lead to human failures and, consequently, to collisions with the cobot due to the previous habitual sequence of actions. by definition, a cobot has to operate with strict safety installations (protective stop execution when a certain force in a collision is reached), as outlined in iso/ts 15066:2016 [5], iso 10218‑1:2011 [6] and iso 10218‑2:2011 [7], but these protective stops could cause a significant cumulative delay in production. this depends largely on how the robot program has been written, i.e. whether operations can be continued after a protective stop. review articles such as those of hentout et al. [8] and zacharaki et al. [9] present solutions for pre-collision approaches in the frame of human–robot interaction (hri). pre-collision control methods, referred to as 'prevention' methods, are techniques intended to ensure safety during hri by monitoring either the human, the robot or both and modifying robot control parameters prior to incidence of collision or contact [9]. pre-collision approaches can be distinguished between reactive control strategies, proprioceptive sensor-based strategies and exteroceptive sensor-based control [8]. however, these approaches are all manifested in robot control parameter modification rather than in operator warnings. both the above studies refer to the work of carlos morato et al. [10], who presented a similar solution by creating a framework using multiple kinects to generate a 3d model with bounding spheres for human movements in real time. the proposed framework calculates human–robot interference in a 3d space with a physics-based simulation engine. the deficiency of the study is the pre-collision strategy for safe human–robot collaboration because this results in the complete stoppage of the robot. this is indeed a safe protocol, as it reduces the production break time, but it does not eliminate it completely. the aim of this paper is to highlight the importance of a new pre-collision strategy that does not modify the trajectories but relies fully on the warning of the operator (using a non-safety-critical system), especially when flexible/reconfigurable fixtures are used. abstract: the performance of human–robot collaboration can be improved in some assembly tasks when a robot emulates the effective coordination behaviours observed in human teams. however, this close collaboration could cause collisions, resulting in delays in the initial scheduling. besides the commonly used acoustic or visual signals, vibrations from a mobile device can be used to communicate the intention of a collaborative robot (cobot). in this paper, the communication time of a virtual reality and depth camera-based system is presented in which vibration signals are used to alert the user of a probable collision with a ur5 cobot. preliminary tests are carried out on human reaction time and network communication time measurements to achieve an initial picture of the collision predictor system's performance. experimental tests are also presented in an assembly task with a three-finger gripper that functions as a flexible assembly device.
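the interference calculation in bounding-sphere frameworks such as that of morato et al. [10] ultimately reduces to a simple geometric test: two spheres interfere when the distance between their centres is smaller than the sum of their radii. a minimal sketch, with purely illustrative coordinates rather than values from the cited work:

```python
import math

def spheres_interfere(c1, r1, c2, r2, margin=0.0):
    """bounding spheres interfere when the centre distance is smaller than
    the sum of the radii plus an optional safety margin."""
    return math.dist(c1, c2) < r1 + r2 + margin

# illustrative spheres in metres: a human hand envelope and a robot wrist envelope
hand = ((0.40, 0.10, 1.10), 0.15)
wrist_near = ((0.60, 0.10, 1.10), 0.10)  # 0.20 m away, 0.25 m combined radius
wrist_far = ((1.00, 0.10, 1.10), 0.10)   # 0.60 m away

print(spheres_interfere(*hand, *wrist_near))  # True: envelopes overlap
print(spheres_interfere(*hand, *wrist_far))   # False
```

a per-frame check is then a double loop over the human and robot sphere sets; a positive test triggers the operator warning in the approach proposed here, rather than the complete protective stop used in [10].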
Section 2 provides an overview of standards and definitions related to robotic and cobotic systems, especially in relation to the protective separation distance, which is crucial for the proposed solution. Section 3 presents an experimental environment and use cases in which the proposed solution can be used. Section 4 describes the new pre-collision approach and its system elements in detail, together with some communication measurement results to demonstrate the feasibility of the solution. Finally, Section 5 presents a summary with conclusions.

2. Standards and definitions for cobot use

In general, when using a robotic arm with a gripper, the 2006/42/EC Machinery Directive [11] and the 2014/35/EU Low Voltage Directive [12], together with ISO/TS 15066:2016 [5] and 16 further standards, have to be considered [13]. These are detailed in Table 1. According to ISO 10218-1:2011 [6], a collaborative workspace is a space within the operating space where the robot system (including the workpiece) and a human can perform tasks concurrently during production operations, and a collaborative operation is a state in which a purposely designed robot system and an operator work within a collaborative workspace. According to ISO/TS 15066:2016 [5], collaborative operations may include one or more of the following methods:
• a safety-rated monitored stop,
• hand guiding,
• speed and separation monitoring,
• power and force limiting.
In power- and force-limiting operations, physical contact between the robot system (including the workpiece) and an operator can occur either intentionally or unintentionally. Power- and force-limited collaborative operations require robot systems specifically designed for this particular type of operation using built-in measurement units.
According to ISO/TS 15066 [5], risk reduction is achieved, either through inherently safe processes in the robot or through a safety-related control system, by keeping hazards associated with the robot system below threshold limit values, which are determined during the risk assessment. If an operator wants to maintain a safe distance in a collaborative operation, ISO/TS 15066:2016 (Robots and robotic devices – Collaborative robots, clause 5.5.4: speed and separation monitoring) [5], EN ISO 13850:2015 [19], EN ISO 13855:2010 [15], EN IEC 60204-1:2018 [20] and EN IEC 62046:2018 [26] should be applied together with the following regulations and standards: Directive 2006/42/EC [11], EN ISO 10218-1:2011 [6] and EN ISO 10218-2:2011 [7]. In addition, EN ISO 12100:2010 (Safety of machinery – General principles for design – Risk assessment and risk reduction) [18] should be considered. In speed and separation monitoring, the protective separation distance is the shortest permissible distance between any moving hazardous part of the robot system and any human in the collaborative workspace, and this value can be fixed or variable. During automatic operations, the hazardous parts of the robot system should never get closer to the operator than the protective separation distance, which is calculated based on the concepts used to create the minimum distance formula in ISO 13855:2010 [15].
The protective separation distance Sp can be described by formula (1):

Sp(t0) = Sh + Sr + Ss + C + Zd + Zr ,   (1)

where Sp(t0) is the protective separation distance at time t0 (the present or current time); Sh is the contribution to the protective separation distance attributable to the operator's change in location; Sr is the contribution to the protective separation distance attributable to the robot system's reaction time; Ss is the contribution to the protective separation distance due to the robot system's stopping distance; C is the intrusion distance, as defined in ISO 13855, which is the distance that a part of the body can intrude into the sensing field before it is detected; Zd is the position uncertainty of the operator in the collaborative workspace as measured by the presence-sensing device, resulting from the sensing system measurement tolerance; and Zr is the position uncertainty of the robot system, resulting from the accuracy of the robot position measurement system [5].

Table 1. Standards in manufacturing when using a robotic arm with a gripper.

Standard                Ref.
EN ISO 10218-1:2011     [6]
EN ISO 10218-2:2011     [7]
ISO/TR 20218-1:2018     [14]
EN ISO 13855:2010       [15]
EN ISO 13849-1:2015     [16]
EN ISO 13849-2:2012     [17]
EN ISO 12100:2010       [18]
EN ISO 13850:2015       [19]
EN IEC 60204-1:2018     [20]
EN IEC 62061:2005       [21]
EN ISO 11161:2007       [22]
EN ISO 13854:2017       [23]
EN ISO 13857:2019       [24]
EN ISO 14118:2017       [25]
EN IEC 62046:2018       [26]
EN ISO 13851:2019       [27]

Based on this, the authors propose to extend the protective separation distance (1) with an extra distance based on the communication time of a pre-collision system (Spc) and with a contribution attributable to the robot operator's reaction time (Sort), in order to avoid speed reductions or protective stops. This results in a modified protective separation distance Sp*:

Sp* = Sp + Spc + Sort .   (2)

However, the proposed system in this paper is, as has already been mentioned, an additional, non-safety-certified solution. The purpose of the measurements presented in this paper is to determine the above-mentioned time parameters (communication time and reaction time) behind the additional distances (Spc and Sort) in this specific environment.

3. Experimental environment and use cases

Robots are usually moved on prespecified trajectories that are defined in the robot's program, and, in most cases, a new task involves starting a new robot program. Another method is to move the high-level robot control from the robot to a computer, from which the robot continuously receives the required movements and other actions via a stream. In this case, the robot runs a general-purpose program or framework that interprets and executes the external instructions received. In this scenario, the framework is called URSZTAKI, developed by the SZTAKI Research Laboratory for Engineering and Management Intelligence. URSZTAKI has three kinds of instructions: (a) basic instructions that constitute the robot's programming language, (b) instructions for the robot add-ons (e.g. the gripper and force sensor) integrated into the robot language by the accessory suppliers and (c) frequently used, more complex task instructions (e.g. putting down or picking up an object when the table distance is unknown). The third type of instruction constitutes the real feature set of URSZTAKI. It should also be mentioned that the expansion of the UR robot's functions and language is possible with the help of so-called URCaps (a platform where users, distributors and integrators can demonstrate accessories that run successfully in UR robot applications [28]), and currently, URSZTAKI can also be installed as a URCap. The experimental layout consists of a UR10 robot with a force sensor and a two-finger gripper.
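To make the relationship between formulas (1) and (2) concrete, the following minimal Python sketch combines the ISO/TS 15066 terms with the proposed extensions. All numeric values, the speed-times-time approximation of Spc and Sort, and the function names are illustrative assumptions, not data or definitions from the paper.

```python
# Illustrative sketch (not from the paper): combining the ISO/TS 15066
# terms of formula (1) with the proposed extensions of formula (2).
# All numeric values below are made-up placeholders, not measured data.

def protective_separation_distance(sh, sr, ss, c, zd, zr):
    """Sp(t0) = Sh + Sr + Ss + C + Zd + Zr  -- formula (1)."""
    return sh + sr + ss + c + zd + zr

def extended_separation_distance(sp, v_robot_mps, t_comm_s, t_react_s):
    """Sp* = Sp + Spc + Sort  -- formula (2).

    Here Spc and Sort are approximated as the distance travelled by the
    hazardous part at speed v_robot_mps during the pre-collision system's
    communication time and during the operator's reaction time (an
    assumption of this sketch, not a definition from the paper).
    """
    spc = v_robot_mps * t_comm_s
    sort = v_robot_mps * t_react_s
    return sp + spc + sort

sp = protective_separation_distance(sh=0.32, sr=0.10, ss=0.15,
                                    c=0.05, zd=0.04, zr=0.01)  # ~0.67 m
# Using times of the order reported later in the paper:
# ~0.1 s communication time and ~0.45 s operator reaction time.
sp_star = extended_separation_distance(sp, v_robot_mps=1.0,
                                       t_comm_s=0.10, t_react_s=0.45)
print(round(sp_star, 2))  # 1.22
```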
The environment was designed to support different assembly tasks, either fully robotised or collaborative. To equip partly or even fully different components, a universal mounting technology was required instead of special fixtures. Another gripper (with three fingers) was therefore used that allowed a wide variety of fixings. All three fingers of the selected adaptive gripper, which is fixed to the robot worktable (figure 3), can be moved independently. The three-finger gripper from Robotiq [29] has four different modes for operating the fingers (figure 2). In the 'pinch' mode (top left of figure 2), the gripper acts as a two-finger model, and the fingers move closely together to be able to pick up small objects. The next mode is the 'scissor' mode, in which the closing–opening ability of the gripper is used to pick up an object. In the third, 'wide' mode, the fingers are spread fan-like, and they provide a firm, wide grip for longer objects. In the 'normal' grip, the three fingers move in parallel and, depending on the relative position of the object, the fingertips also turn for greater precision in 'normal' and 'wide' mode; this is the encompassing grip. From the software point of view, both grippers can be directly programmed from the robot's program code. Despite the fact that both grippers are from the same manufacturer, which could make the development easier, the commands of one of the grippers had to be modified to avoid conflict between the individual instructions. A typical scenario is that the robotic arm picks up and transfers a part to the fixed gripper, which grabs it; after that, another part is placed or pressed with the desired force by the robotic arm onto the part held by the immobile gripper. There are some detailed tasks, such as the insertion of a spring into a housing, which have to be performed by the human operator.
Figure 1. Demonstration environment.
Figure 2. The four different modes of the three-finger Robotiq gripper [29].

In this environment, it is also possible for the robot to hold a screwdriver and fasten the assembled parts with screws at the set torque limit (figure 3 and figure 4). The prototype was designed specifically for the previously shown push-button element. However, it can easily be redesigned for another part, or a universal piece can be made to support different types of product assembly. Following the parallel movement of the fingers, a form-fitting shape is created that holds the part motionless while the required actions are carried out. Because the holder is connected to the fingertips, slippage is also prevented in cases where the pressing force applied is too great or an inappropriate human movement occurs. The proposed solution with the immobile three-finger gripper satisfies the requirements of a flexible fixture for certain parts. In this scenario, human–robot collision problems might occur if the human operator forgets the predefined assembly task sequence when beginning the assembly of a new product, reaches for an assembly part and the hand trajectory intersects that of the robot. To demonstrate a flexible assembly with the three-finger gripper, an additional application was developed in which both grippers were used to perform the assembly task, requiring human intervention at certain points of the assembly process. In the task, a didactic element was used that had been packaged with a transparent plastic lid and a metal base pushed together at the beginning of the operation (presumably, this packaging material came from the supplier). The operation steps of the complete assembly were the following:
1. Pick up the packaging material with the robot arm and fix the base with the three-finger gripper.
2. Remove the lid from the base and put it down (figure 5).
3. Pick up the didactic element and place it onto the metal base.
4. Put the plastic lid back on the base.
5. Fix the packed object, release the three-finger gripper and put the object back in its starting position.

Inserting the didactic element is the bottleneck in the assembly process. Normally, the robot finds the hole with a small spiral movement using force sensing. Since the gap between the element and the base is narrow, this operation is not always successful (see figure 6), in which case human intervention is possible or necessary to avoid any wastage. In some instances, the next operation (putting the lid back) corrected the skewed didactic element, and it slipped into the base. However, the success of a process should not be based on coincidence, and this is when a collision predictor system can be very useful: an easy movement by the operator can prevent wastage, thereby reducing costs. It is a simple operation sequence, but because of the positioning errors, human intervention may be required during two of the steps.

Figure 3. Illustration of the robotised screwdriving of a push-button element in which the spring has to be inserted manually.
Figure 4. Illustration of the robotised screwdriving of a ball valve element.
Figure 5. Illustration of the second step.
Figure 6. Illustration of the failed insertion of the didactic element.

4. Pre-collision approach as a predictor

In order to avoid collisions with the robot, either the robot trajectory has to be modified in real time (which might cause additional production time, something companies want to avoid) or the human operator has to be warned with a pre-defined, understandable signal so that the human movement can be modified in time. The warning signal can be given to the operator in several ways: visual, acoustic or tactile. In this paper, the latter has been developed as part of a predictor of human–robot collision (PreHuRoCo) framework.
The subject of the prediction in this case is the predetermined movement of the robot, which can be recorded and will occur after a certain time, so a framework similar to that described in [10] had to be created. However, instead of a digital twin of the robot (a real-time 3D visualisation of the robot), a pre-played robot model motion was used together with the 3D skeleton model of the operator. The virtual collisions of the two models were used as trigger signals to warn the operator before a real collision.

4.1. Requirement analysis

The following features were needed for the candidate software library, based on the requirement analysis of PreHuRoCo:
1) Fully open source: the system must fulfil all the security requirements of a real manufacturing system; therefore, complete control of the source code is obligatory.
2) Modular: the system should be divided into various software components, so the candidate software library must support responsibility encapsulation.
3) Distributed: in a manufacturing system, many computers and internet-of-things (IoT) devices can be connected; therefore, the PreHuRoCo software components must have the ability to run on different computers or IoT devices.
4) Cross-platform: as the distribution requirement is for many computers and devices with different operating systems to be connected, the candidate framework should be cross-platform.
5) Programming language variability: as the distribution and cross-platform requirements are for different devices and computer operating systems in manufacturing scenarios, the candidate software library should support different application programming interfaces (APIs).
6) Scalability: PreHuRoCo software components should be developed independently of whether they run on the same computer or not. In terms of performance, the software components should be easy to put together in one machine or one application and easy to distribute.
7) Rapid prototyping: the candidate framework should provide examples or even pre-made components that can be improved during PreHuRoCo implementation, because the proposed system should deal with
• rigid-body simulation,
• visualisation (including VR or AR),
• real-time 3D scanning,
• the X3D model format and
• various communication protocols.

Unity Engine [30] and Unreal Engine [31] are well-known cross-platform game engines. ApertusVR [32] is a software- and hardware-vendor-free, open-source software library. It offers a no-vendor-lock-in approach for integrating VR technologies into industrial software systems. The comparison of the candidate frameworks against the requirements is summarised in Table 2. Based on the PreHuRoCo requirement analysis, the ApertusVR software library was chosen for implementing the system. With the help of this software library, a distributed software ecosystem was created via the intranet/internet, which is divided into two main parts, the core and the plugins. The core system is responsible for the internet/intranet communication between the elements of the distributed software ecosystem, and it synchronises the information between them during the session. The plugin mechanism makes it possible to extend the capability of any solution created with the ApertusVR library. Plugins can access and manipulate the information within the core system.

4.2. Explanation of the PreHuRoCo system

The system is distributed into five major responsibilities:
1) 3D scanning of the human operator,
2) streaming the joint angles of the robot,
3) collision detection between the human and the robot,
4) alerting the human to the possible collision and
5) visualising the whole scenario.
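The responsibilities listed above are encapsulated in plugins that can be spread across several machines or collapsed onto one. A minimal Python sketch of such a host-to-plugin deployment map follows; the host names and the helper function are hypothetical illustrations, not ApertusVR's actual configuration format.

```python
# Hypothetical sketch of a PreHuRoCo-style deployment map: which
# responsibility (plugin) runs on which host. Host names and the helper
# below are illustrative, not ApertusVR's real configuration format.

DEPLOYMENT = {
    "hpc-server": ["collisionDetection", "visualisation"],
    "kinect-pc":  ["kinect"],
    "robot-pc":   ["x3dLoader", "nodeJs"],
    "gateway":    ["webSocketServer"],
}

def hosts_running(plugin_name):
    """Return every host on which a given plugin is deployed."""
    return [host for host, plugins in DEPLOYMENT.items()
            if plugin_name in plugins]

# Reconfiguring into a single application (as done later during
# validation) is just collapsing the map onto one host:
single_host = {"hpc-server": [p for ps in DEPLOYMENT.values() for p in ps]}

print(hosts_running("kinect"))         # ['kinect-pc']
print(len(single_host["hpc-server"]))  # 6
```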
In the present study, these responsibilities were implemented with the help of the ApertusVR library and encapsulated into six plugins [33]: the collision detection plugin, the visualisation plugin, the Kinect plugin, the websocket server plugin, the X3D loader plugin and the NodeJS plugin. The seventh element was a websocket client, which was implemented in the form of an HTML site using the jQuery JavaScript library and the Vibration API method [34] for mobile phones; for more comfortable use, the websocket client could also run on a smart watch. Figure 7 shows the realised system with the connections and applied protocols in an experimental set-up with a UR5 robot.

Collision detection plugin [35]: this plugin was created based on the pre-made ApertusVR 'bulletPhysics' plugin. Previously, this plugin had been able to run rigid-body simulations, but collision events were not created during these simulations. The ApertusVR rigid-body abstraction was therefore enhanced with the functionality of collision events.

Visualisation plugin [36]: this plugin was used as-is from the ApertusVR repository for visualisation purposes.

Kinect plugin [37]: this plugin was created based on the pre-made ApertusVR 'kinect' plugin. Previously, this plugin had been able to create the skeleton of the tracked human or even its point cloud, but rigid bodies were not created. For collision detection, rigid bodies are mandatory; therefore, rigid bodies were created based on the geometries of the human skeletons.

Table 2. Comparison of different frameworks in relation to the PreHuRoCo requirements.

Requirement               Unity Engine   Unreal Engine   ApertusVR
Open source               Partially      Yes             Yes
Modular                   Yes            Yes             Yes
Distributed               Partially      Corner case     Yes
Cross-platform            Yes            Partially       Partially
Prog. lang. variability   Partially      Corner case     Yes
Scalability               Partially      Partially       Yes
Rapid prototyping         Yes            Yes             Yes
Websocket server plugin [38]: this plugin was created based on the pre-made ApertusVR 'webSocketServer' plugin. Previously, this plugin had been able to forward all events raised in the core. For collision detection, only the collision events of the rigid bodies are necessary. During the implementation of this plugin, a filter feature was therefore added to forward only the desired events into the websocket connection.

X3D loader plugin [39]: this plugin was created based on the pre-made ApertusVR 'x3dLoader' plugin. Previously, this plugin had been able to parse the X3D format and create only the geometries of the robot. For collision detection, rigid bodies are mandatory; therefore, rigid bodies were created based on the parsed geometries.

NodeJS plugin [40]: this plugin was used as-is from the ApertusVR repository and allows a web server to be run to receive the joint angles of the UR5 robot via HTTP requests.

In the PreHuRoCo system, these plugins are encapsulated in different applications, which can be run on different computers to distribute the computational load and achieve real-time collision prediction. As the diagram in Figure 7 shows, these applications communicate over the internet/intranet via different protocols. The collision detection application has to be run on a high-performance computing (HPC) server to process the virtual collisions in real time. The Kinect application can run on a dedicated computer for the Kinect device or on the same computer that calculates the virtual collisions. The X3D loader and the NodeJS plugins are integrated into one application and can run on the computer dedicated to the UR5 robot. The websocket server application can also be run on a different computer to satisfy security and locality requirements. The joint positions are stored in a JSONList file, which is generated by executing the whole robot program.
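The event-filtering idea added to the websocket server plugin can be sketched as follows; the event names and dictionary shapes are hypothetical stand-ins, not the real ApertusVR event API.

```python
# Minimal sketch (hypothetical, not ApertusVR's actual API) of the
# event-filter idea added to the websocket server plugin: the core
# raises many event types, but only rigid-body collision events are
# forwarded to the websocket clients.

import json

FORWARDED_EVENTS = {"RIGIDBODY_COLLISION"}

def filter_events(core_events):
    """Keep only the events that websocket clients should receive."""
    return [e for e in core_events if e["type"] in FORWARDED_EVENTS]

core_events = [
    {"type": "GEOMETRY_CREATED", "node": "ur5_link3"},
    {"type": "RIGIDBODY_COLLISION", "a": "operator_hand", "b": "ur5_link3"},
    {"type": "NODE_MOVED", "node": "operator_hand"},
]

for event in filter_events(core_events):
    # In the real system this string would be pushed over the
    # websocket connection to the mobile client.
    print(json.dumps(event))
```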
During the execution, the joint positions are 'grabbed' and saved with a given frequency. The speed of the simulation is equal to the speed of the robot movement, and the 'forecast' horizon is determined by the delay between the simulation starting time and the real robot execution start time.

4.3. Modified PreHuRoCo system and measurements

During the validation process, the PreHuRoCo system was reconfigured to eliminate any unnecessary delay in the system. The reconfiguration was achieved through the ApertusVR configuration feature; thus, all the plugins were reused without any modification. The previously distributed PreHuRoCo system was therefore easily reconfigured to form a single application (figure 8) able to run on a single computer. The elimination of unnecessary network connections/delays was a crucial step in avoiding any latency in the system. Through this approach, the human–robot collision calculation time and the human-operator reaction time were measured precisely. Timestamps were buffered before and after the collision events, the websocket message transmission/receipt and the human operator pressing a button on a Bluetooth keyboard. The proposed framework was tested on two local network topologies. In the first case, the calculations were divided between a cloud-service-based computer (with four virtual CPUs and 8 GB RAM, running a Windows 10 operating system) and an HPC server (an Ideum with an Intel i7-8700, an RTX 2080 8 GB GDDR6 NVIDIA graphics card, dual 250 GB NVMe M.2 SSDs and 32 GB 2400 MHz DDR4 RAM, running a Windows 10 operating system), and the collision events were delivered to the websocket client with a significant delay. By running all ApertusVR plugins on the Ideum and sending only the collision events via a wireless LAN connection (2.4 GHz Wi-Fi), the user experience was quasi real-time.
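The pre-played motion scheme, joint positions sampled at a fixed frequency with the simulation running a chosen delay ahead of the real robot, can be sketched as follows. The trajectory, sample period and delay below are made-up illustrative values, not the recorded UR5 data.

```python
# Sketch (under stated assumptions) of the pre-played motion idea:
# the simulated robot runs DELTA_T seconds ahead of the real robot,
# so a virtual collision at simulation time t warns the operator
# DELTA_T seconds before the real robot reaches that configuration.
# The trajectory below is a made-up stand-in for the JSONList file.

SAMPLE_PERIOD = 0.1   # s, "grab" frequency of the joint positions
DELTA_T = 1.5         # s, lead of the simulation over the real robot

# Recorded joint positions (one 6-axis sample per SAMPLE_PERIOD):
trajectory = [[0.0, -1.57, 0.0, -1.57, 0.0, 0.1 * i] for i in range(50)]

def simulated_pose(t):
    """Pose the simulated (pre-played) robot shows at wall-clock time t."""
    return trajectory[min(int(t / SAMPLE_PERIOD), len(trajectory) - 1)]

def real_pose(t):
    """Pose the real robot reaches, running DELTA_T behind the simulation."""
    return simulated_pose(max(t - DELTA_T, 0.0))

# A virtual collision detected at simulation time t corresponds to the
# real robot configuration at t + DELTA_T, i.e. DELTA_T of warning time.
assert simulated_pose(2.0) == real_pose(2.0 + DELTA_T)
print("warning lead time:", DELTA_T, "s")
```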
Figure 9 shows a virtual collision test running on the Ideum (HPC server) with the skeleton model of a single operator (1), a virtual UR5 robot movement simulation (2), a real robot (3), a Kinect sensor (4) and a mobile phone (5) with an Android operating system running the websocket client to vibrate the device. The 3D scene was visualised with a top camera view, but arbitrary camera views are possible. To avoid the execution of large JavaScript files locally on the Android mobile phone, external calls to cdn.jsdelivr.net and code.jquery.com were used. The ping times to these services were measured with an Android application (PingTools, version 4.52), which gave averages over three measurements of 9 ms and 30 ms, respectively.

Figure 7. PreHuRoCo system elements and connections with the applied protocols.
Figure 8. Reconfigured PreHuRoCo system.

The second network topology was used to measure the communication time of the system with five more people of different genders and ages (see figure 10). The reaction time of each operator was measured using an Android application (Reaction Test, version 1.3), which vibrates at randomised short time intervals (a couple of seconds) and calculates the average of five measurements. The average calculation time from the human–robot model collision until the HTTP request was sent was 98 ms, the average time from the HTTP request being sent to the keypress event was 1,355 ms and the average reaction time was 449 ms. Each virtual collision with a keyboard press as confirmation was tested three times. According to a Bluetooth keyboard performance test, 'microsoft delays in a non-interference test environment by approximately 40 to 200 milliseconds' [41], so the calculation time for the human–robot collision together with the network communication time would be less than 1 s using this PreHuRoCo configuration.
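The timing figures reported above can be combined into a rough budget. The worked example below uses the measured averages from the text and the cited keyboard latency range [41]; subtracting the human contributions from the request-to-keypress interval is this sketch's own reading of how the 'less than 1 s' figure follows.

```python
# Worked example of the timing budget reported above. The keyboard
# latency range is taken from the cited Bluetooth keyboard test [41];
# everything else uses the averages measured in this configuration.

t_collision_to_http = 98    # ms, collision detected -> HTTP request sent
t_http_to_keypress = 1355   # ms, HTTP request sent -> keypress registered
t_reaction = 449            # ms, operator reaction time (vibration test)
t_keyboard = (40, 200)      # ms, Bluetooth keyboard delay range [41]

# Removing the human parts (reaction time + keyboard delay) from the
# request-to-keypress interval leaves the network/transport share:
t_transport = [t_http_to_keypress - t_reaction - k for k in t_keyboard]
t_warning = [t_collision_to_http + t for t in t_transport]

print("transport estimate:", t_transport, "ms")  # [866, 706]
print("warning latency:", t_warning, "ms")       # [964, 804]
assert all(t < 1000 for t in t_warning)          # consistent with "< 1 s"
```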
However, by using RakNet instead of HTTP requests, the performance of the system can be significantly improved: RakNet communication time measurements from 223 collision events showed that only 36.52 ms was needed on average. Furthermore, it is worth mentioning that with 5G communication, an average two-way latency of 1.26 ± 0.01 ms would be possible, as noted in [42]. The Kinect plugin creates a simplified skeleton model of the human operator, which needs improvement; an anthropomorphic skeleton model or voxelisation could be a solution in the future. It should be highlighted that the communication time increased by the human reaction time should not exceed the Δt time between the pre-played simulated motion and the actual motion of the robot. A JSONList file of the simulated UR5 robot movement is provided in [43].

5. Conclusion

In this paper, a commercially available gripper used as a flexible fixture for assembly and a new pre-collision approach used as a predictor for human–robot collaboration were presented. The proposed framework was realised with the help of a modular, distributed, open-source, cross-platform library (ApertusVR) with support for different programming APIs and with scalability solutions. Seven interconnected system modules were developed with the goal of monitoring the movement of the human operator in 3D space, calculating collisions with a virtual robot (with pre-played movements rather than the movement of the real robot) and alerting the human operator before a real collision could occur. Successful virtual collision tests with six candidates showed that the operator received the warning signal quickly enough (under 1 s), in the form of a mobile-device vibration, to modify the planned movement. In some cases, real-time path planning is required, especially in a changing environment, such as when the position of the workpiece to be gripped is variable (e.g. litter picking).
In a collaborative environment, this is a serious security challenge that the whole system has to manage. The static parts of the environment can be checked regularly through collision detection, but the presence of the human means that 'simple' collision detection is not sufficient. This was the main reason for the research and development presented in this paper.

Figure 9. Virtual collision test.
Figure 10. Virtual collision measurements with five additional candidates.

Acknowledgement

This research has been supported by the 'Thematic Excellence Program – National Challenges Subprogram – Establishment of the Center of Excellence for Autonomous Transport Systems at Széchenyi István University (TKP2020-NKA-14)' project and by the European Commission through the H2020 project EPIC (https://www.centre-epic.eu/) under grant no. 739592.

References

[1] ifr press releases. online [accessed 16 august 2021] https://ifr.org/ifr-press-releases/news/robot-race-the-worlds-top-10-automated-countries
[2] european commission, digital economy and society index report 2019, integration of digital technology. online [accessed 16 august 2021] https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=59979
[3] j. shah, j. wiken, b. williams, c. breazeal, improved human-robot team performance using chaski, a human-inspired plan execution system, proc. of the 6th int. conf. on human-robot interaction, lausanne, switzerland, 8-11 march 2011, pp. 29-36. doi: 10.1145/1957656.1957668
[4] t. gaspar, b. ridge, r. bevec, m. bem, i. kovač, a. ude, z.
gosar, rapid hardware and software reconfiguration in a robotic workcell, proc. of the 18th ieee int. conf. on advanced robotics (icar), hong kong, china, 10-12 july 2017, pp. 229-236. doi: 10.1109/icar.2017.8023523
[5] iso/ts 15066:2016, robots and robotic devices – collaborative robots. online [accessed 16 august 2021] https://www.iso.org/standard/62996.html
[6] iso 10218-1:2011, robots and robotic devices – safety requirements for industrial robots – part 1: robots. online [accessed 16 august 2021] https://www.iso.org/standard/51330.html
[7] iso 10218-2:2011, robots and robotic devices – safety requirements for industrial robots – part 2: robot systems and integration. online [accessed 16 august 2021] https://www.iso.org/standard/41571.html
[8] a. hentout, m. aouache, a. maoudj, i. akli, human–robot interaction in industrial collaborative robotics: a literature review of the decade 2008–2017, advanced robotics 33 (2019) pp. 764-799. doi: 10.1080/01691864.2019.1636714
[9] a. zacharaki, i. kostavelis, a. gasteratos, i. dokas, safety bounds in human robot interaction: a survey, safety science 127 (2020) 104667. doi: 10.1016/j.ssci.2020.104667
[10] c. morato, k. kaipa, b. zhao, s. k. gupta, safe human robot interaction by using exteroceptive sensing based human modeling, proc. of the asme 2013 international design engineering technical conferences and computers and information in engineering conference, volume 2a: 33rd computers and information in engineering conference, portland, oregon, usa, 4-7 august 2013, 10 pp. doi: 10.1115/detc2013-13351
[11] european commission, 2006/42/ec machinery directive. online [accessed 16 august 2021] https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex%3a32006l0042
[12] european commission, 2014/35/eu low voltage directive. online [accessed 16 august 2021] https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex:32014l0035
[13] covr project database for directives and standards.
online [accessed 16 august 2021] https://www.safearoundrobots.com/toolkit/documentfinder
[14] iso/tr 20218-1:2018 robotics – safety design for industrial robot systems – part 1: end-effectors. online [accessed 16 august 2021] https://www.iso.org/standard/69488.html
[15] en iso 13855:2010 safety of machinery – positioning of safeguards with respect to the approach speeds of parts of the human body. online [accessed 16 august 2021] https://www.iso.org/standard/42845.html
[16] en iso 13849-1:2015 safety of machinery – safety-related parts of control systems – part 1: general principles for design. online [accessed 16 august 2021] https://www.iso.org/standard/69883.html
[17] en iso 13849-2:2012 safety of machinery – safety-related parts of control systems – part 2: validation. online [accessed 16 august 2021] https://www.iso.org/standard/53640.html
[18] en iso 12100:2010 safety of machinery – general principles for design – risk assessment and risk reduction. online [accessed 16 august 2021] https://www.iso.org/standard/51528.html
[19] en iso 13850:2015 safety of machinery – emergency stop function – principles for design. online [accessed 16 august 2021] https://www.iso.org/standard/59970.html
[20] en iec 60204-1:2018 safety of machinery – electrical equipment of machines – part 1: general requirements. online [accessed 16 august 2021] https://standards.iteh.ai/catalog/standards/sist/e7d3ec34-16ab-476d-b979-1de5762a3ed7/sist-en-60204-1-2018
[21] en iec 62061:2005 safety of machinery – functional safety of safety-related electrical, electronic and programmable electronic control systems. online [accessed 16 august 2021] https://standards.iteh.ai/catalog/standards/sist/4c933a51-d926-457b-b3da-4bfaef9908ac/sist-en-62061-2005
[22] en iso 11161:2007 safety of machinery – integrated manufacturing systems – basic requirements. online [accessed 16 august 2021] https://www.iso.org/standard/35996.html
[23] en iso 13854:2017 safety of machinery – minimum gaps to avoid crushing of parts of the human body.
online [accessed 16 august 2021] https://www.iso.org/standard/66459.html [24] en iso 13857:2019 safety of machinery safety distances to prevent hazard zones being reached by upper and lower limbs. online [accessed 16 august 2021] https://www.iso.org/standard/69569.html [25] en iso 14118:2017 safety of machinery prevention of unexpected start-up. online [accessed 16 august 2021] https://www.iso.org/standard/66460.html [26] en iec 62046:2018 safety of machinery application of protective equipment to detect the presence of persons. online [accessed 16 august 2021] https://standards.iteh.ai/catalog/standards/sist/b62f0bb2-9011413a-a717-caf55f66f289/sist-en-iec-62046-2018 [27] en iso 13851:2019 safety of machinery two-hand control devices principles for design and selection. online [accessed 16 august 2021] https://www.iso.org/standard/70295.html [28] universal robots, urcap software platform of universal robots. online [accessed 16 august 2021] https://www.universal-robots.com/ [29] robotiq website. online [accessed 16 august 2021] www.robotiq.com [30] unity, cross-platform game engine. online [accessed 16 august 2021] https://unity.com [31] unreal engine, cross-platform game engine. online [accessed 16 august 2021] https://www.unrealengine.com/en-us/ [32] apertusvr documentation, gitbook. online [accessed 16 august 2021] https://apertus.gitbook.io/vr/ [33] prehuroco sample files on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/sam ples/collisiondetection [34] vibration api (second edition), w3c recommendation, 18 october 2016. online [accessed 16 august 2021] https://www.w3.org/tr/vibration/ [35] collision detection plugin, apertusvr on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/physics/bulletphysics [36] visualisation plugin, apertusvr on github. 
online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/render/ogrerender [37] kinect plugin, apertusvr on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/track/body/kinect [38] websocket server plugin, apertusvr on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/languageapi/websocketserver https://doi.org/10.1109/icar.2017.8023523 https://www.iso.org/standard/62996.html https://www.iso.org/standard/51330.html https://www.iso.org/standard/41571.html https://doi.org/10.1080/01691864.2019.1636714 https://doi.org/10.1016/j.ssci.2020.104667 https://doi.org/10.1115/detc2013-13351 https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex%3a32006l0042 https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex%3a32006l0042 https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex:32014l0035 https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex:32014l0035 https://www.safearoundrobots.com/toolkit/documentfinder https://www.iso.org/standard/69488.html https://www.iso.org/standard/42845.html https://www.iso.org/standard/69883.html https://www.iso.org/standard/53640.html https://www.iso.org/standard/51528.html https://www.iso.org/standard/59970.html https://standards.iteh.ai/catalog/standards/sist/e7d3ec34-16ab-476d-b979-1de5762a3ed7/sist-en-60204-1-2018 https://standards.iteh.ai/catalog/standards/sist/e7d3ec34-16ab-476d-b979-1de5762a3ed7/sist-en-60204-1-2018 https://standards.iteh.ai/catalog/standards/sist/4c933a51-d926-457b-b3da-4bfaef9908ac/sist-en-62061-2005 https://standards.iteh.ai/catalog/standards/sist/4c933a51-d926-457b-b3da-4bfaef9908ac/sist-en-62061-2005 https://www.iso.org/standard/35996.html https://www.iso.org/standard/66459.html https://www.iso.org/standard/69569.html https://www.iso.org/standard/66460.html https://standards.iteh.ai/catalog/standards/sist/b62f0bb2-9011-413a-a717-caf55f66f289/sist-en-iec-62046-2018 
https://standards.iteh.ai/catalog/standards/sist/b62f0bb2-9011-413a-a717-caf55f66f289/sist-en-iec-62046-2018 https://www.iso.org/standard/70295.html https://www.universal-robots.com/ http://www.robotiq.com/ https://unity.com/ https://www.unrealengine.com/en-us/ https://apertus.gitbook.io/vr/ https://github.com/mtasztaki/apertusvr/tree/0.9.1/samples/collisiondetection https://github.com/mtasztaki/apertusvr/tree/0.9.1/samples/collisiondetection https://www.w3.org/tr/vibration/ https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/physics/bulletphysics https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/physics/bulletphysics https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/render/ogrerender https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/render/ogrerender https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/track/body/kinect https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/track/body/kinect https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/websocketserver https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/websocketserver acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 80 [39] x3d loader plugin, apertusvr on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/languageapi/jsapi/nodejsplugin/js/plugins/x3dloader [40] nodejs plugin, apertusvr on github. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugi ns/languageapi/jsapi/nodejsplugin [41] bluetooth keyboard performance test. online [accessed 16 august 2021] http://www.technical-direct.com/en/bluetooth-keyboardperformance-test/ [42] interactivity test: examples from real 5g networks (part 3) . 
online [accessed 16 august 2021] https://www.rohde-schwarz.com/us/solutions/test-andmeasurement/mobile-network-testing/stories-insights/articleinteractivity-test-examples-from-real-5g-networks-part-3_253380.html [43] jsonlist file of the simulated ur5 robot movement. online [accessed 16 august 2021] https://github.com/mtasztaki/apertusvr/blob/89aefbc9 b2a0e7524092b87d728ad539cfc0a856/plugins/languageapi/jsa pi/nodejsplugin/js/plugins/httpsimulator/ur5.jsonlist https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin/js/plugins/x3dloader https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin/js/plugins/x3dloader https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin http://www.technical-direct.com/en/bluetooth-keyboard-performance-test/ http://www.technical-direct.com/en/bluetooth-keyboard-performance-test/ https://www.rohde-schwarz.com/us/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-examples-from-real-5g-networks-part-3-_253380.html https://www.rohde-schwarz.com/us/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-examples-from-real-5g-networks-part-3-_253380.html https://www.rohde-schwarz.com/us/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-examples-from-real-5g-networks-part-3-_253380.html https://www.rohde-schwarz.com/us/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-examples-from-real-5g-networks-part-3-_253380.html https://github.com/mtasztaki/apertusvr/blob/89aefbc9b2a0e7524092b87d728ad539cfc0a856/plugins/languageapi/jsapi/nodejsplugin/js/plugins/httpsimulator/ur5.jsonlist 
non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis
acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-6
acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1
nina simbirtseva1,2, pavel v. sedyshev1, saltanat mazhen1,2, almat yergashov1,2, andrei yu. dmitriev1, irina a. saprykina3, roman a. mimokhod3
1 frank laboratory of neutron physics, joint institute for nuclear research, dubna, russia
2 institute of nuclear physics, almaty, 050032, the republic of kazakhstan
3 institute of archaeology of the russian academy of sciences, moscow, russia
section: research paper
keywords: neutron resonance capture analysis; non-destructive neutron analysis; xrf analysis
citation: nina simbirtseva, pavel v. sedyshev, saltanat mazhen, almat yergashov, andrei yu. dmitriev, irina a. saprykina, roman a. mimokhod, non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis, acta imeko, vol. 11, no.
3, article 20, september 2022, identifier: imeko-acta-11 (2022)-03-20
section editor: francesco lamonaca, university of calabria, italy
received march 5, 2021; in final form august 31, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: nina v. simbirtseva, e-mail: simbirtseva@jinr.ru
1. introduction
neutron resonance capture analysis (nrca) is known as a technique for the non-destructive investigation of various objects, including archaeological artifacts and objects of cultural heritage. the induced activity in experiments with bronze artefacts is practically absent, as was evaluated in several works, one of which can be found in ref. [1]. the method relies on the registration of neutron resonances in radiative capture and the measurement of the yield of reaction products in these resonances. the energy positions of the resonances give information about the isotopic and elemental composition of an object. the area under a resonance can be used to calculate the number of nuclei of the corresponding element or isotope. nrca is applied in various studies at different institutes and sources, such as the gelina pulsed neutron source of the institute for reference materials and measurements of the joint research centre (geel, belgium) [2], the isis pulsed neutron and muon source in the united kingdom [3] and the j-parc pulsed neutron source in japan [4]. at present, nrca is also used at the frank laboratory of neutron physics (flnp) of the joint institute for nuclear research (jinr), dubna, russia [5]. the experiments are carried out at the intense resonance neutron source (iren) facility [6]-[8] with a multi-sectional liquid scintillator detector (210 liters), which is used for the registration of prompt gamma-quanta [9].
one of these experiments at the iren facility was carried out on a kyathos, which was transferred by the institute of archaeology of the russian academy of sciences.
abstract
the method of neutron resonance capture analysis (nrca) is currently being developed at the frank laboratory of neutron physics. the analysis determines the elemental and isotopic composition of objects non-destructively, which makes it a suitable measurement tool for artefacts, since no sampling is required. nrca is based on the registration of neutron resonances in radiative capture and the measurement of the yield of reaction products in these resonances. the potential of nrca at the intense resonance neutron source facility is demonstrated by the investigation of a kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula. in addition, x-ray fluorescence (xrf) analysis was applied to the same archaeological object. the elemental compositions determined by nrca and by xrf are in agreement.
2. the kyathos
in 2016-2018 a sochi expedition group of the institute of archaeology of the russian academy of sciences (ia ras) under the leadership of roman a. mimokhod [10] conducted excavations of the soil necropolis volna 1 of an antique town on the taman peninsula (figure 1). the necropolis is dated from the middle / second quarter of the 6th century bc to the beginning of the 3rd century bc; the main period of its use dates back to the second half of the 6th-5th centuries bc.
the burials of the volna 1 necropolis were presumably left by the greek and barbarian population; the earliest burials may have been left by settlers who arrived from the territory of magna graecia (children's burials in amphorae, burials in the "rider's pose", etc.). the burial ground volna 1 is an important monument for studying the problem of greek-barbarian relations in the borderland of the northern black sea region, the clash and interaction of two ethnoculturally different layers of the population. the social structure and the economic and political position of the first greek colonies on the territory of the bosporus depended on the nature of these contacts, their duration and strength, and the development of adaptive mechanisms. this interaction process was extremely complex, multifaceted and contradictory. on the one hand, with the emergence of the greek colonies, confrontational relations with the local population inevitably developed; on the other hand, the greek colonists and the local population influenced each other. this process was superimposed on the specificity and uniqueness of the northern black sea region, where societies of fundamentally different economic structures and levels of development of social, political and economic life were in contact for a long time. more than 2,000 burials have been uncovered within the boundaries of the necropolis. anthropological material, representative and significant for the history and archaeology of the northern black sea region, was obtained. the collection of items obtained during the excavations is represented by ceramic materials, including the production of ceramic centers of ancient greece (container amphorae, kiliks, skyphoi, drinking bowls, lekythoi, askoi, etc.), phoenician glass, weapons (spears, swords, arrows), protective armour (full armour, a helmet of the corinthian type), jewellery (bronze, silver and gold earrings and rings, etc.), coins and other categories of burial objects.
all this testifies to the fact that the volna 1 burial ground is a city necropolis, which places it in the category of the most prestigious necropolises of the bosporan kingdom. the necropolis was associated with the settlement of the same name, which adjoins it from the north. in the settlement, which existed from the pre-greek period to the 3rd century bc, systems of urban planning and stone house-building were studied. that it is not a rural settlement but a polis is evidenced by a number of prestigious finds, including, for example, a ceramic mask, which most likely indicates the presence of a theater in the city. in the burials, objects rare for the territory of the northern black sea region were found: a bronze prosthesis with a wooden support structure for a leg, an iron plate armour, a bronze corinthian helmet of the "hermione" type, musical instruments (a cithara, a lyre), a wreath on a gilded bone base with bronze petals and gold beads, as well as a series of kiathoi, ancient greek vessels for pouring wine [11]. in total, there are 17 burials from the materials of the excavations of ia ras in which bronze kiathoi or their fragments were found. their planigraphy is illustrative: burials with these items are located in the early section of the necropolis, in its northwestern part (figure 2).
figure 1. location of the volna 1 necropolis.
figure 2. the scheme of the necropolis of the 2017-2018 excavations and the location of the burials with kiathoi.
in the excavations of the ia ras in 2016, in the area located to the south, kiathoi were not found. burial 656, in which the item considered in this article was found (figure 3, 1), was a paired one. one skeleton belonged to a man of 45-55 years, the second to a woman of 20-25 years. the burial was made in a box of mud blocks. the dead were laid on a kline, a wooden bed, from the legs of which characteristic recesses remained in the grave.
the burial had pronounced military attributes. fragments of an iron sword were found near the male skeleton, and an accompanying burial of a horse was made next to the pit. the kyathos was found in the filling of the burial pit; its original position is unclear. however, the grave contains vessels that are clearly associated with wine drinking: a skyphos (figure 3, 3) and a kilik (figure 3, 4). the kyathos, as an item for scooping and pouring wine, complements this set. the volume of the scoop was 0.045 liters, that is, a quarter of a sextarius. in contrast to various similar finds made of clay (their production from clay began at the end of the 6th century bc), the kiathoi found on the territory of the necropolis volna 1 were made of metal, which moves them into the category of special objects. presumably, these kiathoi belong to the greek imports that entered the northern black sea region along with the greek colonists.
3. nrca experiment
the investigations were carried out at the iren facility, whose main part is a linear electron accelerator. the facility parameters were as follows: the average energy of the electrons was ~60 mev, the peak current ~1.5 a, the width of the electron pulse ~100 ns, and the repetition rate 25 hz. the neutron-producing target is made of a tungsten-based alloy and represents a cylinder 40 mm in diameter and 100 mm in height placed within an aluminum can 160 mm in diameter and 200 mm in height. distilled water is circulated inside the can, providing target cooling and neutron moderation. the water layer thickness in the radial direction is 50 mm. the total neutron yield was about 3·10^11 s^-1. the measurements were carried out at the 58.6 m flight path of the 3rd channel of iren. the big liquid scintillator detector was used for the registration of γ-quanta, with the sample placed inside the detector. the neutron flux was permanently monitored by an snm-17 neutron counter.
the signals from the detector and the monitor counter were simultaneously fed to two independent inputs of a time-to-digital converter (tdc). the measurements with the sample lasted about 136 hours. the resonance energies were determined according to the formula

E = 5227 L^2 / t^2 ,  (1)

where t is the time of flight in microseconds, L the flight path in meters, and E the kinetic energy of the neutron in ev. the resonances of silver, tin, copper and arsenic were identified in the time-of-flight spectrum (figure 4, figure 5) [12], [13].
figure 3. volna 1. items from burial 656: 1 – kyathos, 2 – bowl, 3 – skyphos, 4 – kilik, 5 – lekythos.
figure 4. part of the time-of-flight spectrum of (n,γ) reactions on the kyathos material.
figure 5. part of the time-of-flight spectrum of (n,γ) reactions on the kyathos material.
measurements with standard samples of the identified elements were made in addition to the measurement with the investigated sample. parts of the time-of-flight spectra of (n,γ) reactions on the material of the standard silver, tin, copper and arsenic samples are shown in figure 6 to figure 9.
4. data analysis and results
five resonances of tin, two resonances of copper, one resonance of silver and one resonance of arsenic were selected during the analysis of the experimental data. only resonances that are well resolved, without overlapping, with sufficient statistics and with unambiguously known parameters were analyzed. the sum of the detector counts in a resonance is expressed by the formula

∑N = f(E_0) · S · t · ε_γ · (Γ_γ / Γ) · A .  (2)

here, f(E_0) is the neutron flux density at the resonance energy E_0, S the sample area, t the measuring time, ε_γ the detection efficiency for radiative capture, and Γ_γ, Γ the radiative and total resonance widths.

A = ∫_{E_1}^{E_2} [1 − T(E)] dE  (3)

is the resonance area on the transmission curve, where E_1, E_2 are the initial and final values of the energy range near the resonance.
T(E) = e^{−n σ(E)}  (4)

is the energy dependence of the neutron transmission of the sample, where σ(E) is the total cross section at this energy with doppler broadening and n is the number of isotope nuclei per unit area. the value A was determined from the experimental data for the investigated sample by the formula

A_x = (∑N_x · M_s · S_s) / (∑N_s · M_x · S_x) · A_s .  (5)

here, ∑N_x, ∑N_s are the counts under the resonance peak for the investigated and standard samples, S_x, S_s the areas of the investigated and standard samples, and M_x, M_s the numbers of monitor counts during the measurements of the investigated and standard samples. we used a program written according to the algorithm given in [14] for the calculation of A_s (the resonance area on the transmission curve of the standard sample) and n_x (the number of isotope nuclei per unit area of the investigated sample). this procedure is schematically shown in figure 10. the A_s value was calculated by means of the known resonance parameters and the n_x parameter of the standard sample; the n_x value of the investigated sample was then determined from its A_x value.
figure 6. part of the time-of-flight spectrum of (n,γ) reactions of the standard silver sample.
figure 7. part of the time-of-flight spectrum of (n,γ) reactions of the standard tin sample.
figure 8. part of the time-of-flight spectrum of (n,γ) reactions of the standard copper sample.
figure 9. part of the time-of-flight spectrum of (n,γ) reactions of the standard arsenic sample.
the first measurement with the kyathos [16] had not shown satisfactory results for copper and arsenic. it was decided to repeat the measurement, taking into account the features of the archaeological object and of the neutron flux. the kyathos is a relatively long object (about 16 cm) and has an uneven thickness distribution (a handle and a bucket).
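before turning to the measurement strategy, the computation chain of equations (1), (4) and (5) can be illustrated with a minimal numerical sketch. this is a hypothetical python illustration written for this article; the function names and the sample values are ours and do not come from the authors' analysis program.

```python
import math

def neutron_energy_ev(t_us: float, flight_path_m: float) -> float:
    """eq. (1): neutron kinetic energy in ev from the time of flight t
    (microseconds) and the flight path L (meters)."""
    return 5227.0 * flight_path_m ** 2 / t_us ** 2

def transmission(n_per_area: float, sigma: float) -> float:
    """eq. (4): neutron transmission T(E) = exp(-n * sigma(E)) for n nuclei
    per unit area and a total cross section sigma (consistent units)."""
    return math.exp(-n_per_area * sigma)

def resonance_area_sample(sum_n_x: float, sum_n_s: float,
                          m_x: float, m_s: float,
                          s_x: float, s_s: float, a_s: float) -> float:
    """eq. (5): transmission-curve area A_x of the investigated sample,
    scaled from the standard sample via the counts under the resonance
    peaks (sum_n), the monitor counts (m) and the sample areas (s)."""
    return (sum_n_x * m_s * s_s) / (sum_n_s * m_x * s_x) * a_s

# on the 58.6 m flight path of the 3rd channel (from the text), a time of
# flight of 100 microseconds corresponds to roughly 1.79e3 ev:
energy = neutron_energy_ev(100.0, 58.6)
```

the inverse step, recovering n_x from A_x, additionally requires the tabulated resonance parameters and the doppler-broadened cross section, which is what the program based on the algorithm of ref. [14] provides.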
since the neutron flux intensity decreases from the center of the beam to its edge, it makes sense to measure the investigated object in parts, placing each part at the center of the beam line. this experiment was carried out, and two spectra were obtained (the handle and the bucket measured separately), which were summed taking into account the monitoring coefficients (figure 4, figure 5). the analysis results are presented in table 1. nrca has limitations related to the neutron flux intensity currently available at the iren facility, to the type of detector system used in an experiment and to the matrix of elements in the investigated object. besides, the method has low sensitivity to elements lighter than iron and to elements with atomic masses close to magic mass numbers (bismuth, lead, etc.). additional measurements were therefore carried out by x-ray fluorescence (xrf) at 4 points of the archaeological object by means of a portable spectrometer 5i tracer (bruker) (table 2). the elemental compositions of the kyathos determined by nrca and by xrf are in agreement; some differences in wt% are well understood due to the different nature of the analyses. xrf analysis determines the elemental composition on the surface. nrca, in turn, measures the elemental and isotopic composition not just on the surface but in the bulk (through the whole volume of the object), so the analytical results are not significantly affected by surface corrosion.
5. conclusion
nrca carried out at the intense resonance neutron source (iren) facility on the kyathos from the necropolis volna 1 on the taman peninsula showed that this method delivers satisfactory results. the mass of the kyathos, determined by weighing, is 86.7 g. according to the nrca results, the total mass of the determined elements coincides with the mass of the kyathos within the margin of error, considering the presence of silicon, aluminum and iron on the surface of the object.
the elemental compositions of the kyathos determined by nrca and by xrf are in agreement. the results obtained confirm the use of tin bronze to make the investigated kyathos. the recorded presence of tin in the composition of the alloy and its quantitative characteristics, along with the established presence of arsenic, refer us to the type of alloy characteristic of archaic greece [17]. the presence of such elements as arsenic and silver points to the conclusion that the copper was obtained from polymetallic ores. nrca allows not only identifying the elemental and isotopic composition of a sample but also makes it possible to determine the amounts of elements and isotopes in the whole volume of the object. the method is non-destructive, and the induced activity of the bronze samples is practically absent. all this makes it promising for the research of archaeological artifacts and objects of cultural heritage. although the number of facilities that can provide suitable neutron beams is limited, nrca might be a useful additional analyzing technique.
acknowledgement
the authors express their gratitude to the staff of the iren facility, to a. p. sumbaev, the head of the development of the facility, and to v. v. kobets, the head of sector no. 5 of the scientific and experimental injection and ring division of the nuclotron (veksler and baldin laboratory of high energy physics), for supporting the uninterrupted operation of the facility during the measurements.
references
[1] h. postma, m. blauw, p. bode, p. mutti, f. corvi, p. siegler, neutron-resonance capture analysis of materials, journal of radioanalytical and nuclear chemistry 248(1) (2001), pp. 115-120. doi: 10.1023/a:1010690428025
[2] h. postma, p. schillebeeckx, neutron resonance analysis, in neutron methods for archaeology and cultural heritage. neutron scattering applications and techniques, ed. by n. kardjilov and g. festa (springer, cham, 2017), pp. 235-283.
[3] g.
gorini (ancient charm collab.), ancient charm: a research project for neutron-based investigation of cultural-heritage objects, nuovo cim. c 30 (2007), pp. 47-58. doi: 10.1393/ncc/i2006-10035-9
figure 10. dependence of the value A on the number of nuclei and the resonance parameters, taking into account δ (the doppler effect) [15].
table 1. the results of the measurements with the kyathos by nrca (the bulk).
№  element  mass, g            weight, %
1  cu       59.7 ± 3.9         68.8 ± 4.5
2  sn       5.29 ± 0.23        6.10 ± 0.26
3  as       0.1892 ± 0.0081    0.2179 ± 0.0094
4  ag       0.0131 ± 0.0014    0.0151 ± 0.0016
table 2. the results of the measurements with the kyathos by xrf, averaged over four points (the surface).
№  element  weight, %
1  cu       61.63 ± 0.31
2  si       21.57 ± 0.15
3  al       7.02 ± 0.19
4  sn       4.58 ± 0.25
5  fe       4.254 ± 0.075
6  pb       0.684 ± 0.073
7  ti       0.327 ± 0.057
8  as       0.050 ± 0.017
[4] h. hasemi, m. harada, t. kai, t. shinohara, m. ooi, h. sato, k. kino, m. segawa, t. kamiyama, y. kiyanagi, evaluation of nuclide density by neutron transmission at the noboru instrument in j-parc/mlf, nucl. instrum. methods phys. res., sect. a 773 (2015), pp. 137-149. doi: 10.1016/j.nima.2014.11.036
[5] n. v. bazhazhina, yu. d. mareev, l. b. pikelner, p. v. sedyshev, v. n. shvetsov, analysis of element and isotope composition of samples by neutron spectroscopy at the iren facility, physics of particles and nuclei letters 12 (2015), pp. 578-583. doi: 10.1134/s1547477115040081
[6] o. v. belikov, a. v. belozerov, yu. becher, yu. bulycheva, a. a. fateev, a. a. galt, a. s. kayukov, a. r. krylov, v. v. kobetz, p. v. logachev, a. s. medvedko, i. n. meshkov, v. f. minashkin, v. m. pavlov, v. a. petrov, v. g. pyataev, a. d. rogov, p. v. sedyshev, v. g. shabratov, v. a. shvec, v. n. shvetsov, a. v. skrypnik, a. p. sumbaev, a. v. ufimtsev, v. n. zamrij, physical start-up of the first stage of iren facility, journal of physics: conf. ser. 205, 2010, 012053.
doi: 10.1088/1742-6596/205/1/012053
[7] o. v. belikov, a. v. belozerov, yu. becher, yu. bulycheva, a. a. fateev, a. a. galt, a. s. kayukov, a. r. krylov, v. v. kobetz, p. v. logachev, a. s. medvedko, i. n. meshkov, v. f. minashkin, v. m. pavlov, v. a. petrov, et al., physical start-up of the first stage of iren facility, j. phys.: conf. ser. 205 (2019), art. no. 012053. doi: 10.1088/1742-6596/205/1/012053
[8] e. a. golubkov, v. v. kobets, v. f. minashkin, k. i. mikhailov, a. n. repkin, a. p. sumbaev, k. v. udovichenko, v. n. shvetsov, the first results of the commissioning of the second accelerating section of the lue-200 accelerator of the iren installation, soobshch. oiyai r9-2017-77 (dubna, oiyai, 2017).
[9] h. maletsky, l. b. pikelner, k. g. rodionov, i. m. salamatin, e. i. sharapov, detector of neutrons and gamma rays for work in the field of neutron spectroscopy, communication of jinr 13-6609 (dubna, jinr, 1972), pp. 1-15 (in russian).
[10] r. a. mimokhod, n. i. sudarev, p. s. uspensky, the necropolis volna 1 (2017) (krasnodar territory, taman peninsula), rescue archaeological research materials 25 (2018), pp. 220-231.
[11] r. a. mimokhod, n. i. sudarev, p. s. uspensky, necropolis volna-1 on the taman peninsula (2019), new archaeological projects. recreating the past. to the 100th anniversary of russian academic archeology. edited by makarov m., ia ran, 2019, pp. 80-83.
[12] s. f. mughabghab, neutron cross sections, neutron resonance parameters and thermal cross sections, academic press, new york, 1984, isbn: 9780125097017.
[13] s. i. sukhoruchkin, z. n. soroko, v. v. deriglazov, low energy neutron physics, landolt-bornstein vol. i/16b, berlin: springer verlag, 1998, isbn: 3540608575.
[14] v. n. efimov, i. i. shelontsev, calculation for graphs of determining the parameters of neutron resonances by the transmission method, communications of the jinr p-641 (dubna, 1961), pp. 1-19 (in russian).
[15] p. v. sedyshev, n. v. simbirtseva, a. m. yergashov, s. t. mazhen, yu. d.
mareev, v. n. shvetsov, m. g. abramzon, i. a. saprykina, determining the elemental composition of antique coins of phanagorian treasure by neutron spectroscopy at the pulsed neutron source iren in flnp jinr, physics of particles and nuclei letters 17(3) (2020), pp. 389-400. doi: 10.1134/s1547477120030139
[16] n. v. simbirtseva, p. v. sedyshev, s. t. mazhen, a. m. yergashov, i. a. saprykina, r. a. mimokhod, preliminary result of investigation of element composition of kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture analysis, imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, 22-24 october 2020, proceedings, 2020. online [accessed 24 september 2022] https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-073.pdf
[17] s. orfanou, early iron age greek copper-based technology: votive offerings from thessaly, institute of archaeology, ucl, thesis submitted for phd in archaeology/archaeometallurgy, 2015, p. 87, tab. 2.2.
online [accessed 28 august 2022] https://discovery.ucl.ac.uk/id/eprint/1471577/1/orfanou_orfanou%202015%20early%20iron%20age%20greek%20copper-based%20technology.pdf
experiment assisting system with local augmented body (easy-lab) in dual presence environment
acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-6
acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1
ahmed alsereidi1, yukiko iwasaki1, joi oh2, vitvasin vimolmongkolporn2, fumihiro kato2, hiroyasu iwata3
1 waseda university, graduate school of creative science and engineering, tokyo, japan
2 waseda university, global robot academic institute, tokyo, japan
3 waseda university, faculty of science and engineering, tokyo, japan
section: research paper
keywords: vr/ar; hands-free interface; telecommunication; teleoperation
citation: ahmed alsereidi, yukiko iwasaki, joi oh, vitvasin vimolmongkolporn, fumihiro kato, hiroyasu iwata, experiment assisting system with local augmented body (easy-lab) in dual presence
environment, acta imeko, vol. 11, no. 3, article 3, september 2022, identifier: imeko-acta-11 (2022)-03-03 section editor: zafar taqvi, usa received february 26, 2022; in final form july 21, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: ahmed alsereidi, e-mail: ahmed@akane.waseda.jp 1. introduction there has been a notable amount of research and development in telepresence robotics, and since the occurrence of covid-19, telepresence robots have been used to improve subject experiments. in previous studies, researchers and engineers frequently ask individuals to complete a task, watch their behaviour, or evaluate the usability of new technologies. as researchers, we often need to observe how a subject acts in an experiment and provide guidance and instructions during the experiment. the easy-lab system was designed to allow such experiments to be carried out from a remote location. the designed system uses a 6-dof robotic head that utilizes differential gears to imitate human head-waist motion. the neck, the waist, and a detachable mechanism make up the mechanical design of the detachable robotic head. also, the maximum latency between the user and the robot is 25 ms, which is low enough for human perception [1]. this was verified by comparing the human head motion and the robot head motion side by side; both moved identically, as shown in figure 1. figure 1. detachable head robot used for testing. abstract this paper introduces a system designed to support subject experiments when the situation does not allow the experimenter and the subject to be in the same place, such as during the covid-19 pandemic, when everyone relied on video conference applications, which have their limitations.
using solely video and voice, it is difficult to direct an experiment with a video conferencing system. the system we developed allows an experimenter to actively watch and interact with the subject; even when operating from a distant area, it is still possible to conduct experiments. another important aspect this study focuses on is the case where several subjects are required and the experimenter must be able to guide both subjects equally well. the proposed system uses a 6-dof robotic arm with a camera and a laser pointer attached to it on the subject side. the experimenter controls it using a head-mounted display, and it moves in correspondence with the head movement, allowing for easy instruction and intervention on the subject side. a comparison with other similar research is also covered. the study focuses mainly on which viewing method is the easiest for the experimenter to use, and on whether teaching one subject at a time gives better results than teaching two subjects simultaneously. this system can also intervene using a laser pointer to point at the object being worked on. the joint angles of the robot are calculated by inverse kinematics (ik) from the acquired head movements of the experimenter and reflected in the real robot. photon unity networking was used to sync the motion of the experimenter and the robot remotely [2]; it is a package usually used for multiplayer games due to its flexible matchmaking, where objects can be synced over the network. figure 2 shows the system diagram of how the system sends and receives data. another point to consider is that the system must be user-friendly for most people. tele-operated robots have been making advances in this field, as well as full-body immersion systems that imitate human motion [3], [4].
those systems focus on immersion and are lacking in terms of usability, as they are heavy and difficult to manoeuvre. there are some cases where multiple subjects are required for a certain experiment; in such cases multiple robots would be needed, at least one robot per subject. there are several requirements to fulfil for this system. first, instructions given to the subjects should be transmitted correctly in as little time as possible. second, the experimenter must have clear visibility of both subjects' surroundings. third, the system must allow the experimenter to be present in several locations at the same time (dual presence), i.e., it must allow users to freely switch and reallocate their attention. in telepresence systems, to fully immerse a user in a remote environment, it is preferable that the user devotes his or her undivided attention to it. teleoperation work efficiency improves when the system delivers a greater sensation of immersion and presence [5]. in this study, we focused on creating a dual presence system, so the experimenter needs to pay attention to two remote environments simultaneously and be able to focus their attention as needed between environments. in this research, we aim to develop a system that allows experimenters to achieve dual presence and monitor both subjects. we propose two types of visual environment presentation, evaluate them in a set of experiments, and then compare and discuss the results. the methods used are as follows: a) split screen: the experimenter's head-mounted display (hmd) screen is split in the middle horizontally and shows subject a's environment on the top, while subject b's environment is on the bottom. b) superimposed screen: the experimenter's hmd screen shows only one image: either environment a or environment b, or both superimposed with 50% transparency.
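the superimposed viewing method described above is, in essence, an equal-weight alpha blend of the two environment images. the sketch below is an illustrative assumption on our part (the actual system renders this in unity, not python): it blends two small grayscale frames with 50% transparency, as in viewing method b).

```python
# illustrative sketch (not the authors' unity implementation): the
# superimposed view blends the two environment images with equal weight.
def superimpose(img_a, img_b, alpha=0.5):
    """blend two images (nested lists of grayscale pixel values 0-255):
    each output pixel is alpha * a + (1 - alpha) * b."""
    return [
        [alpha * pa + (1 - alpha) * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

env_a = [[200, 200], [200, 200]]     # bright frame from environment a
env_b = [[0, 100], [100, 0]]         # darker frame from environment b
blended = superimpose(env_a, env_b)  # 50% transparency, as in method b)
```

with alpha fixed at 0.5 both environments remain equally visible, which is exactly what makes the superimposed view hard to read at times, as the experiments later show.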
the subjects' feelings towards the robot must also be taken into consideration when conducting this experiment. therefore, the system was designed to operate in three different modes, mode 0, 1 and 2: a) mode 0: the robots in both environments always follow the experimenter's motion. b) mode 1: the robot in environment a moves according to the experimenter's motion and the robot in environment b is static. c) mode 2: the robot in environment b moves according to the experimenter's motion and the robot in environment a is static. the goal of this paper is to study the idea of dual presence and how well it can be implemented, as well as means of improving it. experiments were conducted to evaluate the system, compare the methods used, and finally suggest a way to improve remote experimenting and advance multi-presence research. 2. proposed method when using easy-lab, all participants improved their task performance and recorded higher scores in the subjective evaluation. this result suggests easy-lab's effectiveness, at least in tasks that require pointing and observation from multiple sides [2]. there are three main objectives to achieve with this experiment. first, fulfil the given task successfully. second, switch between the two robots with ease and be able to exist in two different places at the same time. finally, give the subjects the feeling that the robot, or at least the person operating it, is human. as for the experiment itself, the same task is used for all conditions. the robot head has a laser pointer attached to the camera, and it follows the head movement of the user as shown in figure 3. the local operator's head motion is measured by a vive pro hmd (htc). the local software is composed using the unity (version 2019.4.17f1, unity technologies) vr simulator. for tcp/ip communication between the local pc and the remote pc, the ros# library was used.
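the three operation modes above, cycled by a single button press (mode 0 -> mode 1 -> mode 2, as described later), form a small state machine. the following python sketch is an assumption on our part, not the authors' unity/c#/photon code; it only illustrates the control logic: which environment robots mirror the experimenter's head motion in each mode.

```python
# illustrative state machine for the easy-lab operation modes
# (hypothetical names; the real system is implemented in unity with
# photon unity networking and ros#).
MODE_DESCRIPTIONS = {
    0: "both robots follow the experimenter",
    1: "robot in environment a follows, robot in b is static",
    2: "robot in environment b follows, robot in a is static",
}

class ModeController:
    def __init__(self):
        self.mode = 0  # start with both robots active (mode 0)

    def on_trigger_press(self):
        # every vive-controller trigger click cycles mode 0 -> 1 -> 2 -> 0
        self.mode = (self.mode + 1) % 3
        return self.mode

    def robots_to_move(self):
        # environments whose robot should mirror the head motion right now
        return {0: ("a", "b"), 1: ("a",), 2: ("b",)}[self.mode]

ctrl = ModeController()
ctrl.on_trigger_press()  # advances from mode 0 to mode 1
```

a single-button cycle keeps the input as simple as the paper requires, at the cost of sometimes having to click twice to reach the desired mode.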
for this research, the system was tested over the same network in one building; in the future we would like to broaden the scale, operate it from a different town or country, and measure the difference in delay. technology-wise, however, it is possible to operate from any distance if there is internet access. there are two subjects in this experiment, and the second robot head is the same as the one shown in figure 3 but placed in front of the second subject. the input required from the experimenter is kept simple to improve usability: the vive controller's trigger button is used to choose which robot to control; a detailed explanation is given in the next section. on the subjects' side, each subject is given a piece of paper with holes in it, as shown in figure 4, while on the experimenter's side there is an answer sheet that can be used as a reference. the pattern of the answer sheet is generated randomly every time so that the same pattern is never repeated. the experimenter points the laser pointer at hole #1 and waits for 3 seconds; after 3 seconds have passed, the subject inserts a thread into that hole, and so on until hole #6. figure 2. easy-lab system diagram. figure 3. easy-lab system and experimenter. the answer sheet is colour coded as well to make it easier for the experimenter to read. once all the holes have been connected, the subject stops working on the task, and the experimenter must notice through the camera that all holes are connected properly, completing the task. also, a timer is started as soon as the task starts; when the experimenter confirms the task completion visually, he presses a button to stop the timer. 3. system configuration 3.1. experimenter system configuration the outcome we are studying on the experimenter side is the effect of changing the way the environment visuals are shown and the usability of the proposed interface in each mode.
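the task setup above states that the answer-sheet pattern is generated randomly each run and never repeats the previous pattern. one way to do that (a hypothetical sketch; the paper does not describe its generator) is to draw a random permutation of the six holes and re-draw if it matches the last one:

```python
import random

# hypothetical sketch of the randomized answer sheet: a random ordering
# of the six holes, re-drawn if it equals the previously used pattern so
# that the same sequence is never repeated (as the task setup requires).
def new_answer_pattern(previous=None, n_holes=6):
    pattern = random.sample(range(1, n_holes + 1), n_holes)
    while pattern == previous:
        pattern = random.sample(range(1, n_holes + 1), n_holes)
    return pattern

first = new_answer_pattern()
second = new_answer_pattern(previous=first)  # guaranteed to differ
```

with 6! = 720 possible orderings, a repeated draw is rare, so the re-draw loop almost never runs more than once.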
the basic requirement in presenting the detachable head's vision is to familiarize the experimenter with the remote environments and make them aware of the state of each subject and the task he/she is performing. in addition, another important requirement is that the user can allocate their attention to the two environments at will while using this interface to perform the co-presence tasks. in this section, we design both the concurrent vision presentation system to relay environment information and the modes for control. regarding the environment information, the easiest way for most humans to become familiar with an environment is to see it; so, we designed a system that presents images of the two environments simultaneously from a first-person view, since this provides a high level of immersion [6]. recent studies have proposed several presentation systems for visuals, such as a split screen arranging several images in one screen [7], and others of a screen superimposition type used for superimposing half-transparent images [8]. in this research, we use both methods. the screen superimposition viewing method is used because it can provide two first-person view images simultaneously, while the second viewing method splits the two environments into two screens. the other requirement of this system is to allow users to easily switch or reallocate their attention. researchers have proposed several methods for switching attention easily between two transparent images, such as changing the transmission ratio of the two images via a foot pedal [9] or via the user's gazing point [10], but for this research we aimed for a simpler method, which is pressing a button on the vive controller, since the hmd used is also a vive. the mode changes as follows with every button click: mode 0 -> mode 1 -> mode 2. 4.
experimenter viewing method the first viewing method is the two superimposed screens, obtained by setting two depth-separated planes in different places as shown in figure 5 a). the experimenter can see both subjects at the same time; a transparency of 50% is used for both images. moreover, since we want to test the difference between superimposed and split screens, the 50% transparency effect is used for all viewing methods. the second viewing method is where the experimenter sees both images in the split-screen layout (figure 5 b). 4.1. experimenter operation modes ideally, the system should be evaluated with several tasks to verify its usability and to avoid dependence on the task itself as much as possible, but this one task should be sufficient for evaluating the dual presence viewing and control methods. the evaluation is based on meeting the following conditions: a) point at the correct holes as shown in the answer sheet. b) instruct the subjects as quickly as possible; depending on the viewing and control method, the time is expected to vary. c) make sure the subjects perform the task correctly; in addition, notice as soon as possible when a subject makes a mistake. for every experiment, the experimenter was given time to test each mode and train for ~3 minutes to become familiar with the system. also, the task was performed 4 times, with conditions changing every time, as follows: a) superimposed screens / mode 0: the user can see both environments at the same time and both robots move all the time. when changing from mode 0 to 1 in this case, mode 1 only shows the answer sheet, and the user can go back to operating the robots by pressing the same button again to return to mode 0. b) superimposed screens / mode 1&2: the user can see only environment a when in mode 1 and controls the robot in the same environment, while the robot in environment b stops; vice versa when in mode 2. moreover, in this case, mode 0 shows the answer sheet.
c) split screen / mode 0: the user can see both environments at the same time and both robots move all the time. when changing from mode 0 to 1 in this case, mode 1 only shows the answer sheet, and the user can go back to operating the robots by pressing the same button again to return to mode 0. d) split screen / mode 1&2: the user can see both environments when in mode 1 but controls only the robot in environment a, while the robot in environment b stops; vice versa when in mode 2. moreover, in this case, mode 0 shows the answer sheet. figure 4. paper used to connect the dots in the experiment. figure 5. overview of the viewing methods. a) superimposed. b) split screen. 4.2. results and discussion on experimenter operation in this section, we verify the usability of the developed system through a user study. for this experiment, we asked for the cooperation of six robotics researchers (mean age 24, sd 1.26, male 5, female 1) who were previously involved in subject experiments. each experimenter instructed 2 subjects at the same time; the 12 participants who acted as subjects are also researchers familiar with robotics (mean age 23, sd 1.31, male 10, female 2). the experimenter and the two subjects were in locations where they could not see each other; the only communication method was the laser pointer attached to each robot, used to guide the subjects to complete the task. next, the experimenter points at the holes in the correct order as described in the answer sheet provided to him when the task starts. since there are two subjects, the experimenters all chose to instruct subject a to connect the first hole, then moved to guide subject b immediately, and repeated this until both finished the task (6 dots connected). depending on the method used, the average number of errors changed as shown in table 1; error was measured by how many threads were inserted into the wrong hole.
the total number of tries is also the maximum number of possible errors, and it is 12 for each method. table 2 shows the average instruction time in seconds for each method. the outcome for each method used is as follows: a) 3 errors were made in total, the highest of any operation method. this result was expected: when the two screens are superimposed, it is confusing at times. some users reported that one of the subjects connected a hole without instruction; this happened because both robots move at the same time, and while the experimenter was instructing subject b, subject a's robot was also moving. another experimenter reported that it was difficult to distinguish between the two laser pointers. as for the time, this method had the longest operation time; it took the users more time to distinguish between the two environments, which increased the operation time more than necessary. b) no errors were made in this method. this is because the user can focus entirely on one environment and ignore the other one until the instruction is finished, as one of the users reported. the average time was 179 seconds, 19 seconds faster than method (a); even though the viewing method was the same, the average time improved because it was faster to instruct one person at a time, even though switching between environments took some time. c) in this method, the total number of errors is 2; again, the reason for the errors is that when guiding subject b, subject a's robot also moves and sometimes gives wrong pointing to the subject. the average time taken is 187 seconds, faster than method (a) but slower than method (b). d) this method resulted in one error, which is most likely due to human error. the average time in this method is 157 seconds, the fastest of all the methods.
one user reported that the operation was very smooth: checking the answer sheet in mode 0, instructing subject a in mode 1, instructing subject b in mode 2, and repeating. we can see from the results above that the best method for the experimenter is method (d) if the task's main concern is time, as it gave the best instructing time and only one error. however, method (b) is a better candidate if the content of the task allows no errors to be made, as it is the easiest to focus on. after the experiment was over, a questionnaire was given to the experimenters; it is shown in table 3, and the answers were based on a linear scale from 1-7 for all questions. the results of the questionnaire for each method are as follows: a) in this method, users reported that it was hard to see most of the time. they also felt that the two environments existed in the same place. the average answer was in the middle of the scale, similar to question 4. they also felt that the time taken to instruct was too long. b) for this method, most users reported that it is easy to see, and they felt as if they existed in two different locations at the same time. it was easy to instruct both subjects in this method, and almost no one got confused when instructing in this setting. users reported the time taken to be a little lower than in method (a), but it is still considered a long time. c) in this method, users reported that it was easier to see the environments. some users felt that they existed in two different places while others felt that they existed in the same place, with most answers toward the middle of the scale. most users were able to instruct very well. question 4 was also in the middle of the scale, while all users reported that the instructing time was short. d) all the results are exactly the same as (c) except for question 4: in this method, no one got confused.
the results of the questionnaire show that the users had the best experience overall when operating with method (d). 4.3. results and discussion on subject operation the focus of the research on the subject side is to study the effect of the experimenter having to instruct two people at once and how the subjects react to it, especially when changing operation methods. the requirements on this part are as follows: a) fulfil the given task successfully, quickly and with minimum errors. b) be able to feel the presence of the experimenter instructing them.
table 1. average instruction error.
method — instructional error (times)
(a) superimposed screens / mode 0 — 3
(b) superimposed screens / mode 1&2 — 0
(c) split screen / mode 0 — 2
(d) split screen / mode 1&2 — 1
table 2. average instruction time.
method — instructional time (s)
(a) superimposed screens / mode 0 — 198.7
(b) superimposed screens / mode 1&2 — 179.3
(c) split screen / mode 0 — 187.4
(d) split screen / mode 1&2 — 157.1
before starting the task, only the experimenter knows which type of operation mode is used. however, depending on the mode used, the subjects reacted differently to the task, so a survey of four questions was conducted after each task to further investigate. the questionnaire (table 4) had a linear scale from 1-7, similar to the experimenter survey. the method labels are the same as in table 2, and the results are as follows: a) in this method, some users did not complete the task successfully and had a harder time following the instructions. most users also felt that the instructor was always watching, and they felt his presence most of the time. b) most users had no issues completing the task in this mode. on the other hand, they felt the presence of the instructor less than in method (a). c) results of this method are the same as (a). d) results of this method are the same as (b).
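the per-mode "easy to follow" ratings gathered in this survey are paired samples, which is why the paper analyzes them with a wilcoxon signed-rank test. the sketch below (pure python; the ratings shown are made-up for illustration, not the study's data) shows how the test statistic w is computed: rank the absolute paired differences (average ranks for ties, zero differences dropped), then take the smaller of the positive- and negative-rank sums; w is then compared against a critical-value table.

```python
# illustrative wilcoxon signed-rank statistic for paired 1-7 ratings
# (the data below are invented for demonstration, not the study's).
def wilcoxon_w(scores_mode0, scores_mode12):
    # paired differences; zero differences are dropped, per the standard test
    diffs = [a - b for a, b in zip(scores_mode0, scores_mode12) if a != b]
    # rank |d| ascending, assigning average ranks to tied values
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(v):
        first = abs_sorted.index(v) + 1      # rank of first occurrence
        count = abs_sorted.count(v)          # number of tied values
        return first + (count - 1) / 2       # average rank of the tie group

    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    # the test statistic is the smaller signed-rank sum; significance is
    # decided by comparing it with a critical-value table
    return min(w_plus, w_minus)

mode0 = [5, 6, 7, 4, 6, 3]    # invented "hard to follow" ratings, mode 0
mode12 = [2, 3, 3, 2, 4, 5]   # invented ratings, modes 1&2
w = wilcoxon_w(mode0, mode12)  # -> 2.0 for these illustrative ratings
```

a statistic below the table's critical value rejects the null hypothesis of no difference, which is the form of the comparison reported in this section (test statistic 5 against critical value 8).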
from the answers to the survey, it can be seen that the viewing method of the experimenter has no effect on the subjects' performance. while the operation mode differs, users felt more at ease when the robots were moving all the time, which made them feel the presence of the instructor more. to further examine this, the wilcoxon signed-rank test shown in figure 6 was used to verify which robot was easier to follow, with 7 being hard to follow and 1 being easy to follow. the score was significantly improved when mode 1&2 was used compared to mode 0, as the test statistic is lower than the critical value (5 < 8), so we reject h0. this is sufficient evidence that there is a difference between the two modes in terms of which one is easier to follow. 5. discussion on the practical application of easy-lab in this section, the practical application of the proposed method presented in this study is discussed. the advantages of using easy-lab in dual presence settings and the methods used can be clarified by comparing this method with other manipulation methods. following the comparison, concerns about using this interface in real life are discussed. 5.1. comparison with other similar methods based on the results of the previous section, the proposed method was compared with other similar methods: a. gesturecam: a video communication system for sympathetic remote collaboration. the operator wears the sharedview system; the shared camera's image is sent to a display at the instructor's site, and the instructor uses gestures in front of the display. the display and the gestures are captured by a camera and sent back to the operator's hmd. in this way, the instructor can give instructions with gestures [11]. easy-lab provides a more modern use case and adds the ability to increase the number of robots as needed; in this research, two robots were required. b.
use of gaze and hand pointers in mixed reality remote collaboration. the system supports the use of hand gesture, sketch, hand pointer, and gaze pointer visual cues for communication in the collaboration. it tracks the remote expert's hands to use the visual cues and employs a 360-degree camera to share the task space [12]. this system requires more practice to get used to its operation and takes more time than easy-lab to instruct someone. c. telesar vi: telexistence surrogate anthropomorphic robot vi. telesar vi is a newly developed telexistence platform for the accel embodied media project. it was designed and implemented with a mechanically unconstrained full-body master cockpit and a 67 degrees-of-freedom (dof) anthropomorphic avatar robot. the avatar robot can operate in a sitting position, since the main area of operation is intended to be manipulation and gestures. the system provides a full-body experience of our extended "body schema," which allows users to maintain an up-to-date representation in space of the positions of their different body parts, including their head, torso, arms, hands, and legs [13]. while this system provides more precise movements, it is too expensive for most researchers and requires experience to operate; its weight and large size also make it hard to move. 5.2. limitations in this study, only a laser pointer was used to give instructions; the purpose was to ensure that the operation method does not interfere with verifying which viewing method or operation mode is better, so it was kept simple. a disadvantage of this is that it is hard to transmit table 3. experimenter evaluation questionnaire.
question
qe1 can you see both environments a & b easily?
qe2 do you feel that you exist in 2 different places at the same time, or do you feel that both environments exist in the same place?
qe3 were you able to instruct the 2 subjects equally well?
qe4 did you get confused with the instructing when you switched environments?
qe5 did you feel that the time taken to instruct the subjects was too long?
figure 6. wilcoxon signed-rank test on which robot was easier to follow.
table 4. subject evaluation questionnaire.
question
qe1 were you able to fulfil the task successfully?
qe2 was the instruction of the robot easy to follow?
qe3 were you able to feel the presence of the person instructing you?
qe4 did you feel that the instructor is always watching you and not someone else?
information other than the pointing. a button to stop the laser pointer in one environment should be added to reduce confusion as much as possible. another useful feature for this system would be a method of transmitting the experimenter's audio to the subjects being instructed. one more restriction faced was the small number of experimenters; to provide more concrete results and findings, the number of subjects should be increased. 6. conclusions in this research, we proposed using easy-lab to perform dual presence operation control. the task performed had 4 different settings. first, two different viewing methods were selected and tested accordingly: superimposed screens and split screens. it was found that the split screen viewing method provided better results; the time taken to complete the task was shorter, with a minimum number of errors. there were two operation modes used in this system: one mode moved all robots at the same time, while the other allowed the experimenter to choose one robot to control at a time.
the best control method in terms of ease of use and fewest errors is to control one robot at a time. furthermore, on the subject side, users had an easier time following the instructions of the robot when one robot at a time was being controlled. in the future, an intuitive posture instruction method will be developed that allows more information to be transmitted and provides a greater sense of being present in multiple places at the same time. beyond the control method, the next step is to increase the number of robots and subjects to evolve the system from dual presence to multi-presence. we have yet to test the limit of how many subjects a single person can instruct using easy-lab. as the number of subjects increases, the control method must also be revised to accommodate such a system. finally, this system has the potential to be used by the masses in education, conferences, etc.; therefore, further testing is needed in these environments, as this study might suggest a new method of working with other humans from remote places. acknowledgement this research is supported by waseda university global robot academic institute, waseda university green computing systems research organization and by jst erato grant number jpmjer1701, japan. references [1] v. vimolmongkolporn, f. kato, t. handa, y. iwasaki, h. iwata, design and development of 6 dofs detachable robotic head utilizing differential gear mechanism to imitate human head-waist motion, 2022 ieee/sice international symposium on system integration (sii), narvik, norway, 9-12 january 2022, pp. 467-472. doi: 10.1109/sii52469.2022.9708793 [2] y. iwasaki, j. oh, t. handa, a. a. sereidi, v. vimolmongkolporn, f. kato, h. iwata, experiment assisting system with local augmented body (easy-lab) for subject experiments under the covid-19 pandemic, acm siggraph 2021 emerging technologies, virtual, 9-13 august 2021, pp. 1-4. doi: 10.1145/3450550.3465345 [3] i. yamano, t.
maeno, five-fingered robot hand using ultrasonic motors and elastic elements, proc. of the 2005 ieee international conference on robotics and automation, barcelona, spain, 18-22 april 2005, pp. 2673–2678. doi: 10.1109/robot.2005.1570517 [4] j. butterfass, m. grebenstein, h. liu, g. hirzinger, dlr-hand ii: next generation of a dextrous robot hand, proc. of the 2001 icra, ieee int. conference on robotics and automation (cat. no.01ch37164), seoul, korea (south), 21-26 may 2001, vol. 1, pp. 109–114 doi: 10.1109/robot.2001.932538 [5] w. zhou, j. zhu, yutao chen, jie yang, erbao dong, hao zhang, xuming tang, visual perception design and evaluation of electric working robots, ieee int. conference on mechatronics and automation, tianjin, china, 4-7 august 2019, pp. 886–891. doi: 10.1109/icma.2019.8816366 [6] h. debarba, e. molla, b. herbelin, r. boulic, characterizing embodied interaction in first and third person perspective viewpoints, ieee symposium on 3d user interfaces (3dui), arles, france, 23-24 march 2015, pp.67–72, 2015. doi: 10.1109/3dui.2015.7131728 [7] r. sato, m. kamezaki, j. yang, s. sugano, visual attention to appropriate monitors and parts using augmented reality for decreasing cognitive load in unmanned construction, proc. of the 6th int. conference on advanced mechatronics, no. 15-210, december 2015, p. 45. doi: 10.1299/jsmeicam.2015.6.45 [8] t. miura, behavioral and visual attention, kazama shobo, chiyoda, japan, 1996, isbn 978-4-7599-1936-3. [9] s. iizuka, y. iwasaki, h. iwata, research on the detachable body -validation of transparency ratio of displays for the co-presence dual task, the robotics and mechatronics conference, hiroshima, japan, 5-8 june 2019, paper no.2a2-l04, 2019 (in japanese). [10] m. y. saraiji, s. sugimoto, c. l. fernando, k. minamizawa, s. tachi, layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending, acm siggraph 2016, anaheim, usa, 24-28 july 2016, posters, pp. 1-2. 
doi: 10.1145/2945078.2945098 [11] h. kuzuoka, t. kosuge, m. tanaka, gesturecam: a video communication system for sympathetic remote collaboration, proc. of the 1994 acm conference on computer supported cooperative work (cscw '94), chapel hill, north carolina, usa, 22-26 october 1994, pp. 35-43. doi: 10.1145/192844.192866 [12] s. kim, a. jing, h. park, s. h. kim, g. lee, m. billinghurst, use of gaze and hand pointers in mixed reality remote collaboration, 9th int. conference on smart media and applications (sma), jeju, republic of korea, 17-19 september 2020, pp. 1-6. [13] susumu tachi, yasuyuki inoue, fumihiro kato, telesar vi: telexistence surrogate anthropomorphic robot vi, int. journal of humanoid robotics 17, 05(2020), 2050019. doi: 10.1142/s021984362050019x [14] htc vive, 2011. online [accessed 26 february 2022] https://www.vive.com/eu/product/vive/ [15] arduino. online [accessed 26 february 2022] https://www.arduino.cc/ [16] c. zaiontz, wilcoxon signed-ranks table, 2020. online [accessed 26 february 2022] http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/ [17] unity technologies japan/ucl, unity-chan!, 2014.
online [accessed 26 february 2022] https://unity-chan.com/ acta imeko december 2011, issue 0, 2 www.imeko.org acta imeko | www.imeko.org december 2011 | issue 0 | 2 journal contacts keywords: acta imeko, editorial, contacts citation: paul p.l. regtien, journal contacts, acta imeko, no. 0, december 2011, p.
2, identifier: imeko-acta-00(2011)-01-02 editor: paul regtien, measurement science consultancy, the netherlands received december 28, 2011; in final form december 29, 2011; published december 30, 2011 copyright: © 2011 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: this work was supported by measurement science consultancy, the netherlands corresponding author: paul p. l. regtien, e-mail: paul@regtien.net about the journal acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is: 2221-870x. editorial and publication board professor paul p.l. regtien (netherlands vice president for publications) dr dirk röske (germany information officer) professor antónio da cruz serra (portugal chairman of the advisory board) professor pasquale daponte (italy chairman of the technical board) francisco allegria (portugal – editorial office) sergio rapuao (italy – editorial office) imeko technical committee chairmen (ex officio) about imeko the international measurement confederation, imeko, is an international federation of actually 39 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry. addresses principal contact: paul p. l. 
regtien measurement science consultancy (msc) julia culpstraat 66 7558 jb hengelo (ov) the netherlands email: paul@regtien.net acta imeko attn. dr. dirk röske physikalisch-technische bundesanstalt (ptb) bundesallee 100, 38116 braunschweig germany support contact dirk röske email: dirk.roeske@ptb.de disarmadillo: an open source, sustainable, robotic platform for humanitarian demining acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1 9 acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1 disarmadillo: an open source, sustainable, robotic platform for humanitarian demining emanuela elisa cepolina1, alberto parmiggiani2, carlo canali3, ferdinando cannella3 1 snail aid – technology for development, via fea 10, 16142 genova, italy and industrial robotics facility, italian institute of technology, via morego, 30, 16163 genova, italy 2 mechanical workshop, italian institute of technology, via san quirico, 19/d, 16163 genova, italy 3 industrial robotics facility, italian institute of technology, via morego, 30, 16163 genova, italy section: research paper keywords: humanitarian demining; open source hardware; appropriate technology citation: emanuela elisa cepolina, alberto parmiggiani, carlo canali, ferdinando cannella, disarmadillo: an open source, sustainable, robotic platform for humanitarian demining, acta imeko, vol. 11, no. 3, article 8, september 2022, identifier: imeko-acta-11 (2022)-03-08 section editor: zafar taqvi, usa received march 9, 2022; in final form august 30, 2022; published september 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: emanuela elisa cepolina, e-mail: emanuela.cepolina@iit.it 1. 
introduction “peace agreements may be signed and hostilities may cease, but landmines and explosive remnants of war (erw) are an enduring legacy of conflict”, states, in its first sentence, the landmine monitor, a comprehensive assessment of progresses in eliminating landmines, cluster munitions and other erw, published annually. according to [1], in year 2020 alone, the number of casualties of mines/erw was more than 7000, with approximately 2500 people killed and the rest injured. out of them, the majority (80%) were civilians, half of whom children. at the moment, at least 60 states and other areas are contaminated by antipersonnel landmines. among them, the countries considered to be massively contaminated (with more than 100 km2 of contaminated land) are afghanistan, bosnia and herzegovina, cambodia, croatia, ethiopia, iraq, turkey, yemen and ukraine, with the latter recently bombed with cluster munitions [2]. among these, many are also facing severe hunger, with iraq, afghanistan and yemen having more than 34% of the total population undernourished, while ethiopia more than 25% [3]. while the utmost importance of releasing land to local communities for food production and economic development is evident, the lack of intensive mechanization of the demining process surprises. disarmadillo represents a breakthrough sustainable innovation in mechanical demining technologies; it has been designed to stay behind when demining is over and serve longterm agricultural development of the country where it helped release land to local communities. it is affordable and being based on mature agricultural technology its maintenance and running costs are minimized. instead of being designed to clear mines, it is designed to collect information about mine presence either from sensors and from light ground processing and vegetation cutting. thanks to its low cost, more units can be used at the same time, helping release land to local communities faster. 
When not used in demining operations, DISARMADILLO can be reconverted to its original agricultural use and help secure food production.

Abstract
The mine action community suffers from a lack of information sharing among stakeholders. Since 2004, Snail Aid has been working on DISARMADILLO, a dramatic shift in paradigm: an open-source hardware platform for humanitarian demining. Developed mainly thanks to volunteers' work across more than 15 years, the machine is now going to get a push thanks to the DISARMADILLO+ project, a collaboration between Snail Aid – Technology for Development and the Italian Institute of Technology. The new version of the machine will be improved in terms of manoeuvrability, modularity and versatility, without compromising its characteristic features. The re-design will take into account the need to keep the cost low and the technology appropriate to the context where it will work. The ability of the machine to serve two different purposes will also be preserved: the machine will remain easily convertible to its original agricultural nature, being developed around a commercial off-the-shelf power tiller. The paper presents the machine and the research work foreseen within the new project.

The paper is organised as follows. Section 2 introduces humanitarian demining and the machines currently employed in it, together with an overview of the robotic solutions suggested for the task, highlighting shortcomings and possible improvements. Section 3 introduces the DISARMADILLO machine concept and the philosophy behind it. Section 4 describes the DISARMADILLO architecture, and Section 5 the features of DISARMADILLO+, the new version of the machine under study. Then, conclusions are drawn.

2. Humanitarian demining
Humanitarian demining methods are based on manual demining, a procedure in which mines are manually detected and neutralized by a human deminer equipped with simple gardening tools such as shovels and shears, prodders and, if possible, metal detectors. Manual demining is the most versatile and trusted method and is therefore present in every demining programme. Sometimes, manual deminers work together with dogs trained to detect the explosives contained in mines. When possible, demining machines help with the physical phases of the demining process, i.e., vegetation clearance, mine detection, and removal [4]. However, the number of machines in use is surprisingly low. An in-field study [5] conducted in 2012 across six organizations in six countries recorded only 13 machines in use. The Geneva International Centre for Humanitarian Demining (GICHD) electronic catalogue of mechanical equipment used for demining operations [6] currently reports only 40 machines in use: this is the sum of the numbers inserted by a single company producing four different types of machines, the other producers having not filled in this information. Although these data definitely do not represent the whole picture, they show that mechanization in this field is extremely limited. Several reasons can account for this issue, including lack of funding, the inability to move from research and development (R&D) to practical commercial devices, cynicism about innovation among those convinced their current practices are entirely sufficient [7], the high cost of maintaining complex equipment in mine-affected countries [8] and the lack of information sharing among stakeholders. TIRAMISU, D-BOX and Demining Robots are among the largest R&D projects that recently tackled humanitarian demining.
While the first two ran in parallel and were both co-funded by the European Union (EU) within the 7th Framework Programme, the latter is still ongoing and is funded by the North Atlantic Treaty Organization (NATO) Science for Peace and Security Programme. Among them, only TIRAMISU and Demining Robots were explicitly aimed at developing new robotic vehicles, while D-BOX focused on creating an information management system [9]. The Demining Robots project employs a multi-sensor robotic platform developed in a previous phase of the project and designed specifically for research purposes and for testing innovative mine detection methods such as impulse ground penetrating radar [10]. The robotic platform, called UGO-1st, is thus not yet suitable to be fielded in demining operations. The TIRAMISU project led to the development of robotic vehicles at a higher technology readiness level (TRL), such as TEODOR, FRS Husky and the APT. The first is a tracked outdoor platform equipped with an array of five metal detectors, the second a four-wheel all-terrain vehicle equipped with an arm carrying a metal detector and an artificial nose, and the last an improvement of the LOCOSTRA machine, a four-wheel agricultural tractor modified to be used in mine-affected areas [11]. These and many other robotic platforms designed for demining have been analysed in [12], which highlights the need to address several requirements beyond increasing the safety of human deminers, such as the speed of robotic vehicles, the ability to operate over long periods of time in varied environments, the payload they can carry and their cost-efficiency. Out of all the platforms, [12] selects six for quantitative comparison across the identified requirements.
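A quantitative comparison across heterogeneous requirements, such as the six-platform comparison cited here, is commonly organized by normalizing each metric to a common scale and combining the results with weights. The sketch below illustrates only that general pattern; the metric names, ranges, weights and platform values are hypothetical placeholders, not figures from [12]:

```python
# Hypothetical illustration of a weighted multi-criteria comparison.
# All names and numbers below are placeholders, NOT data from [12].

def normalize(value, lo, hi):
    """Min-max normalization to [0, 1]; assumes higher is better.
    A cost-like metric would be inverted before being fed in here."""
    return (value - lo) / (hi - lo)

def score(platform, weights, ranges):
    """Weighted sum of normalized requirement metrics."""
    return sum(
        w * normalize(platform[m], *ranges[m]) for m, w in weights.items()
    )

# Placeholder requirement ranges and importance weights.
ranges = {"payload_kg": (0, 200), "endurance_h": (0, 10), "speed_kmh": (0, 10)}
weights = {"payload_kg": 0.4, "endurance_h": 0.4, "speed_kmh": 0.2}

platform_a = {"payload_kg": 150, "endurance_h": 8, "speed_kmh": 4}
platform_b = {"payload_kg": 40, "endurance_h": 3, "speed_kmh": 6}
```

Under this scheme a platform strong in payload and endurance (like `platform_a` above) outranks a faster but weaker one, mirroring how payload and operation time dominate the comparison discussed in the text.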
Apart from TEODOR, FRS Husky and LOCOSTRA, the comparison table includes ARES, a vehicle with four independently steered wheels [13], SILO-6, a hexapod walking robot [14], and Gryphon-IV [15], a modified moon-buggy vehicle equipped with a pantograph arm carrying a metal detector. Besides achieving better landmine detection/exposure results in field trials, Gryphon-IV and LOCOSTRA are more promising in terms of payload and operation time. Nevertheless, these two solutions have not yet found application in the field. This might be due to the fact that their development took place within R&D projects and was limited by the funding available. An open-source approach would allow the community to take ownership of the technology and the development to continue beyond research project timelines. The International Mine Action Standards (IMAS) [16] define demining machines as machines designed to be used in hazardous areas. They are divided into machines designed to detonate hazards, machines designed to prepare the ground, and machines designed to detect hazards. Machines belonging to the first group are generally heavily armoured, highly powered and very expensive to purchase. They achieve mine detonation by processing the soil at high speed with spinning tools at the front aimed at crushing or hitting whatever they encounter, thus using a large amount of power delivered by large, fuel-hungry engines. While in the past most machines on the demining technology market belonged to this first type, recently there has been a shift toward smaller [7], less powerful machines designed to prepare the ground rather than detonate hazards. Ground-preparing machines are primarily designed to improve the efficiency of demining operations by reducing or removing obstacles.
The size of machines has been decreasing over time, answering the need for more appropriate technologies, since the logistics of heavier machines are very difficult in post-conflict scenarios; at the same time, the limited practical in-field use of heavier and more powerful machines has been acknowledged. A study [5] (Figure 1) has shown that the efficiency of these machines in terms of mine detonation is well below expectations and that, therefore, in most cases another mine clearance asset, usually manual deminers or mine detection dogs, has to follow the machine and complete its task. The shift toward smaller machines has occurred along with a change in paradigm aiming to employ resources more efficiently. The land release process has been promoted and is now largely employed to reduce the time by which suspected hazardous areas (SHAs) are released to local communities. According to [16], mechanical land release involves a machine being used to indicate or confirm the presence or absence of landmines and/or ERW within a suspected or confirmed hazardous area. The aim is to enable the deployment of other demining assets only in areas proven to contain landmines and/or ERW, including unexploded sub-munitions. In other words, machines to be employed in mechanical technical survey mainly need to verify the absence of mines in a given area; if they encounter an explosion, the area needs to be re-categorized and further processed. This means that machines used in technical survey need to process the ground and to resist, or not be severely damaged by, only one explosion at a time, while keeping the operator safe. Although recent trends allow the introduction of smaller, lightly armoured and powered, more cost-efficient machines designed to perform multiple ground preparation tasks, and major steps have been taken in this direction, there is still progress to be made.
Figure 2 reports the size and power of the demining machines available now. Data have been extrapolated from the GICHD electronic catalogue and from the websites of manufacturers that recently exhibited their equipment at conference venues. As can be seen, the average weight of demining machines available on the market is still above 10 tonnes, and the average power is 300 hp. Going smaller and more versatile might be useful not only for humanitarian demining, allowing the number of machines in use to increase, but also, in light of reconverting demining machines to food production, for sustainable agricultural mechanization. In fact, according to [17], about 90 % of farmers worldwide operate on a small scale, and the technology must become accessible to this large group. Reference [18] highlights, as a key factor for the successful adoption of agrobots in developing countries, the capacity to design and offer technical solutions at a low (affordable) cost but with a high impact. Again, [18] estimates that small robots at an affordable price for purchase or hire represent a potential alternative in areas where manpower is scarce and conventional machinery is not available or is too costly for smallholders.

3. DISARMADILLO
The work on the DISARMADILLO machine started in 2004 with a one-month visit to mine action activities in Sri Lanka. During the trip, groups of deminers were interviewed to steer the research in the right direction, better understand local needs and establish reciprocal trust between local people and researchers. Most notably, information was gathered by working on the functional requirements for a system of demining machines operating close to the deminers. When deminers were asked about their preferences for new machine technology, they expressed a strong desire for machines that were small, light and inexpensive.
They wanted machines to help with the most boring and difficult parts of their job, particularly cutting vegetation and processing the ground, especially the hardest ground, which, according to local procedures, is scarified using a simple tool called a heavy rake to remove the soil hiding mines [19]. Based on these findings, the first version of the DISARMADILLO machine, called the participatory agricultural technology machine (PAT machine), was built within the first author's PhD work [20]. The work on DISARMADILLO continued over the years thanks to the contributions of volunteers of Snail Aid, a non-profit organization, and students of a secondary technical high school in Genova, Italy, who devoted part of their time to improving the machine and building parts of it in the school mechanical workshop. DISARMADILLO is, in fact, conceived to be appropriate to the local context, and thus its components have to be suitable for production in non-specialized workshops. In 2021, the DISARMADILLO+ project was approved, assuring a push forward thanks to the collaboration between researchers of the Italian Institute of Technology and Snail Aid – Technology for Development (Figure 3). The core idea behind DISARMADILLO is to adapt power tillers to demining applications.

Figure 1. Results from the same demining machine. On the left, a test lane used at a test site in Germany with dummy antipersonnel (AP) mines, called WORM, buried 0 cm – 20 cm deep: 98.22 % neutralized. On the right-hand side, a suspected hazardous area in Angola: 10 AP mines (of type POMZ and PP-Mi-Sr) processed and left live and intact.

Figure 2. Size (top image, kg) and power (bottom image, hp) of demining machines currently on the market, with averages.
Power tillers are small agricultural machines widely used and commercially available in many mine-affected countries, and their second-hand market is widespread. They are easy to transport, as they are small and light, and they are available with different types of engines. The most powerful version (approximately 14 hp) is sturdy enough to be employed in several versatile tasks, from ground processing to vegetation cutting. Power tillers, also known as walking tractors, two-wheel tractors or iron buffalos, are of great importance to their nations' agricultural production and rural economies. They not only have rotovator attachments but also mouldboard and disc-plough attachments. Seeders and planters, even of the zero-till/no-till variety, can be attached. Reaper/grain harvesters and micro-combine harvesters are available for them. Also very important is their ability to pull trailers with cargoes of over two tons. The population of power tillers in developing countries is surprisingly high. China has the highest number, estimated to approach 16 million; Thailand has nearly 3 million, Sri Lanka 120,000, Nepal 15,000. Parts of Africa have begun importing Chinese tractors, and Nigeria may have close to 1,000. Many countries of central/eastern Europe also have significant populations of two-wheel tractors, as they have been sold there for agricultural use since the 1940s [21].

3.1. DISARMADILLO philosophy: open source
Among all the reasons that might be found behind the scarce employment of machines in humanitarian demining, in the authors' opinion the predominant one is the lack of information sharing. Often, researchers into new technologies for mine action do not have access to useful information generated in the field, which is treated as proprietary and not shared [22], except after an extensive and deep personal analysis, often involving field visits that generally require important resources to be committed to the cause.
At the same time, machine producers tend to market their products in the same way as military equipment, negotiating their sales, including price, in confidence. The lack of transparency of the market makes comparing the cost-efficiency of machines difficult, and the introduction of new systems is not perceived as necessary. Therefore, in order to create a favourable environment for more technologies to enter the demining technology market, the approach needs to change towards a more transparent, less donor-dependent and more cost-efficiency-oriented market. DISARMADILLO is intended to be an upgrade kit that can be mounted on any type of power tiller to transform it into a demining machine supporting manual deminers in their work. When not used in demining operations, DISARMADILLO can be reconverted to its original agricultural use and help secure food production. While the commercial off-the-shelf (COTS) components needed by the kit will be listed with prices and suggested purchasing sites, all components that need to be custom made will have their technical drawings available for free download from the internet. Potentially, a new machine could be built around any power tiller by anyone interested, with as few modifications as possible. Similar approaches are being successfully used by projects targeting electronics (Arduino) and heavier hardware (Open Source Ecology, or do-it-yourself (DIY) vehicles and drones). As in these well-known cases, the community of users would be asked to provide feedback on experiences with the machine and contribute to future developments. The idea of adopting an open-design business model for a mine action technology is provocative and runs counter to the current trends in the humanitarian mine action (HMA) market highlighted in the previous part of the paper; nevertheless, it is feasible and profitable. If required by customers, all the parts needed could also be delivered in a box to the customer.
If necessary, upon request from the customer, assembly of all components can also be offered as a service locally (as knowledge transfer) in the mine-affected country, together with training on the use of the machine. Thanks to its modularity, if the community devises new tools or components, old machines can be upgraded without having to jettison what works. This approach aims to challenge the traditional lack of information sharing in mine action and to increase the active participation of end users in the design and decision-making process. Positive implications are expected in terms of bridging the gap between the scientific and operational HMA communities, an increased level of competition, cost reduction and possibly the promotion of a closer integration with development.

Figure 3. DISARMADILLO evolution over time.

Figure 4. DISARMADILLO philosophy.

3.2. DISARMADILLO philosophy: versatility
DISARMADILLO is a robotic platform designed to carry different tools. Some have already been tested, some are designed and others are still in the form of ideas. Thanks to the open nature of the project, partners have tested tools locally, such as the vibrating sieve developed by Prof. Ross Macmillan in Australia. Figure 5 depicts the tools conceived for DISARMADILLO and available on the Snail Aid website (top image) and, as an example, the rake (bottom image). The rake is designed for ground processing in loose soils, where manual deminers use rakes to uncover the ground and expose mines. It penetrates the soil in front of the machine, cuts it and sieves it, lifting mines and leaving them aside for later collection by deminers. A prototype has been manufactured and successfully tested in Jordan with dummy mines.

3.3. DISARMADILLO philosophy: demining and agricultural purpose
As their job is to process the ground, agricultural machines originally conceived to work the soil can be efficiently employed in demining. Since landmines impact food security via six different and somewhat reinforcing mechanisms, including access denial, loss of livestock, land degradation, reduced workforce, financial constraints and aid dependency [23], it makes sense to introduce in mine-affected countries multi-purpose technologies that can serve not only demining but also food production. Agricultural technologies are mature, simple and easily repairable in every developing country in local, non-specialized workshops. The modularity of agricultural technologies is another advantage: the same tools can be mounted on different tractor units and replaced by dedicated agricultural tools when demining operations are over. Moreover, involving local technicians in re-designing new or improved technology helps reduce the dependency of local communities on donors' help and facilitates local human development. Empowerment is an integral part of many poverty reduction programmes. Helping individuals and communities to function as agents for improving their wellbeing is essential for promoting human development and human freedom. Empowerment should not depend only on state-funded resources and opportunities but also on citizens taking responsibility for self-improvement. The handover of all mine action activities to local entities, who can perform the majority of the work and gain skills while participating in the creation and maintenance of new agricultural technology for area reduction, is desirable and necessary. The development of sustainable agricultural technologies and their transfer and dissemination, under mutually agreed-upon terms, to developing countries is encouraged by the Food and Agriculture Organization (FAO) [18].
FAO also stresses the importance of supporting national efforts to foster the utilization of local know-how and agricultural technologies, and to promote agricultural technology research to increase sustainable agricultural productivity, reduce post-harvest losses and enhance food and nutritional security. Centres could be built with the double aim of renting out and servicing machines for both humanitarian demining and agriculture, therefore representing a major step toward the long-wished-for integration of demining and development and the transition to local ownership. By introducing facilities where agricultural tools can be adapted to demining activities, R&D in agriculture can also be supported. Machinery could be provided as and when needed, on a custom-hire basis, to the small and medium farmers who cannot afford to purchase their own. Similarly, in parallel with agricultural machines, the agro-service centres could also provide machines for technical survey, based on agricultural machines. They could develop the modifications required to effectively address the demining problem locally, then hire out these machines and provide assistance. As confirmed by current trends, today, in both developed and developing countries, the availability of human resources for farming is decreasing due to labour shortages, both for lack of interest from young people and for a weak or aging farming workforce: this means that a single worker (sometimes weak) is often in charge of large extensions of land. These factors influence the development of local agriculture and open a market share for the automation of small machines too (mass lower than three tons). Unlike heavy tractors, small machines with competitive costs cannot be effectively developed simply by modifying existing manually driven models: their architecture should be rethought for automation [24].

Figure 5. DISARMADILLO tools (top image) and a picture of the rake, tested in Jordan with dummy mines (bottom image).

4. DISARMADILLO architecture
The current version of DISARMADILLO (Figure 6.c) is built around a power tiller (Figure 6.b) produced by Grillo SpA (www.grillospa.it), which kindly donated it to the project, together with spare parts and suggestions. The technical features of the power tiller and of the constructed DISARMADILLO prototype are summarized in Figure 6.a. The kit adds to the original power tiller a frame (Figure 7), which has the dual aim of hosting two additional wheels at the front, with respect to the original driven wheels, and of embedding a track tensioning system. The frame is made of standard steel profiles, easy to build and maintain, requiring only cutting and welding operations. The agricultural tyres are replaced with special wheels designed to transmit motion to the tracks and support them along their width. The frame added to the power tiller is designed to host a winch and a sort of three-point linkage system, allowing different tools to be mounted at the front. The power take-off at the back of the machine can be used to power implements requiring an actuating torque. Being reversible, the machine can be used indifferently forwards or backwards. The machine is remotely actuated and driven by an industrial remote-control unit, allowing the major functions to be controlled remotely (Figure 8). The remote-control system does not replace the original manual controls; therefore, once reconverted to agricultural activities, the machine can go back to manual control. The platform rotates thanks to differential skid steering, i.e., by braking one of the two stub axles through which power is transmitted to the wheels through the differential gear. External band brakes mounted on the frame act on the stub axles.
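The differential braking scheme just described admits a very simple command mapping. The sketch below is a minimal illustration (function and signal names are hypothetical, not taken from the actual DISARMADILLO remote-control implementation) of how a single turn command in [-1, 1] could be split between the two band-brake actuators:

```python
def brake_commands(turn):
    """Map a turn command in [-1, 1] to (left, right) brake efforts.

    Skid steering by differential braking: a positive command brakes
    the right stub axle (turning the platform right), a negative one
    brakes the left. With turn = 0 both brakes are released and the
    differential drives both wheels equally.
    """
    if not -1.0 <= turn <= 1.0:
        raise ValueError("turn command must lie in [-1, 1]")
    left_brake = max(0.0, -turn)   # brake the left axle to turn left
    right_brake = max(0.0, turn)   # brake the right axle to turn right
    return left_brake, right_brake
```

Braking one side slows that track, so the platform yaws toward the braked side. Note that, unlike the independent hydraulic drive envisaged for DISARMADILLO+, this scheme can only remove speed from one side, never reverse it, which is why it cannot turn in place.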
a linear electric motor actuates each brake via cable, actuating a lever. power for the electrical motors is derived from the battery on board. the power take-off at the back of the powertiller, accessible through the frame, can be used to power tools that need a torque. 5. disarmadillo+ the disarmadillo machine has been subject to continuous research by snail aid and partners on a volunteer basis for the last fifteen years; its latest version was presented to the community during the 7th mine action technology workshop in basel in november 2018 raising considerable interest. the disarmadillo+ project will bring it to a higher level of maturity. major improvements are envisaged in terms of: • manoeuvrability and reliability, by actuating wheels with independent hydraulic motors and not any more through the differential gear powered by the internal combustion engine. each of the hydraulic motors, one per side of the machine, will be connected to a hydraulic pump actuated by the endothermic motor in a closed-loop circuit (figure 9). this new architecture would allow a narrower turn radius, and more efficient turning by actuating the two motors in opposite directions. moreover, it would allow driving the machine backwards without rotating it or changing configuration before operations start. • modularity, by splitting the frame into two parts, each portable by two persons, for easy conversion from one configuration (demining) to the other (agriculture). moreover, it is foreseen to change the points of attachment of the frame to the powertiller to reduce the number of ad-hoc flanges necessary a) b) c) figure 6. a) technical data of disarmadillo machine based on g131 powertiller produced by grillo. b) the original powertiller and c) disarmadillo as it was exhibited at the 7th mine action technology workshop in basel in 2019. figure 7. scheme of disarmadillo: red parts, frame, wheels, tracks and band brakes are added to the original powertiller (black). 
to adapt the kit to different types of power tillers, exploiting the power take-off, with a pass-through system, and the axles.

• human-machine interface, by improving the remote-control transmitter interface to expand its possibilities and make driving more intuitive.

• versatility, by investigating the possibility of blast-resistant tracks, building on the experience gained with the blast-resistant wheels developed for locostra, a larger machine for humanitarian demining based on a four-wheel tractor, developed by snail aid and other partners [25]. in fact, although explosive tests on a power tiller carried out in italy [26] recorded no damage to the drive train, the wheels were damaged, making maintenance necessary in case of an explosion. one solution to increase the machine's protection is to mount a front roller when the operating tool is mounted at the back. another option would be to design blast-resistant tracks that retain enough tractive integrity after an explosion underneath them for the machine to continue working, or at least to be withdrawn from the field for maintenance. research carried out in the 1970s [27] successfully exploited three design principles: shock absorption by the roadwheels (embedding circular epoxy-resin rings between the hub and the rim), an almost unbreakable chain of tractive effort, and sacrificial track pads designed to fly away. a possibility would be to combine these ideas with the shock absorption exploited in the locostra wheels, achieved thanks to solid rubber inner wheels embedded in steel frames, and to design solid rubber roadwheels and steel track elements allowing ventilation. disarmadillo+ will also offer the occasion to investigate new tools, such as a ground-driven/aerial platform: a drone-borne sensor platform connected to disarmadillo+.
the connection would be by tether or by another means allowing power to be transmitted from the ground rover to the drone, permitting long-lasting flights, and data to be transferred from the drone to the rover. importance will be given to keeping complexity and cost low. as generally acknowledged and well explained by hemapala [28], when the price-to-performance ratio is too high, robots are academic toys. to keep cost and complexity low, the machine will be automated gradually, according to needs; at the beginning, it will be remotely controlled. cost is also a key factor in the successful adoption of agrobots in developing countries, as stated in [18], which points out the need to design and offer technical solutions at a low (affordable) cost but with a high impact. the final shape of the disarmadillo+ machine is that of a remotely controlled tracked vehicle able to carry different tools. recently, there has been an increase in the supply of small remotely controlled platforms designed to perform agricultural tasks. it is interesting to analyse the machines of this type available on the market in terms of size and power, as was done for demining machines. these small agricultural machines are sold on a much more transparent market than that of demining machines, so their cost can be obtained from producers' websites or by browsing the internet, both for new products and second-hand ones. figure 10 reports an analysis of 25 machines of this type according to their size and power; for a few representative ones, the cost is also reported. as can be seen, the average size and power of these agricultural machines are much smaller than those of demining machines: their weight is almost an order of magnitude lower, and their rated power is approximately six times lower. the smallest of these agricultural machines, the rc-751, produced by the danish company timan, is comparable to disarmadillo+ in terms of weight and power.
apart from being designed with a different philosophy, not for operating in hazardous areas, and having a higher cost (it is sold at approximately 20 k€), it offers a good reference, showing that machines of such small size can successfully be employed in many agricultural tasks. moreover, the adoption of the tools available for the rc-751 could also be investigated for disarmadillo+.

figure 8. detail of the control unit: a) linear motors actuating the brakes and the clutch, b) transmitter and c) 3d model of the band-brake leverage.

figure 9. disarmadillo+ scheme.

6. conclusions

considering the increasing consensus that mine action should be regarded as a development activity, the current approach should change rapidly. the paper summarises some topics in this domain and introduces the design of a simple modular machine for assisting mine removal through ground processing and vegetation cutting. the tractor unit is chosen from the agricultural machines domain (power tillers), so as to ensure full consistency with local expertise and habits. minimising cost and sophistication is a primary objective of the project.

acknowledgement

andrea pinza, ceo of grillo spa, who keeps providing support, spare parts and suggestions, and who personally delivered the power tiller we have been working on so far, is gratefully acknowledged.

references

[1] landmine monitor report 2021, international campaign to ban landmines – cluster munition coalition (icbl-cmc), 2021, isbn 978-2-9701476-0-2.
[2] human rights watch, ukraine: russian cluster munition hits hospital, 25 february 2022. online [accessed 28 august 2022] https://www.hrw.org/news/2022/02/25/ukraine-russian-cluster-munition-hits-hospital
[3] hunger map 2021 – chronic hunger, world food program, 24 september 2021.
online [accessed 28 august 2022] https://reliefweb.int/sites/reliefweb.int/files/resources/wfp-0000132038.pdf
[4] a guide to mine action, fifth edition, gichd, geneva, march 2014, isbn 978-2-940369-48-5.
[5] e. e. cepolina, land release in action, the journal of erw and mine action 17(2) (2013), pp. 44-50. online [accessed 28 august 2022] https://commons.lib.jmu.edu/cisr-journal/vol17/iss2/16
[6] mechanical equipment used for demining operations catalogue, gichd. online [accessed 28 august 2022] https://www.gichd.org/en/resources/equipment-catalogue/equipments/?tx_solr%5bfilter%5d%5b3%5d=family%3a2
[7] a. w. dorn, eliminating hidden killers: how can technology help humanitarian demining?, stability: international journal of security & development 8(1) (2019), pp. 1-17. doi: 10.5334/sta.743
[8] e. e. cepolina, c. bruschini, k. de bruyn, providing demining technology end-users need, international workshop on robotics and mechanical assistance in humanitarian demining (hudem), tokyo, japan, 2005, pp. 9-14. online [accessed 30 august 2022] https://infoscience.epfl.ch/record/257002/files/2005_cepolina_providingdemtech_prochudem.pdf?ln=en
[9] f. curatella, p. vinetti, g. rizzo, t. vladimirova, l. de vendictis, t. emter, j. peterit, c. frey, d. usher, i. stanciugelu, j. schaefer, e. den breejen, l. gisslén, d. letalick, toward a multifaceted platform for humanitarian demining, 13th iarp workshop on humanitarian demining and risky intervention, belgrade, serbia, 2015. online [accessed 30 august 2022] https://www.foi.se/download/18.7fd35d7f166c56ebe0b1008e/1542623794109/toward-a-multifaceted-platform_foi-s--5243--se.pdf
[10] v. ruban, l. capineri, t. bechtel, g. pochanin, p. falorni, f. crawford, t. ogurtsova, l. bossi, automatic detection of subsurface objects with the impulse gpr of the ugo-1st robotic platform, 2020 ieee ukrainian microwave week (ukrmw), 2020, pp. 1108-1111. doi: 10.1109/ukrmw49653.2020.9252816
[11] y.
baudoin, tiramisu: fp7-project for an integrated toolbox in humanitarian demining, focus on ugv, uav, technical survey and close-in detection, int. conf. on climbing and walking robots, sydney, aus., 2013. [12] d. portugal, l. marques, m. armada, deploying field robots for humanitarian demining: challenges, requirements and research trends, mobile service robotics (2014), pp. 649-656. doi: 10.1142/9789814623353_0075 [13] p. santana, j. barata, l. correia, sustainable robots for humanitarian demining, int. journal of advanced robotic systems 4(2) (2007), pp. 207-218. doi: 10.5772/5695 [14] p. gonzalez de santos, j. a. cobano, e. garcia, j. estremera, m. armada, a six-legged robot-based system for humanitarian demining missions, mechatronics 17(8) (2007), pp. 417-430. doi: 10.1016/j.mechatronics.2007.04.014 [15] m. freese, t. matsuzawa, t. aibara, e. f. fukushima, s. hirose, humanitarian demining robot gryphon – an objective evaluation 1(3) (2008), pp. 735-753. doi: 10.21307/ijssis-2017-317 [16] imas 09.50 mechanical demining, unmas, 2013. online [accessed 28 august 2022] https://www.mineactionstandards.org/fileadmin/mas/docume nts/standards/imas-09-50-ed1-am4.pdf [17] smallholders and family farming. in: family farming knowledge platform. fao, rome. online [accessed 28 august 2022] http://www.fao.org/family-farming/themes/small-familyfarmers/en/ [18] agriculture 4.0 – agricultural robotics and automated equipment for sustainable crop production, fao, integrated crop management 24 (2020). online [accessed 28 august 2022] http://www.fao.org/3/cb2186en/cb2186en.pdf figure 10. power, size and cost of remotely controlled agricultural multi platforms currently on the market. in the last graph green bars indicate cost. 
[19] e. e. cepolina, power tillers for demining in sri lanka: participatory design of low-cost technology, in humanitarian demining. london, united kingdom: intechopen. doi: 10.5772/5422
[20] e. e. cepolina, powertillers and snails for humanitarian demining: participatory design and development of a low-cost machine based on agricultural technologies [unpublished doctoral thesis], university of genova, 2008.
[21] wikipedia, "two-wheel tractor". online [accessed 28 august 2022] https://en.wikipedia.org/wiki/two-wheel_tractor
[22] j. lokey, it's mine and you can't have it, journal of mine action 4(2) (2000). online [accessed 28 august 2022] https://commons.lib.jmu.edu/cisr-journal/vol4/iss2/40
[23] h. garbino, the impact of landmines and explosive remnants of war on food security: the lebanese case, the journal of conventional weapons destruction 23(2) (2019). online [accessed 28 august 2022] https://commons.lib.jmu.edu/cisr-journal/vol23/iss2/6
[24] e. e. cepolina, m. przybyłko, g. b. polentes, m. zoppi, design issues and in-field tests of the new sustainable tractor locostra, robotics 3 (2014), pp. 83-105. doi: 10.3390/robotics3010083
[25] g. a. naselli, g. polentes, e. e. cepolina, m. zoppi, a simple procedure for designing blast resistant wheels, procedia engineering 64 (2013), pp. 1543-1551. doi: 10.1016/j.proeng.2013.09.236
[26] e. e. cepolina, m. u. hemapala, power tillers for demining: blast test, international journal of advanced robotic systems 4(2) (2007), pp. 253-257.
online [accessed 28 august 2022] https://journals.sagepub.com/doi/pdf/10.5772/5690
[27] r. zumbro, mine resistant tracks, armor (1997), pp. 16-20. online [accessed 28 august 2022] https://books.google.it/books?id=z60raaaayaaj&pg=ra1-pa19&lpg=ra1-pa19&dq=blast+resistant+tracks&source=bl&ots=daucbdm9cv&sig=acfu3u142_8sw8rxkyh0mpuwg4xkfciwaq&hl=en&sa=x&ved=2ahukewjai8vm4yh2ahuvq_edhrzabdgq6af6bagmeam#v=onepage&q=%20tracks&f=false
[28] m. u. hemapala, robots for humanitarian demining, in robots operating in hazardous environments. london, united kingdom: intechopen, 2017. doi: 10.5772/intechopen.70246
development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study

acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, pp. 1-8

mitsuru sato1, mizuki narita2, naoya takahashi1, yohan kondo1, masashi okamoto1, toshihiro ogura2
1 department of radiological technology, school of health sciences, niigata university, niigata, japan
2 department of radiology, gunma prefectural college of health sciences, gunma, japan

section: research paper

keywords: infection control; sars-cov-2; eye-tracking manipulation; contactless device; radiographic console

citation: mitsuru sato, mizuki narita, naoya takahashi, yohan kondo, masashi okamoto, toshihiro ogura, development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study, acta imeko, vol. 11, no.
2, article 38, june 2022, identifier: imeko-acta-11 (2022)-02-38

section editor: francesco lamonaca, university of calabria, italy

received march 29, 2022; in final form june 8, 2022; published june 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: mitsuru sato, e-mail: mitu-sato@clg.niigata-u.ac.jp

1. introduction

an outbreak of severe acute respiratory syndrome coronavirus 2 (sars-cov-2) infection occurred in wuhan, china, in december 2019 [1]-[5]. since then, the virus has been transmitted worldwide, and consequently, the world health organization declared it a pandemic on march 11, 2020. this infection is transmitted primarily via droplet and contact routes [6]-[8]. currently, infection control measures, including social distancing, the use of masks or face shields, and frequent handwashing and disinfection, are important [9], [10], particularly in the medical field. hospitals with isolation wards for patients with coronavirus disease 2019 (covid-19) are taking various measures to prevent the transmission of infection. the environment where patients with covid-19 are treated is zoned into clean (cold zone), intermediate (warm zone), and unclean (hot zone) areas [11]-[13] to prevent hospital-wide infection. however, it is difficult to completely control infection despite such isolation measures [14]. chest radiography is important for the management of covid-19, and portable x-ray machines are used in isolation wards. in most hospitals, after imaging of patients in the isolation ward is completed, all areas that may have come into contact with the patient or been exposed to droplets, including the device, the flat panel detector, and the imaging console attached to the flat panel detector system, should be disinfected [13].
abstract: sterilization of medical equipment in isolation wards is essential to prevent the transmission of severe acute respiratory syndrome coronavirus 2 (sars-cov-2) infection. in particular, the radiographic console of portable x-ray machines requires frequent disinfection because it is regularly moved; this requires considerable infection control effort as the number of patients with coronavirus disease 2019 (covid-19) increases. this study aimed to evaluate a system facilitating noncontact operation of radiographic consoles for patients with covid-19, in order to reduce the need for frequent disinfection. we developed a noncontact operation system for radiographic consoles that uses a common eye tracker. we compared calibration errors with and without a face shield, and console operation by 41 participants was investigated. the calibration error of the eye tracker did not differ significantly between the two face shield conditions. all (n = 41) observers completed the console operation. pearson's correlation coefficient analysis showed a strong correlation (r = 0.92, p < 0.001) between the average operation time and the average number of misoperations. our system, which uses an eye tracker, can be applied even if the operator wears a face shield; thus, its application is important in preventing the transmission of infection.

however, medical staff can be mentally and physically exhausted during a pandemic, making it difficult to perform appropriate disinfection. moreover, disinfection may not have been strictly practiced even prior to the pandemic [15]-[17]. when several patients with covid-19 undergo imaging, the console should be disinfected to prevent secondary infections, such as methicillin-resistant staphylococcus aureus and vancomycin-resistant enterococci infections, even if imaging was conducted in the same ward.
nevertheless, it is difficult to disinfect a plastic bag covering a complex-structured medical device while wearing personal protective equipment. therefore, the radiographic console, which is touched frequently, can be a source of infection [18]. reducing the frequency of touching the imaging equipment, which can be achieved using contactless input devices, is important in addressing these issues. currently, several contactless devices are available; however, their use for protection against sars-cov-2 infection has not been reported. previous studies have assessed the use of contactless input devices to operate medical devices without touching them in the clinical setting [19]-[24], and such devices are effective in maintaining sterile rooms [19]. therefore, this study aimed to assess the use of an eye tracker, which does not require body movement, as a contactless input device. image display systems have been successfully manipulated via eye tracking during interventional radiology, allowing images to be paged and magnified using the observer's eye movements alone [25]. in this study, we applied such technology to develop a radiographic console operation system for infection control during portable x-ray imaging of patients with covid-19. face shields are used as personal protective equipment in the management of patients with covid-19, and they create an obstruction between the eye tracker and the eyes. therefore, we evaluated our operating system by assessing the calibration errors with and without a face shield, the average time required for console operation, and the average number of misoperations.

2. material and methods

2.1. development of a contactless operation system using an eye tracker

in this study, we used the tobii pceye mini (tobii, stockholm, sweden) as the eye tracker for our contactless operation system (figure 1).
this small, lightweight device has the following measurements: width, 169.5 mm; height, 17.8 mm; thickness, 12.4 mm; and weight, 59 g. it can be easily installed on the radiographic console used in portable x-ray systems. the usable distance from the eye detector ranges from 45 to 85 cm, the sampling rate of the eye tracker is 60 hz, and the recommended screen size is up to 19 in. we used a computer with the following specifications: windows 10 home 64 bit, intel core i7-6700hq central processing unit, and nvidia geforce gtx 960m; its screen size was 17 in (monitor size: width, 38.4 cm; height, 21.6 cm). the eye tracker can be connected easily via a universal serial bus; however, it requires prior calibration. the system provided with the tobii pceye mini was used for calibration. this system includes a function that informs the observer whether their position is appropriate for proper calibration; in addition, instructions are displayed on the screen regarding the locations to be gazed at, facilitating calibration. specifically, we entered the gaze detection range and gazed at seven points on the screen, i.e., centre, upper right, upper centre, upper left, lower right, lower centre, and lower left. the operator's gaze point was calibrated using the pupil centre corneal reflection method implemented in the eye tracker [26], which measures the corneal reflection point of the irradiated infrared light and the position of the pupil while each point is gazed at. we developed a contactless operation system for radiographic consoles that uses this eye tracker to prevent the transmission of sars-cov-2 infection. we used microsoft visual studio (microsoft, redmond, wa, usa) as the integrated development environment, c# (microsoft), and the nuget package tobii.interaction v.0.7.3 (tobii core sdk, tobii). the principle of the operation was based on the characteristics of eye movement.
the two basic types of eye movement are the saccade, a quick movement of the gaze, and the fixation, in which the gaze is maintained on the same spot [27], [28]. to detect the fixation state, the amount of gaze-point movement on the screen over 0.02 s was calculated as a vector. in this study, the system was considered to be in the saccade state if the amount of movement exceeded 200 pixels (at 0.02 cm per pixel, approximately 4 cm); an amount of movement of 200 pixels in 0.02 s can be considered a saccade [25]. if the amount of movement was below this threshold, the system was considered to be in the fixation state. the only commands required to operate the radiographic console are moving the cursor and clicking. therefore, we developed a method to move the cursor in accordance with the movement of the gaze point and to click when the fixation state is reached. figure 2 shows the use of the console operation system for imaging. in total, 41 students of radiological technology participated, and they were briefed in advance about the operation system. table 1 shows the characteristics of the observers.

figure 1. overview of the tobii pceye mini (tobii, stockholm, sweden).

2.2. considerations of the system

because eye tracking may not be possible when a face shield is worn, the developed system was tested with and without a face shield. we used a face shield (logi meister, osaka, japan) made of polyethylene terephthalate with the following measurements: height, 22 cm; width, 33 cm; and thickness, 0.25 mm. the distance from the eyeball to the face shield is approximately 4 cm (figure 3). we performed (1) a comparison of calibration errors between the two face shield conditions and (2) an analysis of console operation. in experiment 1, calibration of the eye detector was performed with and without a face shield. subsequently, the error between the actual gaze point coordinates on the screen and the detected gaze point coordinates on the screen was
measured at the following nine points on the monitor: top left, top, top right, left, centre, right, bottom left, bottom, and bottom right. a gazing point was displayed on the screen to measure the error (figure 4). the points at the four corners of the screen were displayed at a distance of 150 pixels in both the x and y coordinates from the screen edge (resolution: 1920 pixels × 1080 pixels) to assess the calibration error as close to the periphery as possible. the other five points were placed at the centre of the x and y coordinates. the coordinates of the mouse cursor while the operator was gazing at each point were measured five times, and the results were averaged to reduce measurement error due to nystagmus, a cyclic and involuntary oscillatory movement of the eyeball; averaging reduced the effect of this physiological nystagmus. the distance was then calculated from the obtained coordinates: the calibration error was defined, for each of the nine points, as the distance between the actual gaze point, i.e., the coordinates of the location gazed at by the observer, and the detected point coordinates.

figure 2. operation of the radiographic console used in isolation wards. real-time gaze analysis allows operations required in conducting examinations, such as clicking buttons based on gaze duration.

table 1.
characteristics of the participants (observer: eye condition). bare eyes: observers 1, 16, 18, 25, 26, 27. with glasses: observers 2, 12, 20, 38. with soft contact lenses: observers 3-11, 13-15, 17, 19, 21-24, 28-37, 39-41.

figure 3. image of the face shield made of polyethylene terephthalate (logi meister, osaka, japan) used in this study.

in experiment 2, the developed system was used to evaluate the operability of a console simulating the radiographic console attached to the flat panel detector system while the operator wore a face shield. this radiographic console simulated the one used in the clinical setting (console advance dr-id300cl; fujifilm, tokyo, japan) and was divided into the (1) patient selection (figure 5 a), (2) radiographic item confirmation (figure 5 b), and (3) radiographic screens (figure 5 c). the clicked locations are listed below.
• (1) patient selection screen: select a patient from the list (figure 5 a-1); select button (figure 5 a-2)
• (2) radiographic item confirmation screen: start examination button (figure 5 b-3)
• (3) radiographic screen: select the radiographic item (figure 5 c-4); re-imaging process (figure 5 c-5); add the same type of imaging (figure 5 c-6); click the end button (figure 5 c-7)

these buttons measured 26 cm × 1 cm, 3 cm × 1 cm, 4 cm × 2 cm, 10 cm × 2 cm, 2 cm × 2 cm, 2 cm × 2 cm, and 4 cm × 4 cm, respectively. the experimental procedure was based on the actual examination procedure, and the console was operated in the order of patient selection, confirmation of radiographic items, and operation of the radiographic screen (figure 6). the time between the start of the operation and the completion of clicking the end button was measured. the number of clicks on the screen was recorded, as was the number of clicks caused by accidental eye pauses. this procedure was performed five times by each observer.

2.3. statistical analysis

the data obtained in experiment 1 were the calibration errors at each of the nine positions under the two face shield conditions. we used the paired t test to investigate whether significant differences existed in the average calibration error of all observers, and whether significant differences existed in the average calibration error at each point for all observers. a value of p < 0.05 was considered statistically significant. the average operation time and the average number of misoperations for the radiographic console were obtained in experiment 2. moreover, we investigated whether the calibration errors obtained from experiment 1 were correlated with the average operation time and the average number of misoperations. furthermore, the correlation between the average operation time and the average number of misoperations was evaluated via pearson's product-moment correlation coefficient analysis.
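the gaze-based cursor and dwell-click principle described in section 2.1 can be sketched as follows. this is a minimal python illustration, not the authors' c# implementation; the 1.0 s dwell time used to trigger a click is an assumed parameter, since the paper does not state the gaze duration that produces a click.

```python
import math

SAMPLE_INTERVAL_S = 0.02    # gaze displacement evaluated every 0.02 s
SACCADE_THRESHOLD_PX = 200  # ~4 cm at the stated 0.02 cm-per-pixel scale

def classify_movement(prev, curr):
    """Classify one pair of consecutive gaze samples (pixel coordinates)
    as 'saccade' if the displacement exceeds 200 px, else 'fixation'."""
    displacement = math.hypot(curr[0] - prev[0], curr[1] - prev[1])
    return "saccade" if displacement > SACCADE_THRESHOLD_PX else "fixation"

def dwell_click(samples, dwell_s=1.0):
    """Return True once the gaze stays in the fixation state for dwell_s
    seconds of consecutive samples (the dwell-based 'click')."""
    consecutive = 0
    for prev, curr in zip(samples, samples[1:]):
        if classify_movement(prev, curr) == "fixation":
            consecutive += 1
            if consecutive * SAMPLE_INTERVAL_S >= dwell_s:
                return True
        else:
            consecutive = 0  # any saccade resets the dwell timer
    return False
```

in use, the cursor would simply follow the smoothed gaze point, and `dwell_click` would fire the button under the cursor once the fixation criterion is met.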
the resulting coefficient ranges from −1.0 to 1.0, with −1.0 and 1.0 representing a perfect negative and positive correlation, respectively. an absolute value of < 0.2 was defined as almost no correlation; 0.2-0.4, weak correlation; 0.4-0.7, medium correlation; and ≥ 0.7, strong correlation. a p value of < 0.05 indicated a significant correlation.

figure 4. illustration of the nine measurement points evaluated in this study. the calibration error was measured by calculating the difference in the coordinates between the detection and actual gaze points.

figure 5 a. components of the radiographic console used in this study (patient selection). the numerals in boldface indicate the button click steps.

figure 5 b. components of the radiographic console used in this study (radiographic item confirmation). the numerals in boldface indicate the button click steps.

figure 5 c. components of the radiographic console used in this study (radiographic screens). the numerals in boldface indicate the button click steps.

3. results

3.1. experiment 1

the average ± standard deviation (sd) calibration errors at all points for all observers were 1.22 ± 0.94 cm with a face shield and 1.19 ± 0.79 cm without a face shield (figure 7); no significant difference between the two face shield conditions was observed. the average calibration error at each point for all observers is shown in figure 8; the nine measurement points correspond to top left, top, top right, left, centre, right, bottom left, bottom, and bottom right. only measurement point 7 (bottom left) had a significantly larger calibration error in the no-face-shield condition. the average calibration error over all points for each observer is shown in figure 9.
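as a concrete illustration of the experiment 1 analysis, i.e. averaging five cursor readings per target and comparing the two face shield conditions with a paired t test, here is a minimal python sketch. the numbers in the docstrings are illustrative, not the study data; the 0.02 cm-per-pixel screen scale is taken from the text, and only the t statistic (not the p value) is computed here.

```python
import math
from statistics import mean, stdev

PX_TO_CM = 0.02  # screen scale stated in the paper: 0.02 cm per pixel

def calibration_error_cm(target_px, detected_px_samples):
    """Average the detected cursor positions (damping nystagmus) and
    return the Euclidean distance to the target point, in cm."""
    mean_x = mean(p[0] for p in detected_px_samples)
    mean_y = mean(p[1] for p in detected_px_samples)
    return math.hypot(mean_x - target_px[0], mean_y - target_px[1]) * PX_TO_CM

def paired_t_statistic(errors_with_shield, errors_without_shield):
    """Paired t statistic over matched calibration errors:
    t = mean(d) / (sd(d) / sqrt(n)) for paired differences d."""
    diffs = [a - b for a, b in zip(errors_with_shield, errors_without_shield)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```

the t statistic would then be compared against the student t distribution with n − 1 degrees of freedom to obtain the p value reported in the paper's analysis.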
There was no tendency for operation either with or without a face shield to yield consistently larger errors, although significant differences were observed for 16 of the 41 observers.

3.2. Experiment 2
Although the participants were students rather than radiologists, they were able to operate the console easily. There was no significant difference between the calibration error of the eye tracker with and without the face shield. The results of the experiment revealed that all observers (n = 41) were able to operate the console. The average operation time was 37.89 ± 24.22 s, and the average number of misoperations was 5.4 ± 4.1. Pearson's product–moment correlation coefficient analysis found a very strong positive correlation (r = 0.92, p < 0.001) between the average time required to complete the operation and the average number of misoperations (Figure 10 a). It found no correlation between the average operation time and the calibration error (r = 0.24, p = 0.13) (Figure 10 b), nor between the average number of misoperations and the calibration error (r = 0.28, p = 0.08) (Figure 10 c).

4. Discussion
No previous studies have utilized an eye tracker for infection control against SARS-CoV-2. Previous studies have reported methods for manipulating image display systems using motion sensors [19]-[24], but there are very few cases in which manipulation was achieved using an eye tracker. The use of an eye tracker in the present study is therefore state of the art.

Figure 6. Overall scheme of the experimental procedure.
Figure 7. Comparison of calibration errors between the use and nonuse of a face shield at all measurement points for all observers.
Figure 8. Calibration errors according to measurement point.
Figure 9. Calibration errors of all measurement points according to observer. Significant differences were detected between some observers.
Furthermore, the method is versatile and useful because it can be applied not only to SARS-CoV-2 but also to infection control against other viruses and pathogenic bacteria. The method proposed in this study can not only prevent the risk of contact infection but also save supplies and improve the time efficiency of disinfection. The mean calibration errors at all points for all observers did not differ significantly with and without the use of a face shield. Forty-one observers participated in this study, a relatively large number. One of the primary characteristics of eye trackers is that the four corners of the screen are prone to errors during calibration. In this study, we followed the guidance of the calibration system attached to the eye tracker, kept the distance between the computer screen and the observer constant, and calibrated the system with the same geometric arrangement. However, because the pupil centre corneal reflection method was used to detect the shape of the eyeball and the position of the reflection of infrared light on the eyeball, even a slight shift in position may have affected the calibration error. Nevertheless, poor operability at the four corners of the screen is not an uncommon outcome. In such a case, operability can be improved by placing the buttons closer to the centre and making them respond to gazing time or eye movement. The larger the calibration error, the larger the gap between the gazed position and the coordinates of the mouse cursor. Therefore, there was a possibility of accidental clicking on a position other than the button owing to calibration errors. Moreover, the larger the calibration error, the more difficult it was to align the click position with the gaze point, which could increase the average operation time. Pearson's product–moment correlation coefficient analysis showed a very strong correlation between the average operation time and the average number of misoperations.
However, there was no correlation between the average calibration error and either the average operation time or the average number of misoperations. Although it is natural for the operation time to increase with the number of misoperations, because the average calibration error does not correlate with the average number of misoperations, it is possible that a small calibration error, such as that in this study, does not have a significant impact on usability. Nevertheless, when we obtained feedback on the system's operability from observers after the experiment, several stated that the buttons were difficult to operate because of their small size, and that when they clicked on a position different from the one they were looking at, they were able to reach the button by moving their eyes according to the amount of the shift. Therefore, it is possible to reduce the effect of calibration error through the operator's effort, although the operation time will tend to increase because of misoperation. Despite the system's potential, its user interface needs to be improved before it can be operated for clinical use at the same level as current touch pad or touch panel operation. The average operation time was longer (20–80 s) than that of the same operation performed with a mouse (approximately 10 s). The eye tracker was affected by the calibration error, and the detection results showed coordinates slightly different from the actual gaze point. Therefore, even a calibration error at a level that would not be problematic in studies such as gaze analysis would be problematic in cases such as this one, which require detailed button operation on the imaging console. We considered that this was a result of the small size of the buttons. If the button size is smaller than the calibration error, then the console could be difficult to operate (Figure 11). This study showed that it is possible to operate imaging consoles while wearing a face shield.
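The relationship between button size and calibration error illustrated by Figure 11 can be expressed as a simple heuristic check. The rule below (a button is reliably clickable only if both of its dimensions exceed the calibration error) is our reading of the figure, not a criterion stated formally in the paper; the button list reproduces the seven sizes from Experiment 2:

```python
# Heuristic reading of Figure 11: a button can be clicked reliably by gaze
# only if both of its dimensions exceed the calibration error. This rule is
# our interpretation, not a criterion stated formally in the paper.
def button_is_operable(width_cm: float, height_cm: float,
                       calib_error_cm: float) -> bool:
    """True if the smaller button dimension exceeds the calibration error."""
    return min(width_cm, height_cm) > calib_error_cm

# The seven console buttons from Experiment 2 (width x height, cm).
buttons = [(26, 1), (3, 1), (4, 2), (10, 2), (2, 2), (2, 2), (4, 4)]
mean_error_cm = 1.22  # average calibration error with a face shield

operable = [button_is_operable(w, h, mean_error_cm) for w, h in buttons]
```

Under this reading, the two 1-cm-tall buttons fall below the mean calibration error, which is consistent with the observers' feedback that the small buttons were the hardest to operate.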
Although it is difficult to introduce this technology to the clinical setting immediately, it could be used in clinical practice in the near future because the usability of the radiographic console can be improved simply by increasing the size of the buttons. Furthermore, the observers in this study were students; the system we developed is useful in that it can be operated by observers who are not familiar with radiographic consoles. Taken together, the proposed manipulation method using an eye tracker for infection control is not yet ready for clinical use. However, because it can be used even when a face shield is worn, its clinical application is feasible with improvements in the radiographic console. It is necessary to improve the eye tracker and the UI in order to use the method in actual clinical situations. The first priority should be improvement of the UI; as mentioned above, the influence of calibration error can be reduced by increasing the button size. Although the observers determined their operating position according to the system's instructions during the experiment, operation remained possible even if the observer's position moved slightly. However, there were cases in which the gazing position could not be detected correctly when the operating position differed significantly from the position at calibration. As mentioned above, the usable distance of the eye tracker ranges from 45 to 85 cm, which is approximately equal to the distance from which touch panel and/or mouse operation is performed.

Figure 10. a) – c) Results of Pearson's product–moment correlation coefficient analysis of the manipulation experiment data obtained for each observer.
Figure 11. Effect of button size on the difficulty of operating our contactless system using an eye tracker. If the button size is larger than the calibration error, misoperation can be reduced.
Therefore, it is usable within the range of normal use. It was possible that the eye tracker would be unusable in the presence of natural light. However, the experiments in this study were conducted in a laboratory with natural light and under fluorescent lighting; in other words, they were conducted in an environment similar to a hospital room. In addition, eye trackers have been used in hospital rooms in previous studies, although those studies did not involve directly operating computers [29], [30]. Therefore, we believe that the system developed in this study can be used without problems in the isolation ward where it is planned to be used.

5. Conclusions
This study has demonstrated the feasibility of a contactless operation system for radiographic consoles using an eye tracker for SARS-CoV-2 infection control. The system developed in this study remains usable even when the operator wears a face shield. However, because of its long average operation time, a radiographic console designed for eye-tracker operation should be developed for daily clinical use. Our proposed method can thus be useful for controlling not only SARS-CoV-2 infection but also other infectious diseases in the future.

6. Acknowledgement
The authors thank the students of Gunma Prefectural College of Health Sciences who participated as study observers for their assistance in the measurements, as well as Haruka Hirasawa and Madoka Hasegawa of Gunma Prefectural College of Health Sciences for their helpful assistance.

7. Research ethics and patient consent
All procedures involving human participants were performed in accordance with the ethical standards of the relevant institutional and/or national research committee and the Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all participants.
This research, which involved the development of a medical device operation system (specifically an image display system) using a contactless device, was approved by the Ethical Review Committee of Gunma Prefectural College of Health Sciences (approval no. 2020-16). This work was not previously published in part or in its entirety.

8. Declaration of conflict of interests
The authors declare that there is no conflict of interest.

9. Funding
This research did not receive any specific grant from any funding agency in the public, commercial, or not-for-profit sector.

References
[1] C. Huang, Y. Wang, X. Li, (another 26 authors), Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China, Lancet 395 (2020), pp. 497–506. doi: 10.1016/S0140-6736(20)30183-5
[2] H. Shi, X. Han, N. Jiang, Y. Cao, O. Alwalid, J. Gu, Y. Fan, C. Zheng, Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study, Lancet Infect Dis. 20 (2020), pp. 425–434. doi: 10.1016/S1473-3099(20)30086-4
[3] X. Yang, Y. Yu, J. Xu, (another 13 authors), Clinical course and outcomes of critically ill patients with SARS-CoV-2 pneumonia in Wuhan, China: a single-centered, retrospective, observational study, Lancet Respir Med. 8 (2020), pp. 475–481. doi: 10.1016/S2213-2600(20)30079-5
[4] N. Chen, M. Zhou, X. Dong, (another 11 authors), Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study, Lancet 395 (2020), pp. 507–513. doi: 10.1016/S0140-6736(20)30211-7
[5] J. Shigemura, R. J. Ursano, J. C. Morganstein, M. Kurosawa, D. M. Benedek, Public responses to the novel 2019 coronavirus (2019-nCoV) in Japan: mental health consequences and target populations, Psychiatry Clin Neurosci. 74 (2020), pp. 281–282. doi: 10.1111/pcn.12988
[6] J. A. Otter, C. Donskey, S. Yezli, S.
T. Douthwaite, Transmission of SARS and MERS coronaviruses and influenza virus in healthcare settings: the possible role of dry surface contamination, J Hosp Infect. 92 (2016), pp. 235. doi: 10.1016/j.jhin.2015.08.027
[7] A. Wilder-Smith, C. J. Chiew, V. J. Lee, Can we contain the COVID-19 outbreak with the same measures as for SARS?, Lancet Infect Dis. 20 (2020), pp. 102. doi: 10.1016/S1473-3099(20)30129-8
[8] P. Pena, J. Morais, A. Q. Gomes, et al., Sampling methods and assays applied in SARS-CoV-2 exposure assessment, Sci Total Environ. 775 (2021), pp. 145903. doi: 10.1016/j.scitotenv.2021.145903
[9] L. Morawska, J. W. Tang, W. Bahnfleth, C. Viegas, How can airborne transmission of COVID-19 indoors be minimised?, Environ Int. 142 (2020), pp. 105832. doi: 10.1016/j.envint.2020.105832
[10] World Health Organization, Epidemic-prone and pandemic-prone acute respiratory diseases. Summary guidance: infection prevention & control in health-care facilities, Geneva: World Health Organization, 2007. Online [accessed 22 December 2021]. https://apps.who.int/iris/handle/10665/69793
[11] F. Ogawa, H. Kato, K. Sakai, K. Nakamura, M. Ogawa, M. Uchiyama, K. Nakajima, Y. Ohyama, T. Abe, I. Takeuchi, Environmental maintenance with effective and useful zoning to protect patients and medical staff from COVID-19 infection, Acute Med Surg. 7 (2020), pp. 536. doi: 10.1002/ams2.536
[12] K. Mimura, H. Oka, M. Sawano, A perspective on hospital-acquired (nosocomial) infection control of COVID-19: usefulness of spatial separation between wards and airborne isolation unit, J Breath Res. 15 (2021), pp. 042001. doi: 10.1088/1752-7163/ac1721
[13] P. An, Y. Ye, M. Chen, Y. Chen, W. Fan, Y. Wang, Management strategy of novel coronavirus (COVID-19) pneumonia in the radiology department: a Chinese experience, Diagn Interv Radiol. 26 (2020), pp. 200. doi: 10.5152/dir.2020.20167
[14] World Health Organization,
Infection prevention and control of epidemic- and pandemic-prone acute respiratory infections in health care, Geneva: World Health Organization, 2014. Online [accessed 22 December 2021]. http://apps.who.int/iris/bitstream/10665/112656/1/9789241507134_eng.pdf?ua=1
[15] D. Pittet, Improving compliance with hand hygiene in hospitals, Infect Control Hosp Epidemiol. 21 (2000), pp. 381–386. doi: 10.1086/501777
[16] E. Girou, F. Oppein, Handwashing compliance in a French university hospital: new perspective with the introduction of hand-rubbing with a waterless alcohol-based solution, J Hosp Infect. 48 (2001), pp. 55–57. doi: 10.1016/S0195-6701(01)90015-5
[17] D. Pittet, Compliance with hand disinfection and its impact on hospital-acquired infections, J Hosp Infect. 48 (2001), pp. 40–46. doi: 10.1016/S0195-6701(01)90012-X
[18] R. Pintaric, J. Matela, S. Pintaric, Suitability of electrolyzed oxidizing water for the disinfection of hard surfaces and equipment in radiology, J Environ Health Sci Eng. 13 (2015), 6 p. doi: 10.1186/s40201-015-0160-8
[19] J. H.
Tan, C. Chao, M. Zawaideh, A. C. Roberts, T. B. Kinney, Informatics in radiology: developing a touchless user interface for intraoperative image control during interventional radiology procedures, Radiographics 33 (2013), pp. 61–70. doi: 10.1148/rg.332125101
[20] G. C. S. Ruppert, L. O. Reis, P. H. J. Amorim, T. F. de Moraes, J. V. Lopes da Silva, Touchless gesture user interface for interactive image visualization in urological surgery, World J Urol. 30 (2012), pp. 687–691. doi: 10.1007/s00345-012-0879-0
[21] M. G. Jacob, J. P. Wachs, R. A. Packer, Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images, J Am Med Inform Assoc. 20 (2013), pp. 183–186. doi: 10.1136/amiajnl-2012-001212
[22] T. Ogura, M. Sato, Y. Ishida, N. Hayashi, K. Doi, Development of a novel method for manipulation of angiographic images by use of a motion sensor in operating rooms, Radiol Phys Technol. 7 (2014), pp. 228–234. doi: 10.1007/s12194-014-0259-0
[23] M. Sato, T. Ogura, Y. Yasumoto, et al., Development of an image operation system with a motion sensor in dental radiology, Radiol Phys Technol. 8 (2015), pp. 243–247. doi: 10.1007/s12194-015-0313-6
[24] A. Mewes, B. Hensen, F. Wacker, C. Hansen, Touchless interaction with software in interventional radiology and surgery: a systematic literature review, Int J Comput Assist Radiol Surg. 12 (2017), pp. 291–305. doi: 10.1007/s11548-016-1480-6
[25] M. Sato, M. Takahashi, H. Hoshino, T. Terashita, N. Hayashi, H. Watanabe, T. Ogura, Development of an eye-tracking image manipulation system for angiography: a comparative study, Acad Radiol (2020), pp. 1–10. doi: 10.1016/j.acra.2020.09.027
[26] Tobii, How do Tobii eye trackers work? Learn more with Tobii Pro, Stockholm: Tobii, 2015. Online [accessed 22 December 2020]. https://www.tobiipro.com/learn-and-support/learn/eye-tracking-essentials/how-do-tobii-eye-trackers-work/
[27] K. Holmqvist, M. Nyström, R. Andersson, R. Dewhurst, H.
Jarodzka, Eye Tracking: A Comprehensive Guide to Methods and Measures, Oxford: Oxford University Press, 2011, ISBN: 9780199697083, pp. 1–560.
[28] A. van der Gijp, C. J. Ravesloot, H. Jarodzka, M. F. van der Schaaf, I. C. van der Schaaf, J. P. J. van Schaik, Th. J. ten Cate, How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology, Adv Health Sci Educ Theory Pract. 22 (2017), pp. 765–787. doi: 10.1007/s10459-016-9698-1
[29] R. Bates, M. Donegan, H. O. Istance, J. P. Hansen, K.-J. Räihä, Introducing COGAIN: communication by gaze interaction, Univ Access Inf Soc. 6 (2007), pp. 159–166. doi: 10.1007/s10209-007-0077-9
[30] M. Debeljak, J. Ocepek, A. Zupan, Eye controlled human computer interaction for severely motor disabled children, in: Computers Helping People with Special Needs, ICCHP 2012, Lecture Notes in Computer Science, 7383 (2012), pp. 153–156.
On the design and characterisation of a microwave microstrip resonator for gas sensing applications

Giovanni Gugliandolo1, Davide Aloisio2, Giuseppe Campobello1, Giovanni Crupi3, Nicola Donato1
1 Department of Engineering, University of Messina, Italy
2 CNR ITAE, Messina, Italy
3 BIOMORF Department, University of Messina, Italy

ACTA IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, pp. 54–61

Section: Research paper
Keywords: microwaves; resonators; gas sensors; metrological evaluation; humidity
Citation: Giovanni Gugliandolo, Davide Aloisio, Giuseppe Campobello, Giovanni Crupi, Nicola Donato, On the design and characterisation of a microwave microstrip resonator for gas sensing applications, Acta IMEKO, vol. 10, no.
2, article 9, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-09
Section Editor: Ciro Spataro, University of Palermo, Italy
Received January 17, 2021; in final form May 4, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Giovanni Gugliandolo, e-mail: giovanni.gugliandolo@unime.it

1. Introduction
Nowadays, research interest in the development of sensors with extremely low power consumption is growing because of the increasingly demanding energy-saving requirements of the expanding market. This can be seen in the recent high demand for portable, battery-powered devices, often used in wireless sensor networks (WSNs) for industrial (e.g., harmful gas detection) [1], [2], healthcare (e.g., wearable or implantable devices) [3]-[6], and environmental (e.g., weather forecasting) [7]-[10] monitoring applications. Several sensor typologies have been investigated in order to achieve the best trade-off between performance and power consumption, with a focus on size, weight, and production costs. In this context, microwave devices are considered an attractive solution thanks to their interesting features in terms of cost, power consumption, and response time. They have been employed for materials characterization [11]-[13] as well as for gas sensing applications [14]. Microwave gas sensors can operate at room temperature without the need for a heater [15], [16]. Moreover, they are fully compatible with wireless technology, so they can be easily integrated into wireless smart nodes [17]-[19]. In particular, planar microstrip technology is widely employed in the fabrication of microwave components such as antennas, filters, and resonators.
Such devices are often used in sensing applications because of their low cost, easy fabrication, and good performance [20]-[24]. Microwave microstrip sensors are attractive especially for gas sensing applications, where the frequency-dependent dielectric properties of the sensing material are related to the adsorption of the target gas of interest onto the sensing layer deposited on the microstrip propagative structure. Progress in nanotechnologies has enabled advancements in gas sensors that use nanostructured materials as sensing layers [14], [25]-[27].

Abstract
This study focuses on the microwave characterisation of a microstrip resonator aimed at gas sensing applications. The developed one-port microstrip resonator, consisting of three concentric rings with a central disk, is coupled to a 50-Ω microstrip feedline through a small gap. A humidity sensing layer is deposited on this gap by drop-coating an aqueous solution of Ag@α-Fe2O3 nanocomposite. The operating principle of the developed humidity sensor is based on the change in the dielectric properties of the Ag@α-Fe2O3 nanocomposite when the relative humidity is varied. However, it should be underlined that, depending on the choice of the sensing material, different target gases of interest can be detected with the proposed structure. The frequency-dependent response of the sensor is obtained using the reflection coefficient measured from 3.5 GHz to 5.6 GHz, with relative humidity ranging from 0 %RH to 83 %RH. The variation in humidity concentration strongly affects the two resonances detected in the measured reflection coefficient. In particular, an increase in the humidity level lowers both resonant frequencies, which can therefore be used as sensing parameters for humidity monitoring purposes. An exponential function has been used to accurately model the two resonant frequencies as a function of the humidity.
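The exponential dependence of the resonant frequencies on relative humidity can be fitted with nonlinear least squares. The functional form `f0 - a*(exp(RH/tau) - 1)` and all the numbers below are illustrative assumptions; the paper states only that an exponential function models the data well:

```python
# Exponential model of resonant frequency vs. relative humidity.
# The functional form and every number here are illustrative assumptions;
# the paper states only that an exponential function fits the data well.
import numpy as np
from scipy.optimize import curve_fit

def model(h, f0, a, tau):
    """f_r(RH) = f0 - a * (exp(RH/tau) - 1): equals f0 for dry air
    and decreases increasingly fast as humidity rises."""
    return f0 - a * (np.exp(h / tau) - 1.0)

# Hypothetical humidity sweep (%RH) and first-resonance readings (GHz).
rh = np.array([0.0, 10.0, 25.0, 40.0, 55.0, 70.0, 83.0])
fr = model(rh, 3.70, 0.02, 50.0)  # synthetic, noise-free data

popt, _ = curve_fit(model, rh, fr, p0=(3.7, 0.01, 60.0))
f0_fit, a_fit, tau_fit = popt
```

Once fitted, the inverse of the model can serve as the sensor's calibration curve, mapping a measured resonant frequency back to a humidity estimate.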
Following on from the results of our previous study [28], we present here a thorough investigation of a one-port gas transducer based on a microwave microstrip resonator, which is validated as a humidity sensor by using an Ag@α-Fe2O3 nanocomposite as the sensing material. The experimental investigation focuses on both the magnitude and phase of the reflection coefficient (Γ) and the corresponding impedance (Z). In particular, we monitored the relative humidity over the broad range from 0 % to 83 % at room temperature, assessing the sensing performance of the developed gas transducer in terms of the changes in the frequency-dependent behaviour of Γ. As shown later in this paper, two dips are clearly visible in the magnitude of Γ for the proposed sensor, at approximately 3.7 GHz and 5.4 GHz, and they shift towards lower frequencies when the humidity level is increased. Hence, the resonant frequencies (fr1 and fr2) associated with the two dips observed in Γ can be directly used as humidity sensing parameters. To this end, a sensitivity-based investigation is developed in order to assess the sensing performance of the proposed microwave sensor for humidity monitoring applications. The humidity-dependent variations in the two resonant frequencies are accurately modelled by an exponential function.

This article is structured as follows. Section 2 is dedicated to the design of the microstrip resonator, which is based on three concentric rings with a central disk. This choice was made after a careful analysis of the performance of different resonator topologies through computer simulations. Compared to the traditional ring configuration [29], [30], the proposed topology improves the quality (Q-) factor and, thus, the detection process.
Section 3 is devoted to the development of the humidity sensor, which is based on an Ag@α-Fe2O3 nanocomposite as the sensing material. It is worth noting that the high porosity of the nanostructure enhances the interaction with water vapour, thereby leading to an improved humidity sensitivity. Section 4 describes the fitting of the measurements locally around the two observed resonances using a Lorentzian function. Section 5 is dedicated to the description of the setup for frequency- and humidity-dependent characterization and to the presentation of the experimental results. Finally, the conclusions are drawn in the last section.

2. Resonator design and simulation
The proposed gas transducer is based on a concentric rings microstrip (CRM) resonator acting as a propagative structure for the electromagnetic waves. This novel topology is composed of three concentric copper rings with a 6-mm copper central disk and a 50-Ω microstrip feedline coupled to the resonator through a 0.2-mm gap. The MATLAB Antenna Toolbox was used for the design process. As illustrated in Figure 1, four different resonator topologies were considered during the design step, based on computer simulations over the frequency range from 3 GHz to 6 GHz: the classic ring resonator, two concentric rings, three concentric rings, and three concentric rings with a disk in the middle. Starting from the traditional configuration, the coupling gap and the ring thickness were optimized in terms of Q-factor. Later, additional rings were added to the design with a constant spacing. Figure 3 shows the frequency-dependent behaviour of the magnitude of the simulated Γ for the four studied topologies.
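Before examining the simulated curves, it helps to recall the textbook first-pass sizing rule for a ring resonator, which full-wave simulation then refines: resonance occurs when the mean circumference equals an integer number of guided wavelengths. The radius and effective permittivity below are hypothetical, not values taken from the paper:

```python
# First-pass sizing of a microstrip ring resonator before full-wave
# simulation: resonance occurs when the mean circumference equals an
# integer number of guided wavelengths, 2*pi*r = n * lambda_g.
# The radius and effective permittivity below are hypothetical.
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def ring_resonance_ghz(mean_radius_mm: float, eps_eff: float,
                       n: int = 1) -> float:
    """f_n = n * c / (2*pi*r*sqrt(eps_eff)), returned in GHz."""
    r_m = mean_radius_mm * 1e-3
    return n * C / (2.0 * math.pi * r_m * math.sqrt(eps_eff)) / 1e9

# A ~7.3-mm mean radius on FR4 (eps_eff ~ 3.2) lands near the first
# simulated dip at about 3.7 GHz.
f1 = ring_resonance_ghz(7.3, 3.2)
```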
As can be observed, the computer simulations show that all investigated topologies have two resonances appearing in Γ, which can be detected as two marked dips occurring at about 3.7 GHz (dip 1) and 5.4 GHz (dip 2), respectively. The two dips are more clearly visible in Figure 2, where the observation of Γ is limited to the two narrow frequency bands around fr1 and fr2.

Figure 1. Illustration of the four studied resonator topologies: (a) traditional ring, (b) two concentric rings, (c) three concentric rings, and (d) three concentric copper rings with a central disk.
Figure 2. Behaviour of the magnitude of the simulated reflection coefficient versus frequency from 3.0 GHz to 6.0 GHz for the four studied resonator topologies.
Figure 3. Illustration of the two dips appearing in the magnitude of the simulated reflection coefficient for the four studied resonator topologies.

To assess the microwave performance of the studied resonator topologies, the quality factor improvement was evaluated by using the single-ring configuration as the reference for comparison. Figure 4 shows the Q-factor improvement for both resonances as a function of the number of concentric rings. It is worth noting that the selected topology (three rings with a central disk) achieves an improvement in the Q-factor of 6 % and 44 % at fr1 and fr2, respectively. The CRM resonator was fabricated on a 3.2-mm FR4 substrate [31], with copper as the conductor for both the top and ground layers, using the ProtoMat S103 PCB milling machine.
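The Q-factor comparison above reduces to two one-line formulas: the loaded Q of a resonance is its centre frequency divided by its −3 dB bandwidth, and the improvement is taken relative to the single-ring reference. A minimal sketch with hypothetical bandwidth values follows (the paper reports only the resulting 6 % and 44 % improvements, not the underlying bandwidths):

```python
# Loaded Q and Q-improvement relative to the single-ring reference.
# The bandwidth numbers are hypothetical; the paper reports only the
# resulting improvements (6 % at fr1 and 44 % at fr2 for rings + disk).
def q_factor(f_res_ghz: float, fwhm_ghz: float) -> float:
    """Loaded quality factor: centre frequency / -3 dB bandwidth."""
    return f_res_ghz / fwhm_ghz

def q_improvement_pct(q_topology: float, q_single_ring: float) -> float:
    """Percent Q improvement relative to the single-ring reference."""
    return 100.0 * (q_topology - q_single_ring) / q_single_ring

q_ref = q_factor(3.7, 0.050)         # hypothetical single-ring Q = 74
q_new = q_factor(3.7, 0.050 / 1.06)  # a 6 % narrower dip -> 6 % higher Q
improvement = q_improvement_pct(q_new, q_ref)
```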
the dielectric constant (εr) and the loss tangent (tan δ) of the substrate are 4.3 and 0.025, respectively. an sma connector was soldered at the end of the 50-ω microstrip feedline to connect the resonator with a vector network analyzer (vna) for measuring γ.

3. sensor development

to obtain the gas sensor, a sensing material was deposited on the surface of the propagative structure. in particular, an aqueous solution of ag@α-fe2o3 nanocomposite was deposited by drop coating on the gap between the external ring and the microstrip feedline. the description and synthesis of this humidity sensing material are reported in [32]. the effect of the sensing material deposition on the frequency-dependent behaviour of γ of the developed structure was measured from 3.5 ghz to 5.6 ghz using the agilent 8753es vna with a one-port calibration (short-open-load, agilent 85052 mechanical calibration kit). as shown in figure 5, both dips in γ become much more pronounced after deposition, improving the quality factor of both dips. for the sake of completeness, the real and imaginary parts of the resonator input impedance for the selected frequency ranges are reported in figure 6.

figure 4. analysis of the quality factor improvement of the two resonances observed in the simulated reflection coefficient as a function of the number of rings of the resonator structure.

4. resonator parameters evaluation

estimating the resonant frequency (fr), quality factor (q), and dip amplitude (ar) from a discrete frequency response is not a trivial task. a simple linear interpolation of the available discrete data can lead to an inaccurate estimation of these quantities, especially when the data are affected by noise. a better fitting approach consists of using a lorentzian function [33], [34], which
allows achieving a good estimation of the resonant parameters fr, q, and ar. a more accurate result can be achieved by using a complex function to fit both the real and imaginary parts of the spectrum [35], [36]. this technique can be useful in several applications in which the calibration procedure is impracticable (e.g., in cryogenic measurement systems) [36]. the frequency-dependent behaviour of the magnitude of γ of the microwave resonator was modelled as a lorentzian function:

\[ |\Gamma(f)| = c_0 - \frac{a_0}{\pi} \cdot \frac{\tfrac{1}{2}G}{\left(f - f_R\right)^2 + \left(\tfrac{1}{2}G\right)^2} \, , \qquad (1) \]

where f is the frequency, c0 and a0 are two real coefficients, and g is the full width at half maximum.

figure 5. behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient as a function of frequency, from 3.5 ghz to 5.6 ghz, for the studied resonator before (red lines) and after (blue lines) deposition of the sensing material.

figure 6. behaviour of the (a) real and (b) imaginary parts of the impedance as a function of frequency, from 3.5 ghz to 5.6 ghz, for the studied resonator before (red lines) and after (blue lines) deposition of the sensing material.
from equation (1), ar and q can be calculated respectively as:

\[ A_R = c_0 - a_0 \cdot \frac{2}{\pi G} \, , \qquad (2) \]

\[ Q = \frac{f_R}{\Delta f} = \frac{f_R}{G\sqrt{\sqrt{2} - 1}} \, , \qquad (3) \]

where δf is the resonator half-power bandwidth. the levenberg-marquardt algorithm was used for fitting the measured data points with the lorentzian function. it is found that the lorentzian curve fits the two observed resonant dips very well, so that it is possible to obtain a smooth behaviour of the magnitude of γ over a continuous spectrum of frequencies for the estimation of the resonant parameters. as an illustrative example, figure 7 reports the lorentzian fitting applied to the magnitude of the measured γ over a narrow frequency band around the second resonance. by using the fitting process, the parameters fr, q, and ar can be accurately estimated over the whole considered humidity range.

5. experimental results

the sensor was placed in a test chamber filled with a controlled atmosphere, where the electrical signal was supplied via an rf feed-through for connection with the agilent 8753es vna (see figure 8). the test chamber consists of a modified petri dish made of polystyrene, able to provide both a controlled atmosphere and good microwave propagation while avoiding signal perturbations. the developed sensor was characterized at seven different values of relative humidity, ranging from 0 %rh to 83 %rh, at room temperature. the 0 %rh nominal value was set by means of the certification of the gas bottles (0.5 %). the test gas mixture was set by means of a fully automated gas control system made up of a certified gas bottle and a bubbler inside a thermostatic bath. the system is equipped with an array of bronkhorst® mass flow controllers able to set a flux of 100 cm3/min in the test chamber, providing a fast set and purge for each test value of the humidity concentration. the diagram of the gas apparatus is shown in figure 9.
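as a concrete sketch of the section-4 procedure (equations (1)-(3) with levenberg-marquardt), the following fits a synthetic noisy dip; all numeric values are illustrative, not the paper's measurement data. scipy's curve_fit defaults to levenberg-marquardt for unbounded problems:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, c0, a0, g, fr):
    # eq. (1): model of |gamma(f)| around a single resonance
    return c0 - (a0 / np.pi) * (0.5 * g) / ((f - fr) ** 2 + (0.5 * g) ** 2)

# synthetic "measured" dip near the second resonance (frequencies in ghz);
# parameter values chosen only for illustration
rng = np.random.default_rng(1)
f = np.linspace(5.4682, 5.4694, 121)
y = lorentzian(f, 0.0, 6e-4, 4e-4, 5.4688) + rng.normal(0.0, 0.005, f.size)

p0 = (0.0, 3e-4, 2e-4, f[np.argmin(y)])        # rough starting guesses
(c0, a0, g, fr), _ = curve_fit(lorentzian, f, y, p0=p0)

ar = c0 - a0 * 2 / (np.pi * g)                 # eq. (2): dip amplitude
q = fr / (g * np.sqrt(np.sqrt(2) - 1))         # eq. (3): quality factor
print(fr, q, ar)
```

in practice the magnitude trace would be read from the vna over the narrow band around each dip before fitting.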
after performing a one-port calibration, the reflection coefficient was measured at each humidity condition.

figure 7. illustration of the lorentzian fitting (red line) of the magnitude of the measured (black line) reflection coefficient over a narrow frequency band around the second resonance for the studied resonator.

figure 8. illustration of (a) the sensor prototype placed in the test chamber and (b) the frequency- and humidity-dependent characterization procedure.

figure 9. illustration of the automated gas control and measurement system.

figure 10 and figure 11 illustrate the impact of the relative humidity on the measured behaviour of the complex reflection coefficient over two narrow frequency bands around the two observed dips, which were detected at approximately 3.7 ghz and 5.4 ghz. it can be seen that the size and the shape of the dips change significantly with the humidity values. it should be mentioned that humidity-dependent variations are observed in all three parameters fr, q-factor, and ar for both resonances. nevertheless, the q-factor and ar do not follow a clear monotonic trend (see figure 12 and figure 13). on the other hand, it is worth noting that both resonant frequencies decrease with increasing humidity level (see figure 14), thereby enabling the use of the two resonant frequencies as humidity sensing parameters. with the aim of evaluating the humidity sensing performance of the developed gas transducer over the whole investigated humidity range, we used an exponential function to fit the two resonant frequencies as a function of humidity:

\[ f_R = A \cdot \mathrm{e}^{-RH/B} + C \, , \qquad (4) \]

where fr represents the considered resonant frequency, rh is the relative humidity value, and a, b, and c are the fitting parameters.
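the calibration fit of equation (4) can be reproduced with the same levenberg-marquardt machinery; here the table-1 parameters for dip 2 are used to generate noiseless illustration data at the seven tested humidity values, and the residual-to-humidity conversion quoted later in the text is sketched as simple division:

```python
import numpy as np
from scipy.optimize import curve_fit

def calib(rh, a, b, c):
    # eq. (4): resonant frequency (mhz) versus relative humidity (%rh)
    return a * np.exp(-rh / b) + c

rh = np.array([0.0, 22.0, 28.0, 39.0, 54.0, 74.0, 83.0])   # tested values
fr = calib(rh, 2.92, 28.18, 5467.92)   # dip-2 parameters from table 1 (mhz)

(a, b, c), _ = curve_fit(calib, rh, fr, p0=(3.0, 30.0, 5468.0))
print(round(a, 2), round(b, 2), round(c, 2))   # recovers 2.92 28.18 5467.92

# converting a fit residual (khz) into an equivalent humidity error via the
# absolute sensitivity (khz/%rh), as done in the text for dip 1 and dip 2
print(round(200 / 26.4, 1), round(100 / 29.3, 1))   # -> 7.6 and 3.4 %rh
```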
the calibration curve for both dip 1 and dip 2 is depicted in figure 14(a); the fitting parameters are reported in table 1, while the calibration fit residuals are shown in figure 14(b).

figure 10. behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient over a narrow frequency band around the first resonance for the studied resonator, for seven relative humidity values.

figure 11. behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient over a narrow frequency band around the second resonance for the studied resonator, at seven relative humidity values.

figure 12. analysis of the quality factor of the two resonances observed in the measured reflection coefficient of the resonator as a function of the humidity.

figure 13. magnitude of the measured reflection coefficient of the resonator at the first (black) and second (blue) resonances as a function of the humidity.

for dip 1
residuals are almost within ± 200 khz, which, considering an absolute sensitivity of 26.4 khz/%rh, corresponds to ± 7.6 %rh. on the other hand, dip 2 exhibits a higher sensitivity (29.3 khz/%rh) with lower calibration fit residuals in comparison to dip 1: ± 100 khz, or ± 3.4 %rh. as an alternative, it is possible to use both dips for humidity detection, thereby reducing the measurement error and increasing the accuracy [37]. for the sake of completeness, the impact of the humidity variations is reported also for the impedance associated with the measured reflection coefficient, focusing on the two narrow frequency bands around the two dips. figure 15 and figure 16 show that a higher humidity implies that the real part decreases close to dip 1 and increases close to dip 2, whereas the imaginary part is shifted towards higher values in both frequency bands.

figure 14. calibration curve for both dips (a) and calibration fit residuals (b).

6. conclusions

a one-port microwave gas transducer was developed by coupling a microstrip resonator for electromagnetic wave propagation with an ag@α-fe2o3 nanocomposite for humidity

table 1. fitting parameter values with standard errors for the two observed dips.
parameter   dip 1 value   standard error   dip 2 value   standard error
a (mhz)     2.68          0.302            2.92          0.104
b (%rh)     37.21         10.349           28.18         2.485
c (mhz)     3699.51       0.3043           5467.92       0.088
r²          0.994                          0.956

figure 15. behaviour of the (a) real and (b) imaginary parts of the measured impedance over a narrow frequency band around the first resonance for the studied resonator, for seven relative humidity values.

figure 16. behaviour of the (a) real and (b) imaginary parts of the measured impedance over a narrow frequency band around the second resonance for the studied resonator, for seven relative humidity values.

monitoring purposes. the sensing performance of this prototype was established by monitoring relative humidity from 0 %rh to 85 %rh at room temperature. to this end, the sensor was placed in a test chamber consisting of a modified petri dish made of polystyrene.
by using a vna, the reflection coefficient was measured over the 3.5 ghz … 5.6 ghz frequency range under seven different conditions of relative humidity. it was observed that the frequency-dependent behaviour of the reflection coefficient exhibits two marked dips that change in intensity, broadness, and location when the relative humidity is varied. in particular, the two detected resonant frequencies progressively shift towards lower values with increasing humidity, enabling their use as effective sensing parameters. the humidity-dependent behaviour of the two resonant frequencies was accurately reproduced by using an exponential function. the sensitivity-based analysis showed that the higher resonant frequency is the parameter most sensitive to changes in relative humidity. finally, it should be highlighted that, although the reported analysis was limited to the humidity sensing application, the developed transducer can also be applied to the detection of different target gases by selecting an appropriate sensing material tailored to the specific sensing application.

references

[1] p. c. jain, r. kushwaha, wireless gas sensor network for detection and monitoring of harmful gases in utility areas and industries, 2012 sixth international conference on sensing technology (icst), kolkata, india, 18-21 dec. 2012, pp. 642-646. doi: 10.1109/icsenst.2012.6461759
[2] g. campobello, m. castano, a. fucile, a. segreto, weva: a complete solution for industrial internet of things, lect. notes in computer science, 10517 (2017) lncs, pp. 231-238. doi: 10.1007/978-3-319-67910-5_19
[3] g. campobello, a. segreto, s. zanafi, s. serrano, an efficient lossless compression algorithm for electrocardiogram signals, 2018 26th european signal processing conference (eusipco), rome, italy, 3-7 sept. 2018, pp. 777-781. doi: 10.23919/eusipco.2018.8553597
[4] a. darwish, a. e. hassanien,
wearable and implantable wireless sensor network solutions for healthcare monitoring, sensors 11(6) (2011), pp. 5561-5595. doi: 10.3390/s110605561
[5] x. fu, w. chen, sh. ye, y. tu, y. tang, d. li, h. chen, k. jiang, a wireless implantable sensor network system for in vivo monitoring of physiological signals, ieee transactions on information technology in biomedicine 15(4) (2011), pp. 577-584. doi: 10.1109/titb.2011.2149536
[6] g. gugliandolo, g. campobello, p. capra, s. marino, a. bramanti, g. lorenzo, n. donato, a movement-tremors recorder for patients of neurodegenerative diseases, ieee transactions on instrumentation and measurement 68(5) (2019), pp. 1451-1457. doi: 10.1109/tim.2019.2900141
[7] f. kiani, a. seyyedabbasi, wireless sensor network and internet of things in precision agriculture, international journal of advanced computer science and applications 9(6) (2018). doi: 10.14569/ijacsa.2018.090614
[8] g. campobello, a. segreto, s. zanafi, s. serrano, rake: a simple and efficient lossless compression algorithm for the internet of things, 2017 25th european signal processing conference (eusipco), kos, greece, 28 august - 2 september 2017, pp. 2581-2585. doi: 10.23919/eusipco.2017.8081677
[9] s. ullo, m. gallo, g. palmieri, p. amenta, m. russo, g. romano, m. ferrucci, a. ferrara, m. de angelis, application of wireless sensor networks to environmental monitoring for sustainable mobility, 2018 ieee international conference on environmental engineering (ee), milan, italy, 12-14 march 2018, pp. 1-7. doi: 10.1109/ee1.2018.8385263
[10] g. borrello, e. salvato, g. gugliandolo, z. marinkovic, n. donato, udoo-based environmental monitoring system, in: de gloria a. (ed.), applications in electronics pervading industry, environment and society, applepies, lecture notes in electrical engineering, 409 (2017), springer, cham. doi: 10.1007/978-3-319-47913-2_21
[11] p. österberg, m. heinonen, m. ojanen-saloranta, a.
mäkynen, comparison of the performance of a microwave-based and an nmr-based biomaterial moisture measurement instrument, acta imeko 5(4) (2016), pp. 88-99. doi: 10.21014/acta_imeko.v5i4.391
[12] m. scheffler, m. m. felger, m. thiemann, d. hafner, k. schlegel, m. dressel, k. ilin, m. siegel, s. seiro, c. geibel, f. steglich, broadband corbino spectroscopy and stripline resonators to study the microwave properties of superconductors, acta imeko 4(3) (2015), pp. 47-52. doi: 10.21014/acta_imeko.v4i3.247
[13] a. alimenti, k. torokhtii, n. pompeo, e. piuzzi, e. silva, characterisation of dielectric 3d-printing materials at microwave frequencies, acta imeko 9(3) (2020), pp. 26-32. doi: 10.21014/acta_imeko.v9i3.778
[14] g. gugliandolo, d. aloisio, s. g. leonardi, g. campobello, n. donato, resonant devices and gas sensing: from low frequencies to microwave range, proc. of 14th int. conf. telsiks 2019, 23-25 october 2019, nis, serbia, article n. 9002368, pp. 21-28. doi: 10.1109/telsiks46999.2019.9002368
[15] t. guo, t. zhou, q. tan, q. guo, f. lu, j. xiong, a room-temperature cnt/fe3o4 based passive wireless gas sensor, sensors 18(10) (2018), art. 3542. doi: 10.3390/s18103542
[16] k. staszek, a. szkudlarek, m. kawa, a. rydosz, microwave system with sensor utilizing go-based gas-sensitive layer and its application to acetone detection, sensors and actuators b: chemical, 297 (2019), art. 126699. doi: 10.1016/j.snb.2019.126699
[17] b. wu, x. zhang, b. huang, y. zhao, c. cheng, h. chen, high-performance wireless ammonia gas sensors based on reduced graphene oxide and nano-silver ink hybrid material loaded on a patch antenna, sensors 17(9) (2017), art. 2070. doi: 10.3390/s17092070
[18] g. gugliandolo, k. naishadham, n. donato, g. neri, v. fernicola, sensor-integrated aperture coupled patch antenna, 2019 ieee international symposium on measurements & networking (m&n), catania, italy, 8-10 july 2019, pp. 1-5. doi: 10.1109/iwmn.2019.8805023
[19] g. gugliandolo, k. naishadham, g.
neri, v. c. fernicola, n. donato, a novel sensor-integrated aperture coupled microwave patch resonator for humidity detection, ieee transactions on instrumentation and measurement 70 (2021), pp. 1-11. doi: 10.1109/tim.2021.3062191
[20] g. barochi, j. rossignol, m. bouvet, development of microwave gas sensors, sensors and actuators b, 157(2) (2011), pp. 374-379. doi: 10.1016/j.snb.2011.04.059
[21] m. h. zarifi, t. thundat, m. daneshmand, high resolution microwave microstrip resonator for sensing applications, sensors and actuators a: physical 233 (2015), pp. 224-230. doi: 10.1016/j.sna.2015.06.031
[22] d. aloisio, n. donato, development of gas sensors on microstrip disk resonators, procedia engineering 87 (2014), pp. 1083-1086. doi: 10.1016/j.proeng.2014.11.351
[23] z. marinković, g. gugliandolo, m. latino, g. campobello, g. crupi, n. donato, characterization and neural modeling of a microwave gas sensor for oxygen detection aimed at healthcare applications, sensors 20(24) (2020), art. 7150.
doi: 10.3390/s20247150
[24] g. gugliandolo, m. latino, g. campobello, z. marinkovic, g. crupi, n. donato, on the gas sensing properties of microwave transducers, 2020 55th international scientific conference on information, communication and energy systems and technologies (icest), niš, serbia, 10-12 sept. 2020, pp. 161-194. doi: 10.1109/icest49890.2020.9232765
[25] s. b. tooski, sense toxins/sewage gases by chemically and biologically functionalized single-walled carbon nanotube sensor based microwave resonator, j. appl. phys. 107 (2010), art. 014702. doi: 10.1063/1.3277020
[26] j. park, t. kang, b. kim, et al., real-time humidity sensor based on microwave resonator coupled with pedot:pss conducting polymer film, scientific reports 8 (2018), art. 439. doi: 10.1038/s41598-017-18979-3
[27] a. bogner, c. steiner, s. walter, j. kita, g. hagen, r.
moos, planar microstrip ring resonators for microwave-based gas sensing: design aspects and initial transducers for humidity and ammonia sensing, sensors 17(10) (2017), art. 2422. doi: 10.3390/s17102422
[28] g. gugliandolo, d. aloisio, g. campobello, g. crupi, n. donato, development and metrological evaluation of a microstrip resonator for gas sensing applications, proceedings of the 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, 2020, pp. 80-84. online [accessed 14 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-16.pdf
[29] m. t. jilani, w. p. wen, l. y. cheong, m. a. zakariya, m. z. u. rehman, equivalent circuit modeling of the dielectric loaded microwave biosensor, radioengineering 23(4) (2014), pp. 1038-1047. online [accessed 14 june 2021] https://www.radioeng.cz/fulltexts/2014/14_04_1038_1047.pdf
[30] d. l. k. eng, b. c. olbricht, s. shi, d. w. prather, dielectric characterization of thin films using microstrip ring resonators, microwave and optical technology letters 57(10) (2015), pp. 2306-2310. doi: 10.1002/mop.29321
[31] j. r. aguilar, m. beadle, p. t. thompson, m. w. shelley, the microwave and rf characteristics of fr4 substrates, iee colloquium on low cost antenna technology (ref. no. 1998/206), 1998, pp. 2/1-2/6. doi: 10.1049/ic:19980078
[32] a. mirzaei, k. janghorban, b. hashemi, a. bonavita, m. bonyani, s. g. leonardi, g. neri, synthesis, characterization and gas sensing properties of ag@α-fe2o3 core-shell nanocomposites, nanomaterials 5(2) (2015), pp. 737-749. doi: 10.3390/nano5020737
[33] p. j. petersan, s. m. anlage, measurement of resonant frequency and quality factor of microwave resonators: comparison of methods, journal of applied physics 84(6) (1998). doi: 10.1063/1.368498
[34] m. p. robinson, j. clegg, improved determination of q-factor and resonant frequency by a quadratic curve-fitting method, ieee trans. on electrom. comp. 47(2) (2005), pp. 399-402.
doi: 10.1109/temc.2005.847411
[35] g. gugliandolo, s. tabandeh, l. rosso, d. smorgon, v. fernicola, whispering gallery mode resonators for precision temperature metrology applications, sensors 21(8) (2021), art. 2844. doi: 10.3390/s21082844
[36] k. torokhtii, a. alimenti, n. pompeo, e. silva, estimation of microwave resonant measurement uncertainty from uncalibrated data, acta imeko 9(3) (2020), pp. 47-52. doi: 10.21014/acta_imeko.v9i3.782
[37] s. kiani, p. rezaei, m. navaei, dual-sensing and dual-frequency microwave srr sensor for liquid samples permittivity detection, measurement 160 (2020), art. 107805. doi: 10.1016/j.measurement.2020.107805

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 - 7
acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 1

photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy)

manuel j. h. peters 1,2, tesse d.
stek 3

1 department of applied science and technology, politecnico di torino, corso duca degli abruzzi 24, 10129 torino, italy
2 department of history, universidade de évora, largo dos colegiais 2, 7000-803 évora, portugal
3 royal netherlands institute in rome, via omero 10/12, 00197 roma, italy

section: research paper
keywords: photogrammetry; gis; landscape change; mediterranean archaeology; survey archaeology
citation: manuel j. h. peters, tesse d. stek, photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy), acta imeko, vol. 11, no. 4, article 12, december 2022, identifier: imeko-acta-11 (2022)-04-12
section editor: leila es sebar, politecnico di torino, italy
received april 26, 2022; in final form december 14, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: manuel j. h. peters, e-mail: manueljhpeters@gmail.com

1. introduction

pedestrian field survey currently faces paradoxical developments. in the rapidly deteriorating archaeological landscapes of the mediterranean as well as elsewhere, archaeologists are becoming ever more reliant on field survey data, especially those collected in earlier times with relatively good quality surface finds (so-called legacy datasets). at the same time, over the past decades there has been increasing attention to the various biases that may occur when sampling the archaeological record by pedestrian survey. initially, much research has targeted methodological biases, amongst others to correct for the varying visibility of archaeological surface material during the surveys.
this has led to a well-developed rigour in field survey practices, documenting the present field conditions as systematically and as thoroughly as possible [1]-[12]. the research presented in this paper focuses on one particular factor: geomorphological change over time. erosion and depositional processes, as well as incisive anthropic actions such as mining, all influence the location and preservation of the archaeological record. understanding geomorphological change can help to better assess the value of surface distributions of archaeological finds retrieved during field work and, at least in theory, also to assess the reliability of legacy survey datasets. historical aerial images have long been used to aid in the identification of archaeological features that have in the meantime been obscured or obliterated by more recent anthropic manipulations and/or natural events. more specifically, historical aerial photographs now also allow us to generate historical digital elevation models (hdems), historical orthophotos and 3d models of areas that have often been subjected to significant landscape change over the years. this can be done by using image-based modelling (ibm) techniques such as photogrammetry, often based on the principle of structure from motion (sfm). when a recent digital elevation model (dem) is subtracted from an earlier one, a map of the occurred geomorphological change can be extracted, thereby generating useful new data.

abstract: legacy data in the form of historical aerial imagery can be used to investigate geomorphological change over time. this data can be used to improve research about the preservation and visibility of the archaeological record, and it can also aid heritage management. this paper presents a composite image-based modelling workflow to generate 3d models, historical orthophotos, and historical digital elevation models from images from the 1970s. this was done to improve the interpretation of survey data from the early roman colony of aesernia. the main challenge was the lack of high-resolution recent digital elevation models and ground control points, and a general lack of metadata. therefore, spatial data from various sources had to be combined. to assess the accuracy of the final 3d model, the root mean square error (rmse) was calculated. while the workflow appears effective, the low accuracy of the initial data limits the usefulness of the model for the study of geomorphological change. however, it can be implemented to aid sample area selection when preparing archaeological fieldwork. additionally, when working with existing survey datasets, areas with a high bias risk resulting from post-depositional processes can be indicated.

the generation of hdems from historical aerial images is not always straightforward. common problems with the generation of hdems consist of poor or lacking metadata, low image resolutions, and the absence of contemporary gps ground control points (gcps). these factors can complicate generating new data from historical photographs. additionally, the comparison of data with different coverages, resolutions, and levels of precision can cause problems, which makes estimating the difference between two dems to investigate geomorphological change challenging. issues concerning hdem generation are often resolved by using gps gcps at locations that are visible in both historical and recent remote sensing data, or by creating new gcps from accurate (and often more recent) maps in a geographic information system (gis) and using those in the ibm software. an alternative is to co-register the created dems to ones that are currently available, as demonstrated by sevara et al. [13].
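for co-registered rasters on the same grid, the dem subtraction described above reduces to an element-wise difference; a minimal numpy sketch with made-up elevation values:

```python
import numpy as np

# two co-registered dems on the same grid (elevations in metres, made up):
# positive difference = deposition/accumulation, negative = erosion/removal
hdem = np.array([[402.0, 401.5],
                 [400.8, 400.2]])    # historical surface (1970s)
mdem = np.array([[401.2, 401.5],
                 [401.3, 400.0]])    # modern surface

change = mdem - hdem                 # geomorphological change map
print(change)
```

real dems would be read from raster files and resampled to a common grid first; the subtraction itself is unchanged.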
common workflows use ground control points that were recorded in the field with a gps, thereby achieving deviations of less than 10 cm [14]-[18], or use highly accurate recent dems for co-registration [19]-[22]. these resources were not available in the region under investigation; therefore, the objective of this study was to assess the feasibility of a composite workflow using legacy data that lacked any physical gcps. one potential source of error is the manual placement of the control points, in both gis and ibm software, since this mostly relies on the resolution of the images and the competency of the user. furthermore, the quality of the final result largely depends on the quality of the initial data. as previously mentioned, in the case of this research no gcps or high-resolution dem were available, which significantly complicated the ibm procedure. therefore, the usefulness of this procedure for further research and analysis was investigated by assessing its accuracy after several improvements. the landscape of italy has changed significantly over the past century [23], which has a considerable impact on the archaeological remains. this makes it a good location to investigate the various factors influencing the surface data obtained through pedestrian surveys. the landscapes of early roman colonization (lerc) project operates in this changing landscape. the research presented in this paper was carried out using data provided by the aesernia colonial landscape project, which was started in 2011 by dr. t. d. stek as an eu marie curie fellowship at glasgow university and subsequently, in 2013, continued in the larger framework of the lerc project, a collaboration between leiden university and the royal netherlands institute in rome (knir), funded by the netherlands organisation for scientific research (nwo) [24]. the lerc project investigates the early roman colonisation process in italy, mainly by pedestrian surveys and remote sensing [24].
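the accuracy assessment mentioned above is based on the rmse; one common formulation over n 3d checkpoints is the square root of the mean squared point-to-point distance. the coordinates below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

# model-derived checkpoint coordinates vs. reference positions
# (metres; x, y, z per row; hypothetical values)
model = np.array([[4.1, 2.0, 401.3],
                  [10.2, 7.9, 398.8],
                  [6.0, 5.1, 400.1]])
ref = np.array([[4.0, 2.2, 401.0],
                [10.0, 8.0, 399.2],
                [6.3, 5.0, 400.0]])

# rmse of the 3d point-to-point distances
rmse = np.sqrt(np.mean(np.sum((model - ref) ** 2, axis=1)))
print(round(rmse, 3))   # -> 0.392
```

per-axis rmse values (x, y, z separately) are also often reported; they follow from the same residual matrix without the sum over axes.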
this paper focuses on the landscape surrounding the latin colony of aesernia (263 bce), around the modern town of isernia, situated in the molise region in south-central italy (figure 1). in section 2 the study area and the various data sources utilised in this research will be discussed, and some of the associated issues mentioned. the next two sections describe the methods and the results, followed by a discussion highlighting some issues and limitations. finally, in the concluding section the main applications and limitations are stated. 2. study area and data the colony of aesernia was established in 263 bce; the territory had previously remained a relatively undocumented area within the molise, apart from the landscape research carried out in the 1980s [25]. in the past decade, extensive research has been done in this area by means of field survey, various types of aerial photography and remote sensing, and geophysics. significant issues in the region include modern urbanisation and changes in land use, which are happening at a fast pace around the town of isernia. therefore, the initial stages of the lerc project focused on the area around the town. although the current trend in mediterranean pedestrian survey focuses on the collection of off-site data in a smaller sample area, where fields are selected for their good visibility, the lerc project covered the majority of fields, including those with low visibility [26]. this variation in visibility can be attributed to factors such as differences in land use (e.g. freshly ploughed vs. overgrown) and geomorphological change. both are thought to have an impact on the collection and interpretation of pedestrian survey data, which is why they have been subject to further investigation. figure 1. lerc research area around aesernia [3], [26]. figure 2. aerial image coverage for 1970-71 around isernia.
for the present analysis, several data sources were used, including legacy data consisting of 53 aerial photographs from the isernia area (figure 2). these were produced in 1970-71 for cartographic purposes by the società per azioni rilevamenti aerofotogrammetrici (sara) [27]. the images had been digitised with a resolution of 300 dpi (1970 set) and 600 dpi (1971 set). the type of scanner and its parameters were unknown at the time of writing; therefore, possible errors related to factors such as resolution, distortion, and glare were unknown. the average flight height was not exactly known either, and neither were yaw, pitch, and roll, which meant that camera properties, including focal length, could not be used. these are typical issues related to the use of legacy data, since such data is usually not created for modern research purposes. additionally, a regional orthophoto of the isernia area from 2007 was provided by the geoportal of the regione molise (using the autogr tool developed by dr. gianluca cantoro). the tinitaly/01 dem released by the istituto nazionale di geofisica e vulcanologia (ingv) in 2007 was used as control dem. this is a composite dem that is commonly used in recent archaeological research [28]. in molise, the root mean square error (rmse) of the tinitaly dem (modern dem or mdem) is 3.76 m in the non-urban areas, and 4.51 m in the urban areas [29], [30]. a hillshade visualisation of this dem was used as base map for several figures (1, 2, 6, 9, 10) in this paper. agisoft metashape professional 1.5.2 build 7838 was used to create the various models using sfm; cloudcompare 2.10.2 was used for modifying and co-registering the point clouds. arcmap 10.4.0.5524 was used to run the gis procedure, and arcscene 10.4.0.5524 to visualise the data in 3d. all data was processed in the epsg:32633, wgs84 / utm 33n coordinate system. 3.
methods to accommodate the limitations of the original data, a composite workflow involving sfm, point cloud processing software, and gis was designed (figure 3). first, a preliminary model using 53 images from 1970-71 was built using sfm. although all images were of sufficient quality, they had borders that could leave traces during the creation of the 3d model and the dem, and therefore needed to be removed. this was done by manually going over every picture and creating a mask by selecting the area that had to be removed. areas that were of low quality because of glare were also removed in this way. once the preliminary model was created, gcps were placed in a gis environment on the resulting orthophoto. this was done by using features that appeared unchanged and were easily recognisable on the 1970-71 and 2007 orthophotos, such as corners of buildings and intersections of roads. the xy coordinates were obtained based on the 2007 orthophoto, and the elevation values were extracted from the mdem. this data was then imported into the sfm software, where the gcp locations had to be modified manually by keeping the sfm and gis software side by side (figure 4). the coordinates were given for each gcp in easting (m), northing (m), altitude (m). an accuracy of 3 meters was set, considering the resolution of the images when visually placing the markers in horizontal space for the xy coordinates, and the vertical accuracy of the dem07, which generated the elevation value. when all gcps and their coordinates were added, the process to generate the model was initiated again, starting with the alignment of the photos, this time using high accuracy, with a key point limit of 120000, and no tie point limit. this resulted in a sparse point cloud consisting of 387692 points. 
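the z-extraction step described above, taking an elevation value for a gcp placed on the 2007 orthophoto from the mdem grid, can be sketched as a bilinear sample of a raster at world coordinates. this is a simplified illustration, not the gis tooling actually used: the function name and the toy raster are assumptions, and real rasters carry a full affine transform rather than just an origin and cell size.

```python
import numpy as np

def sample_dem(dem, origin, cell, x, y):
    """extract an elevation for a gcp placed at world coordinates (x, y):
    convert to fractional raster indices using the grid origin (upper-left
    corner) and the cell size, then interpolate bilinearly between the
    four surrounding cells."""
    col = (x - origin[0]) / cell
    row = (origin[1] - y) / cell          # raster rows grow downwards
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    top = dem[r0, c0] * (1 - dc) + dem[r0, c0 + 1] * dc
    bot = dem[r0 + 1, c0] * (1 - dc) + dem[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr

# illustrative 2 x 2 grid with 10 m cells, upper-left corner at (0, 20)
dem = np.array([[0.0, 10.0], [20.0, 30.0]])
z = sample_dem(dem, origin=(0.0, 20.0), cell=10.0, x=5.0, y=15.0)
```

sampling at the cell-centre midpoint (5, 15) returns the average of the four corner values, which is the expected bilinear behaviour.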
then, the camera alignment was optimised, in order to achieve a higher accuracy and to correct for possible distortions, using the focal length, the radial distortion coefficients k1, k2, k3, and k4, the tangential distortion coefficients p1, p2, p3, and p4, and the affinity and skew (non-orthogonality) transformation coefficients b1 and b2 [20]. additionally, the point cloud variance was calculated. then, by using gradual selection, points with high reprojection error (above 0.25 m) were removed, as well as points with a reconstruction uncertainty above 10 m and with a projection accuracy below 2.5 m. the filtering steps resulted in a removal of 195061 points from the point cloud, with a final number of 192631 tie points. this filtering removed points that could otherwise result in outliers in the dem. figure 3. composite ibm workflow. figure 4. placing gcps in metashape (left) by comparing them to the image in arcmap (right). next, the bounding box was set to select the area that would be used in further processing. no clipping of the area was done at this stage, however, since the hdem would later be clipped by the research area in arcmap. after this, the dense cloud was built, using high quality and moderate depth filtering, to avoid removing more complex landscape features, and pixel colour was calculated. the mesh was built using the height field surface type, and the dense cloud as source data. the polygon count was set to high, in order not to lose any detail. interpolation was enabled, so that holes generated in the previous filtering steps could be filled. the next step was to build the texture. while this is not necessary to create an orthophoto or dem, it does provide a better 3d visualisation of the area. the mapping mode was orthophoto, the blending mode mosaic.
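the gradual-selection step above amounts to filtering a tie-point table against three thresholds. a minimal numpy sketch, applying the thresholds stated in the text literally; the variable names and sample metric values are illustrative only, not the study's data:

```python
import numpy as np

# hypothetical per-point quality metrics for four tie points
reproj_err = np.array([0.10, 0.30, 0.05, 0.20])   # reprojection error
recon_unc = np.array([4.0, 6.0, 12.0, 8.0])       # reconstruction uncertainty
proj_acc = np.array([3.0, 3.5, 4.0, 1.0])         # projection accuracy

# a point survives only if it passes all three criteria
keep = (reproj_err <= 0.25) & (recon_unc <= 10.0) & (proj_acc >= 2.5)
filtered_indices = np.flatnonzero(keep)
```

only the first point passes all three tests; the others each fail one criterion.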
in workflows containing black and white images, colour correction is often disabled, since this is generally only useful for data sets with extreme brightness variation. in this case however, the orthophoto would be used for the classification of the forested areas. this was primarily done based on pixel colour, which therefore had to be as accurate as possible. the hdem (figure 5) was built using the dense cloud, since this generally produces more accurate results, and has a shorter processing time. although the dem that was obtained had a resolution of 2 m, it was decided to export it at a 10 m resolution, in order to be comparable with the mdem. the orthomosaic was generated in the same reference system, using the mosaic blending mode, colour correction was not used. the orthophoto and hdem could then be exported as geotiff files. the no-data value was set at 255. ground points were classified using a 15 degrees maximum angle, 2 m maximum distance, and 25 m cell size. the ground point cloud (hdem) was then further modified in cloudcompare. an additional filtering step was carried out using statistical outlier removal, using 10 points for the mean distance estimation and 1.00 as the standard deviation multiplier threshold. duplicate points were removed as well, and the point cloud was downsampled to a 2.5 m resolution to speed up processing. then, the hdem was co-registered to the mdem using an iterative closest point (icp) algorithm. this coregistration procedure corrected the hdem, decreasing the tilt and modifying the scale to better fit the mdem. this resulted in a decrease in rmse on the whole hdem area from 13.95 m to 7.49 m. the hdem was then rasterised to a grid size of 10 metres to be comparable with the mdem. an additional compensation was carried out by measuring the deviation of the hdem in 101 new gcps in supposedly stable areas, interpolating these values with the application of kriging. 
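the statistical outlier removal applied above in cloudcompare (10 neighbours, standard deviation multiplier 1.00) can be sketched as follows. this is a brute-force reimplementation for illustration, assuming the usual definition of the filter; the function name and the toy cloud are not from the study.

```python
import numpy as np

def sor_filter(points, k=10, std_mult=1.0):
    """statistical outlier removal: for each point, compute the mean
    distance to its k nearest neighbours; reject points whose mean
    distance exceeds (global mean + std_mult * global std). brute-force
    pairwise distances, suitable only for small clouds in this sketch."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)             # column 0 is the self-distance
    mean_d = d_sorted[:, 1:k + 1].mean(axis=1)
    thresh = mean_d.mean() + std_mult * mean_d.std()
    return points[mean_d <= thresh]

# toy cloud: a dense 5 x 5 x 2 grid plus one far-away outlier
xs, ys, zs = np.mgrid[0:5, 0:5, 0:2]
cloud = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()]).astype(float)
cloud = np.vstack([cloud, [100.0, 100.0, 100.0]])
cleaned = sor_filter(cloud, k=10, std_mult=1.0)
```

the grid points have mean neighbour distances of roughly one unit, so the single distant point is the only one rejected.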
in this case, ordinary kriging was used with a linear semivariogram, since this model fitted the gcps best. the kriging window was set at 12 points, with a maximum distance of 7000 m. the resulting raster showed the deviation of the hdem across the research area, which was most significant near the edges of the model (as can be expected with this type of model), especially on the north and south sides. the deviation map was then subtracted from the hdem, and the mdem was subtracted from the compensated hdem, showing the geomorphological change in the area. in order to exclude areas with vegetation or buildings from the geomorphological change model, these were masked by a combination of automated classification based on the grey values of the historical orthophoto and manual adjustments of the mask. the resulting model shows both the positive (presumably sedimentation) and negative (presumably erosion) change in the landscape (figure 6). figure 5. hdem after cleaning, including gcp locations. figure 6. geomorphological change (erosion and sedimentation). 4. results the 3d model generated by the sfm procedure was sufficient for a visual study of the landscape surrounding isernia (figure 7). to determine its suitability for the assessment of geomorphological change, an accuracy assessment was carried out. the north-south and east-west profiles of the various dems were compared (figure 8), showing that the co-registration resolved a significant amount of tilt both in the north-south and east-west planes (dem70-71_coreg). the additional compensation using the deviation surface obtained with kriging resulted in another improvement (dem70-71_final). the rmse of the hdem relative to the mdem was calculated, and the total rmse was then obtained by combining the relative rmse and the mdem rmse (formulas (1), (2); table 1). the final hdem had an rmse of 5.87 m.
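the differencing step described above, subtracting the kriged deviation surface from the hdem and then subtracting the mdem, with vegetation and buildings masked out, can be sketched with a few small numpy grids. the arrays below are illustrative only, not the study's rasters.

```python
import numpy as np

# toy 2 x 2 grids standing in for the rasters described in the text
hdem = np.array([[500.0, 502.0], [498.0, 501.0]])       # historical dem
deviation = np.array([[1.0, -0.5], [0.0, 0.5]])         # kriged hdem deviation
mdem = np.array([[497.0, 503.0], [498.0, 500.0]])       # modern dem
mask = np.array([[False, False], [True, False]])        # true = vegetation/buildings

# positive values suggest sedimentation, negative values erosion
change = (hdem - deviation) - mdem
change[mask] = np.nan                                   # excluded from the change map
```

the masked cell carries no change value, so later statistics (e.g. an rmse over stable areas) should be computed with nan-aware functions such as np.nanmean.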
although this is a significant possible error when dealing with landscape change, it is mostly due to the original data quality. given that the original 2007 dem has an rmse between 3.76 m and 4.51 m in the molise region, and a resolution of 10 m, the 5.87 m rmse of the final hdem was deemed an acceptable result:

$\mathrm{RMSE}_{\mathrm{rel}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{hDEM}_i - \mathrm{mDEM}_i\right)^2}$ (1)

$\mathrm{RMSE}_{\mathrm{tot}} = \sqrt{\mathrm{RMSE}_{\mathrm{rel}}^2 + \mathrm{RMSE}_{\mathrm{mdem}}^2}$ . (2)

even though there are limits due to the rmse of the final hdem, the resulting geomorphological change map obtained by subtracting the mdem from the hdem serves as a useful indication of the changes around isernia. the resolution is not high enough to allow detection of more subtle changes; however, especially since in this procedure no physical gcps are necessary, it may be used to better select the sample areas during the planning of archaeological fieldwork. additionally, its results can assist the interpretation of the data collected by the pedestrian surveys. as such, it can help improve existing archaeological models, for example about site distribution, in a way not dissimilar to the soil map analysis by casarotto et al. [31]. a good example of the changing landscape in the area can be found south of the town of isernia, where a quarry has expanded dramatically in a few decades, leaving a lasting effect on the landscape (figure 9). the mask that was created to exclude forested areas from the geomorphological change map can also provide a visual indication of the rapidly changing land use in the area. plots of land are no longer being used for agriculture and have been abandoned and subsequently reforested, which can clearly be seen when comparing the situation in 1970-71 with the situation in 2007 (figure 10). 5. discussion an important challenge in the presented research was the need to use input from different sources to create the hdem, and the deviations related to this approach.
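formulas (1) and (2) can be sketched in a few lines of numpy. the sample arrays below are illustrative only, not the study's rasters; nodata cells are represented as nan and excluded from the sum.

```python
import numpy as np

def rmse_relative(hdem, mdem):
    """formula (1): rmse of the historical dem relative to the modern dem,
    computed over all overlapping valid cells (nan cells masked out)."""
    diff = hdem - mdem
    valid = ~np.isnan(diff)
    return float(np.sqrt(np.mean(diff[valid] ** 2)))

def rmse_total(rmse_rel, rmse_mdem):
    """formula (2): combine the relative rmse with the known rmse of the
    reference dem, assuming uncorrelated errors."""
    return float(np.sqrt(rmse_rel ** 2 + rmse_mdem ** 2))

# illustrative elevation values (metres), with one nodata cell
hdem = np.array([[402.0, 405.5], [410.2, np.nan]])
mdem = np.array([[400.0, 404.0], [409.0, 415.0]])
r_rel = rmse_relative(hdem, mdem)
r_tot = rmse_total(r_rel, 3.76)   # 3.76 m: non-urban tinitaly rmse in molise
```

this mirrors how table 1 is produced: the relative rmse comes from the cell-by-cell comparison, and the total rmse folds in the reference dem's own error.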
in order to create a georeferenced model, gcps with x, y, and z values are required. the historical aerial photos were not georeferenced, which is normally solved by using either gps gcps with xyz values and an error below 10.0 cm, or by utilising lidar data to co-register the hdem. this kind of data was not available for the current research, which heavily influenced the ibm workflow. due to data limitations, there are possible errors originating from the use of the 2007 orthophoto to extract xy values and the 2007 dem to extract z values, and combining these into gcps, rather than collecting gps coordinates in the field and importing these as markers. creating and placing gcps manually like this increases the chance of user error. since the procedure is based on a visual comparison of the recent and historical images with varying resolutions, it is possible to have an error of several metres in xy value. the current research focuses mostly on changes in z value, but the aforementioned bias should not be neglected, since a change in the horizontal plane can result in significant changes in elevation, especially in irregular terrain. figure 7. detail of 1970-71 3d model. orange line representing 5 km. figure 8. profile comparison between elevation models in north-south (top) and east-west (bottom) direction; profiles shown: dem07, dem70-71_orig, dem70-71_coreg, dem70-71_final. table 1. rmse values. before compensation: relative rmse 6.48 m, total rmse 7.49 m; after compensation: relative rmse 4.51 m, total rmse 5.78 m.
the accuracy of the final model depended on the resolution and accuracy of the 2007 orthophoto, the 2007 tinitaly dem, and the 1970-71 historical aerial photographs, as well as on the user error when placing the various gcps. while the resolution and accuracy of the 2007 dem were known, this was not the case for the 2007 orthophoto and the 1970-71 aerial images. further limitations were imposed because the properties of the scanner used to digitise the original images were unknown. therefore, scanner deformations may have been introduced, contributing to possible errors in the final model. this research attempted to correct some of these errors by applying filtering, co-registration, and compensation, leading to significant improvements. 6. conclusions the main goal of this research was the creation of a composite ibm workflow to build historical landscape models from historical aerial photographs using sfm, and to extract information about geomorphological change. by using data from other sources, such as a more recent orthophoto and dem, to obtain ground control points in gis that could then be entered into the sfm software, it was possible to create historical orthophotos and hdems. the hdem for 1970-71 was further filtered and co-registered to the mdem from 2007. subsequently, it was corrected by applying interpolation to create a vertical deviation surface from other gcps. this resulted in a compensated hdem from which the modern dem was then subtracted in order to observe geomorphological change, while areas with buildings or vegetation were masked. although there are severe limitations resulting from the quality of the initial data in this case study, the proposed composite workflow appears effective for the creation of more accurate historical 3d models and geomorphological change maps of rapidly evolving landscapes.
geomorphological change has an inherent duality, both positively (uncovering) and negatively (displacing/covering/destroying) influencing the visibility of the archaeological record. despite this, geomorphological change maps can be used to provide feedback for planning pedestrian surveys and to interpret their data critically, possibly assisting in the assessment of its accuracy and the improvement of archaeological models. author contribution manuel j. h. peters: conceptualisation, methodology, software, formal analysis, investigation, data curation, writing original draft, visualisation. tesse d. stek: validation, resources, writing – review and editing, supervision, project administration, funding acquisition. acknowledgement the first author would like to thank the lerc project for the data, and the royal netherlands institute in rome (knir) for the opportunity to spend several weeks in rome to work on this research in a stimulating environment. luisa baudino was of great help with the processing of the dem profiles. a significant portion of this work was carried out as part of an msc thesis [32] at the faculty of archaeology of leiden university, under the supervision of dr. karsten lambers. references [1] p. attema, two challenges for landscape archaeology, in p. attema, g. j. burgers, e. van joolen, m. van leusen and b. mater (eds), new developments in italian landscape archaeology. theory and methodology of field survey. land evaluation and landscape perception. pottery production and distribution, university of groningen, 13-15 april 2000, bar international series, vol. 1091, 2002, pp. 18-27. doi: 10.30861/9781841714691 [2] j. bintliff, p. howard, a. snodgrass, the hidden landscape of prehistoric greece, journal of mediterranean archaeology, vol.12, no. 2, 1999, pp. 139-168. doi: 10.1558/jmea.v12i2.139 [3] a. casarotto, spatial patterns in landscape archaeology. 
a gis procedure to study settlement organization in early roman colonial territories, phd thesis, leiden: leiden university press, 2018. isbn: 9789087283117. [4] r. c. dunnell, the notion site, in j. rossignol and l. wandsnider (eds), space, time, and archaeological landscapes, new york: plenum press, 1992, pp. 21-41. isbn: 978-03-064-4161-5. [5] h. feiken, dealing with biases: three geo-archaeological approaches to the hidden landscapes of italy, phd thesis, groningen: barkhuis, 2014. isbn: 978-94-914-3167-8. [6] r. francovich, h. patterson, extracting meaning from ploughsoil assemblages, oxford: oxbow books, 2000. isbn: 978-19-0018875-3. figure 9. geomorphological change due to mining activities. figure 10. masked forest (right), reforestation since 1970-71 (left). [7] j. garcía sánchez, method matters. some comments on the influence on theory and methodologies in survey based research in italy, in r. cascino, f. de stefano, a. lepone and c. m. marchetti (eds), trac 2016, proceedings of the twenty-sixth theoretical roman archaeology conference, sapienza university of rome, 16th-19th march 2016, rome: edizioni quasar, 2017, pp. 151-164. isbn: 978-88-7140-770-8. [8] c. haselgrove, inference from ploughsoil artefact samples, in c. haselgrove, m. millet and i. smith (eds), archaeology from the ploughsoil studies in the collection and interpretation of field survey data, sheffield: university of sheffield, 1985, pp. 7-29. isbn: 978-09-060-9054-1. [9] j. lloyd and g. barker, rural settlement in roman molise. problems of archaeological survey, in g. barker and r. hodges (eds), archaeology and italian society, oxford: bar international series, vol. 102, 1981, pp. 289-304. isbn: 978-08-605-4120-2. [10] m. b. schiffer, a. p. sullivan and t. c. klinger, the design of archaeological surveys, world archaeology, vol. 10, no.
1, 1978, pp. 1-28. doi: 10.1080/00438243.1978.9979712 [11] n. terrenato, the visibility of sites and the interpretation of field survey results: towards an analysis of incomplete distributions, in r. francovich, h. patterson, g. barker (eds), extracting meaning from ploughsoil assemblages. oxford: oxbow books, 2000, pp. 60-71. isbn: 978-19-001-8875-3. [12] p. m. v. van leusen, pattern to process: methodological investigations into the formation and interpretation of spatial patterns in archaeological landscapes, phd thesis, groningen: rijksuniversiteit groningen, 2002. [13] c. sevara, g. verhoeven, m. doneus, e. draganits, surfaces from the visual past: recovering high-resolution terrain data from historic aerial imagery for multitemporal landscape analysis, journal of archaeological method and theory, vol. 25, 2018, pp. 611-642. doi: 10.1007/s10816-017-9348-9 [14] y. c. hsieh, y. c. chan and j.c.hu, digital elevation model differencing and error estimation from multiple sources: a case study from the meiyuan shan landslide in taiwan, remote sensing, vol. 8, no. 3, 2016, pp. 199-220. doi: 10.3390/rs8030199 [15] s. ishiguro, h. yamano, h. oguma, evaluation of dsms generated from multi-temporal aerial photographs using emerging structure from motion–multi-view stereo technology, geomorphology, vol.268, 2016, pp. 64-71. doi: 10.1016/j.geomorph.2016.05.029 [16] c. sevara, top secret topographies: recovering two and threedimensional archaeological information from historic reconnaissance datasets using image-based modelling techniques, international journal of heritage in the digital era, vol. 2, no. 3, 2013, pp. 395-418. doi: 10.1260/2047-4970.2.3.395 [17] c. sevara, capturing the past for the future: an evaluation of the effect of geometric scan deformities on the performance of aerial archival media in image-based modelling environments, archaeological prospection, vol.23, no.4, 2016, pp. 325-334. doi: 10.1002/arp.1539 [18] j. vaze, j. teng, g. 
spencer, impact of dem accuracy and resolution on topographic indices, environmental modeling & software, vol. 25, no. 10, 2010, pp. 1086-1098. doi: 10.1016/j.envsoft.2010.03.014 [19] g. verhoeven, d. taelman, f. vermeulen, computer visionbased orthophoto mapping of complex archaeological sites: the ancient quarry of pitaranha (portugal-spain), archaeometry, vol. 54, no. 6, 2012, pp. 1114-1129. doi: 10.1111/j.1475-4754.2012.00667.x [20] g. verhoeven, f. vermeulen, engaging with the canopy multidimensional vegetation mark visualisation using archived aerial images, remote sensing vol. 8, no. 9, 2016, pp. 752-769. doi: 10.3390/rs8090752 [21] g. verhoeven, m. doneus, ch. briese, f. vermeulen, mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs, journal of archaeological science, vol. 39, no. 7, 2012, pp. 2060-2070. doi: 10.1016/j.jas.2012.02.022 [22] f. j. aguilar, m. a. aguilar, i. fernández, j. g. negreiros, j. delgado, j. l. pérez, a new two-step robust surface matching approach for three-dimensional georeferencing of historical digital elevation models’, ieee geoscience and remote sensing letters, vol. 9, no. 4, 2012, pp. 589-593. doi: 10.1109/lgrs.2011.2175899 [23] p. panagos, p. borelli, j. poesen, c. ballabio, e. lugato, k. meusburger, l. montanarella, c. alewell, the new assessment of soil loss by water erosion in europe, environmental science & policy, vol. 54, 2015, pp. 438-447. doi: 10.1016/j.envsci.2015.12.011 [24] t. d. stek, j. pelgrom, landscapes of early roman colonization: non-urban settlement organization and roman expansion in the roman republic (4th-1st centuries bc), tma, vol. 50, 2013, pp. 87. [25] g. chouquer, m. clavel-lévêque, f. favory, j.-p. vallat, structures agraires en italie centro-méridionale, cadastres et paysages ruraux. rome: publications de l'école française de rome, 1987. [26] t. d. stek, e. b. modrall, r. a. a. kalkers, r. h. van otterloo, j. 
sevink, an early roman colonial landscape in the apennine mountains: landscape archaeological research in the territory of aesernia (central-southern italy), in s. de vincenzo (ed), analysis archaeologica vol. 1, edizioni quasar, rome, italy, 2015, pp. 229-291. isbn: 978-88-7140-592-6. [27] http://www.saranistri.com, accessed on 13 april 2019. [28] s. tarquini, l. nannipieri, the 10 m-resolution tinitaly dem as a trans-disciplinary basis for the analysis of the italian territory: current trends and new perspectives, geomorphology, vol. 281, 2017, pp. 108-115. doi: 10.1016/j.geomorph.2016.12.022 [29] s. tarquini, i. isola, m. favalli, f. mazzarini, m. bisson, m. t. pareschi, e. boschi, tinitaly/01: a new triangular irregular network of italy, annals of geophysics, vol. 50, no. 3, 2007, pp. 407-425. doi: 10.4401/ag-4424 [30] s. tarquini, s. vinci, m. favalli, f. doumaz, a. fornaciai and l. nannipieri, release of a 10-m-resolution dem for the italian territory: comparison with global-coverage dems and anaglyph-mode exploration via the web, computers & geosciences, vol. 38, no. 1, 2012, pp. 168-170. doi: 10.1016/j.cageo.2011.04.018 [31] a. casarotto, t. d. stek, j. pelgrom, r. h. van otterloo and j. sevink, assessing visibility and geomorphological biases in regional field surveys: the case of roman aesernia, geoarchaeology, vol. 33, no. 2, 2017, pp. 177-192. doi: 10.1002/gea.21627 [32] m. j. h. peters, bypassing the bias. applying image-based modelling and gis to assess the effect of geomorphological change, topography, and visibility on survey data from the early roman colony of aesernia, italy, msc thesis, leiden: universiteit leiden, 2019.
applicability of multiple impulse-radar sensors for the recognition of a person’s action acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-7 acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1 applicability of multiple impulse-radar sensors for the recognition of a person’s action paweł mazurek1, szymon kruszewski1 1 warsaw university of technology, faculty of electronics and information technology, nowowiejska 15/19, 00-665 warsaw, poland section: research paper keywords: measurement data processing; impulse-radar sensors; position estimation; action recognition; healthcare citation: paweł mazurek, szymon kruszewski, applicability of multiple impulse-radar sensors for the recognition of a person’s action, acta imeko, vol. 12, no. 2, article 30, june 2023, identifier: imeko-acta-12 (2023)-02-30 section editor: eric benoit, université savoie mont blanc, france received july 17, 2022; in final form may 16, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: paweł mazurek, e-mail: pawel.mazurek@pw.edu.pl 1.
introduction it is expected that the share of the european population aged at least 65 years will reach 25% in 2050 [1]. the problem of organised care for elderly persons is, therefore, of growing importance. this, in turn, creates the demand for various technical solutions which could be applied for non-intrusive monitoring of elderly persons in home environments and healthcare facilities. the systems for monitoring of elderly persons are expected to predict and detect dangerous events, such as falls and harmful long lies after the falls. the falls of elderly persons are among the most frequent reasons for their admission and long-term stay in hospitals [2]. possible solutions that could be applied for non-intrusive monitoring of elderly persons are radar-based techniques – both narrow-band [3]–[8] and broad-band [9]–[13]. the most attractive feature of these techniques is the possibility of the through-the-wall monitoring of human activity. a review of the relevant literature, including articles in scientific journals and conference papers, which appeared in the years 2019–2022, has revealed that the vast majority of researchers use radar sensors of both types for estimation of the heart rate, breathing rate and position (in two dimensions), while the attempts to detect falls are based on sensors using the doppler principle – with two exceptions: • in [14] an attempt to detect falls on the basis of the three-dimensional movement trajectory obtained by means of three impulse-radar sensors is presented. in the reported approach, the monitored person's movement is compared with two model movements: a movement with a constant speed, and a movement towards the ground with an acceleration equal to the gravitational acceleration, and the classification of the movement is based on a reliability function. unfortunately, no systematic tests of the effectiveness of the method are presented in that paper.
• in [15] the results of simulation studies focused on the impact of the number of impulse-radar sensors on the accuracy of the estimation of the three-dimensional position are presented. unfortunately, the simulated setup is based on an unrealistic assumption that the sensors are placed in random locations within a monitored area. in the authors’ recent conference papers [16], [17], the results of the studies on the applicability of multiple impulse-radar sensors for estimation of the three-dimensional movement trajectories – which could be used for detection of dangerous events such as falls – are presented. these results show that the impulse-radar sensors may be used for accurate estimation of the three-dimensional position of a monitored person if the sensors are properly located within the monitored area. abstract: the research reported in this paper is devoted to the impulse-radar technology when applied for non-intrusive monitoring of elderly persons. specifically, this study is focused on a novel approach to the interpretation of data acquired by means of multiple impulse-radar sensors, leading to the determination of features to be used for the recognition of a monitored person’s actions. the measurement data are first transformed into the three-dimensional coordinates of the monitored person; next, those coordinates are used as a basis for determination of features characterising the movement of that person. the results of the experimentation, based on the real-world data, show that multiple impulse-radar sensors may be successfully used for highly accurate recognition of actions such as walking, sitting, and lying down, although this accuracy is significantly affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors within the monitored area.
In this paper, the applicability of impulse-radar sensors for the recognition of a person's actions, on the basis of the estimates of the three-dimensional movement trajectories, is investigated. The novelty of the research presented in this paper consists in an algorithmic basis for the recognition of the actions of a person in a monitoring system based on multiple impulse-radar sensors. The processing of the raw measurement data acquired by means of the impulse-radar sensors is divided into three stages: the transformation of the measurement data into the three-dimensional coordinates of the monitored person, the calculation of the features characterising the three-dimensional movement trajectories, and the classification of the movement trajectories. The usability of the proposed features is assessed in an experiment based on a set of real-world data sequences representative of three activities of daily living: walking, sitting and lying down. Moreover, the influence of the configuration of the impulse-radar sensors within the monitored area on the accuracy of the action recognition is investigated.

2. Estimation of movement trajectories

The measurement data used for the experimentation were acquired by means of six X4M02 impulse-radar sensors manufactured by Novelda [18], [19]. An exemplary data frame, acquired by means of one of these sensors, is shown in Figure 1. To properly estimate a three-dimensional movement trajectory of a monitored person, the measurement data acquired by means of the impulse-radar sensors have to be subjected to processing comprising [20]: the estimation of the parameters of the impulse-radar signal, the smoothing of several one-dimensional trajectories of the distance between the monitored person and the corresponding impulse-radar sensors, and the transformation of the smoothed distance trajectories into the three-dimensional movement trajectory.
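For illustration, the last of these stages – recovering the three-dimensional position from the per-sensor distance estimates – can be sketched as a linearised least-squares trilateration. This is a hypothetical minimal version, not the authors' implementation (which is described in [17]); only the sensor coordinates of Configuration #1, quoted from Section 3.1, are taken from the paper.

```python
# Minimal trilateration sketch (hypothetical): given sensor positions p_i
# and measured distances d_i, linearise ||p - p_i||^2 = d_i^2 against the
# first sensor and solve the resulting overdetermined linear system.

def trilaterate(sensors, dists):
    x1, y1, z1 = sensors[0]
    d1 = dists[0]
    rows, rhs = [], []
    for (xi, yi, zi), di in zip(sensors[1:], dists[1:]):
        # 2 p.(p_i - p_1) = ||p_i||^2 - ||p_1||^2 - d_i^2 + d_1^2
        rows.append([2 * (xi - x1), 2 * (yi - y1), 2 * (zi - z1)])
        rhs.append(xi * xi + yi * yi + zi * zi
                   - (x1 * x1 + y1 * y1 + z1 * z1) - di * di + d1 * d1)
    # normal equations A^T A p = A^T b (3 unknowns)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(3)]
    # solve the 3x3 system by Gauss-Jordan elimination with partial pivoting
    m = [row + [v] for row, v in zip(ata, atb)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Sensor positions of Configuration #1 (metres), from Section 3.1:
SENSORS = [(0.00, 1.70, 0.93), (0.00, 1.70, 1.43), (2.20, 1.70, 1.45),
           (2.20, 1.70, 0.95), (0.20, 4.50, 0.82), (2.00, 4.50, 0.83)]

if __name__ == "__main__":
    true_p = (1.1, 3.0, 1.2)  # assumed test position
    d = [sum((a - b) ** 2 for a, b in zip(true_p, s)) ** 0.5 for s in SENSORS]
    print(trilaterate(SENSORS, d))  # ~ [1.1, 3.0, 1.2]
```

With noise-free distances the linearised system is exactly consistent, so the true position is recovered; with noisy distances the same code returns the least-squares compromise among the six sensors.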
In the research presented here:
• The parameters of the impulse-radar signal have been estimated by means of a method consisting in computing the correlation function of the received signal and a known template of the emitted pulse, and estimating the coordinates of the maximum of this function [20].
• The distance trajectories have been smoothed by means of a method based on a weighted least-squares estimator, consisting in the approximation of a sequence of data by a linear combination of basis functions, with the number of these functions determined automatically [17].
• The three-dimensional movement trajectories have been obtained by means of a method consisting in solving a set of equations modelling the geometrical relationships between the three-dimensional coordinates of a person and the distances between that person and the impulse-radar sensors [17].

3. Methodology of experimentation

3.1. Acquisition of measurement data

The measurement data used for the experimentation, aimed at the assessment of the accuracy of the recognition of the monitored person's actions, were acquired by means of six impulse-radar sensors located at various positions. Two configurations of the sensors have been considered (see Figure 2):
• Configuration #1, according to which the impulse-radar sensors (R1, ..., R6) were located at positions whose x-, y- and z-coordinates (in meters) were, respectively: [0.00, 1.70, 0.93], [0.00, 1.70, 1.43], [2.20, 1.70, 1.45], [2.20, 1.70, 0.95], [0.20, 4.50, 0.82], [2.00, 4.50, 0.83];
• Configuration #2, according to which the impulse-radar sensors (R1, ..., R6) were located at positions whose x-, y- and z-coordinates (in meters) were, respectively: [0.20, 4.50, 0.82], [0.60, 2.65, 2.76], [0.00, 1.70, 0.93], [2.20, 1.70, 0.95], [2.00, 4.50, 0.83], [0.60, 3.31, 2.76].

Concurrently, the person was monitored by an infrared depth sensor being a part of the Kinect v2 device (cf.
[21] for the description of the methodology for preprocessing of data from depth sensors). The radar sensors and the depth sensor were synchronised, and their data-acquisition rate was set to 30 Hz. In the experimentation, three movement scenarios were considered:
• According to the first scenario, two persons walked along three predefined trajectories: an oval-shaped trajectory, a straight-line trajectory and a sine-shaped trajectory; each person repeated the action 10 times for each trajectory.
• According to the second scenario, two persons sat on a chair located in three different places within the monitored area; each person repeated the action 10 times for each position of the chair.
• According to the third scenario, two persons lay down on a mattress, approaching it from two different sides; each person repeated the action 15 times for each side of the mattress.

Thus, the whole programme of experimentation comprised the acquisition of:
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #1;
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #2;
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the depth sensor.

Figure 1. An example of raw measurement data.

3.2. Generation of features

In the experimentation, the features for classification have been determined on the basis of the sequences of the estimates of the z-coordinate of the position of the persons, i.e. {ẑ_n}, as well as the sequences of the estimates of the velocity and acceleration along the z-axis (i.e. the first and second derivatives of the z-coordinate, obtained by means of the forward-difference method), denoted by {v̂_z,n} and {â_z,n}, respectively.
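This step can be sketched minimally as follows. The z-values below are invented for illustration, and the paper's implementation operates on smoothed trajectories; only the 30 Hz sampling rate is taken from Section 3.1.

```python
# Forward-difference estimates of the vertical velocity and acceleration
# from a sequence of z-coordinate estimates sampled at a uniform rate.
# Hypothetical sketch: the z-values are invented.

def forward_diff(seq, dt):
    """First forward difference: (seq[n+1] - seq[n]) / dt."""
    return [(b - a) / dt for a, b in zip(seq, seq[1:])]

def mean(seq):
    return sum(seq) / len(seq)

def variance(seq):
    m = mean(seq)
    return sum((p - m) ** 2 for p in seq) / (len(seq) - 1)

dt = 1.0 / 30.0  # 30 Hz data-acquisition rate (Section 3.1)
z = [1.20, 1.19, 1.15, 1.05, 0.90, 0.74]  # assumed z-estimates in metres

v_z = forward_diff(z, dt)        # velocity along the z-axis
a_z = forward_diff(v_z, dt)      # acceleration along the z-axis
v_vert = [abs(v) for v in v_z]   # vertical speed |v_z| used by the features

# Two features of the kind later listed in Table 1:
sigma = variance(z) ** 0.5           # standard deviation of the z-coordinate
delta_v = max(v_vert) - min(v_vert)  # range of the vertical velocity
```

Note that each differentiation shortens the sequence by one sample, so the acceleration sequence is two samples shorter than the z-coordinate sequence.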
All the features are presented in Table 1. For the sake of simplicity, two operators have been introduced – the operator returning the empirical mean value of a data sequence {p_n}:

m[{p_n}] ≡ (1/N) Σ_{n=1}^{N} p_n , (1)

and the operator returning its empirical variance:

s²[{p_n}] ≡ (1/(N−1)) Σ_{n=1}^{N} (p_n − m[{p_n}])² . (2)

Moreover, the sequences of the velocity and of the acceleration in the vertical dimension are denoted by {v̂_v,n} ≡ {|v̂_z,n|} and {â_v,n} ≡ {|â_z,n|}, respectively.

3.3. Classification

In this study an error-correcting output codes (ECOC) classifier – suitable for multiclass classification problems – has been used [22]. The ECOC classifier has been based on multiple support vector machines (SVMs), each designed to distinguish between two selected actions. The implementation of the ECOC classifier available in the MATLAB Statistics and Machine Learning Toolbox [23] has been used for this purpose. Before the training of the classifier, the values of the features have been standardised. The performance of the classifier has been assessed using the 10-fold cross-validation technique. The assessment of the accuracy of the classification has been based on the inspection of:
• the receiver operating characteristic (ROC) curves illustrating the relationship between the true positive rate (TPR) and the false positive rate (FPR), i.e.
two indicators defined as follows:

TPR = TP / (TP + FN) , (3)

FPR = FP / (FP + TN) , (4)

where – for example, in the case of walking – TP (true positives) is the number of walks classified as walks, TN (true negatives) is the number of non-walks classified as non-walks, FP (false positives) is the number of non-walks classified as walks and FN (false negatives) is the number of walks classified as non-walks; the area under the ROC curve (AUC) is a single scalar value representing the performance;
• the confusion matrices visualising the results of the classification: each row of such a matrix represents the instances in an actual class, while each column represents the instances in a predicted class.

In the experiments based on the real-world data, the use of the approximations of the movement trajectories is necessary since their reference shapes cannot be properly defined: a human body has a considerable volume and generates complex echoes which cannot be attributed to any of its specific points (e.g. to the plexus solaris). An arbitrary choice of such a reference point would lead to an arbitrary definition of the systematic error, which could be misleading. Fortunately, such a definition is not necessary for the extraction of the features used for the classification of the actions of a monitored person – the features characterising the dispersion of the values of the z-coordinate, the vertical velocity and the vertical acceleration.

4. Results of experimentation

In Figure 2, examples of the estimates of the three-dimensional movement trajectories of a monitored person, obtained by means of the procedure described in Section 2 for two configurations of the impulse-radar sensors – together with their projections on the three two-dimensional planes – are shown; in Figure 3, the dispersion of a subset of the estimates of the z-coordinate, representative of walking, sitting and lying down, is presented.
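Before examining the classification results, the indicators of Equations (3) and (4) can be sketched numerically; the counts below are hypothetical and are not taken from the paper's experiments.

```python
# TPR and FPR from confusion counts, as in Equations (3) and (4).
# The counts below are hypothetical, not the paper's results.

def tpr_fpr(tp, fn, fp, tn):
    """Return (true positive rate, false positive rate)."""
    return tp / (tp + fn), fp / (fp + tn)

# Example: 58 of 60 walks recognised, 3 of 120 non-walks misclassified.
tpr, fpr = tpr_fpr(tp=58, fn=2, fp=3, tn=117)
print(round(tpr, 3), round(fpr, 3))  # 0.967 0.025
```

Sweeping the decision threshold of a classifier and plotting the resulting (FPR, TPR) pairs yields exactly the ROC curves shown in Figure 4.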
In Figure 4, the ROC curves obtained for the classification of all the three-dimensional movement trajectories are shown; the confusion matrices are presented in Figure 5. The analysis of the presented results leads to the following conclusions:
• Multiple impulse-radar sensors may be successfully used for estimation of the three-dimensional position of a moving person, although the configuration of those sensors has a significant influence on the uncertainty of the estimation. To properly estimate the height component of the position of a monitored person, a few impulse-radar sensors should be located at a greater height than the rest of those sensors (compare Figure 2c with Figure 2d, as well as Figure 2e with Figure 2f).

Table 1. The features used in the experimentation.

#   Feature
1   Standard deviation of the z-coordinate: σ = √(s²[{ẑ_n}])
2   Difference between extreme values of the z-coordinate: Δ = max[{ẑ_n}] − min[{ẑ_n}]
3   Mean vertical velocity: μ_v = m[{v̂_v,n}]
4   Maximum vertical velocity: v_max = max[{v̂_v,n}]
5   Standard deviation of the vertical velocity: σ_v = √(s²[{v̂_v,n}])
6   Difference between extreme values of the vertical velocity: Δ_v = max[{v̂_v,n}] − min[{v̂_v,n}]
7   Mean vertical acceleration: μ_a = m[{â_v,n}]
8   Maximum vertical acceleration: a_max = max[{â_v,n}]
9   Standard deviation of the vertical acceleration: σ_a = √(s²[{â_v,n}])
10  Difference between extreme values of the vertical acceleration: Δ_a = max[{â_v,n}] − min[{â_v,n}]

Figure 2. Examples of the estimates of the three-dimensional trajectories of a moving person, obtained for two configurations of the impulse-radar sensors: trajectories representative of walking (top row), of sitting (middle row), and of lying down (bottom row).
The movement trajectories obtained for Configuration #1 are depicted in the left column, while the movement trajectories obtained for Configuration #2 are depicted in the right column. Blue lines denote radar-data-based trajectories, grey lines denote depth-data-based trajectories, while blue triangles indicate the positions of the radar sensors.

Figure 3. The dispersion of the estimates of the z-coordinate of a moving person, obtained for two configurations of the impulse-radar sensors, for walking (a), sitting (b) and lying down (c).

Figure 4. The receiver operating characteristic (ROC) curves obtained for the classification of all the three-dimensional movement trajectories: radar-data-based trajectories obtained for Configuration #1 (a), radar-data-based trajectories obtained for Configuration #2 (b), depth-data-based trajectories (c).

• The configuration of the impulse-radar sensors affects the estimates of the coordinates of a monitored person. In the case of Configuration #2 the estimates of the z-coordinate are ca. 0.5 m greater than in the case of Configuration #1; moreover, the x-y projections of the trajectories obtained for Configuration #2 seem to be scaled-down versions of the analogous projections obtained for Configuration #1 (compare Figure 2a with Figure 2b). These discrepancies can be explained by the non-negligible volume of a human body: the impulse-radar sensors located at a greater height – and, therefore, oriented differently than the rest of those sensors – receive echoes reflected from different parts of the body of the monitored person.
Moreover, in the case of Configuration #1 the changes in the z-coordinate associated with the movement of the body towards the ground during sitting or lying down may not be properly reflected in the estimates of the movement trajectory (see Figure 2c and Figure 2d). This phenomenon may be explained by the fact that when the person is moving towards the ground, the changes in the distance between that person and the impulse-radar sensors placed on the sides of the monitored area are not significant. In the case of the impulse-radar sensors placed on the ceiling, these changes are much greater.
• The proposed features, characterising the monitored person's movement in the vertical dimension, are sufficient to recognise walking, sitting and lying down with high accuracy, although this accuracy is significantly affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors (compare Figure 5a with Figure 5b). The results of the classification of the three-dimensional movement trajectories obtained on the basis of the data acquired by means of the depth sensor are the best, because the depth sensor provides the most accurate estimates of the three-dimensional position of the monitored person. Nevertheless, the results of the classification of the trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #2 are only slightly worse, and can likely be improved by the application of more sophisticated methods for impulse-radar data processing.

5. Conclusions

The novelty of the research presented in this paper consists in an approach to the interpretation of measurement data acquired by means of impulse-radar sensors, leading to the determination of features to be used for the recognition of actions of three types: walking, sitting and lying down.
The data are first transformed into the three-dimensional coordinates of the monitored person; next, those coordinates are used as a basis for the calculation of kinematic features characterising the monitored person's movement in the vertical dimension. The results of the experimentation based on real-world data show that multiple impulse-radar sensors may be successfully used for highly accurate recognition of walking, sitting and lying down of a monitored person. It has to be noted, however, that the accuracy of the recognition is affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors. To properly estimate the height component of the position of the monitored person, a few impulse-radar sensors should be located at a greater height than the rest of those sensors. The results encourage the authors to focus on the development of methods for processing of the impulse-radar data enabling the detection of falls of the monitored person.

References

[1] United Nations, Department of Economic and Social Affairs, Population Division (2019), World Population Prospects 2019: Highlights. ST/ESA/SER.A/423. Online [accessed 15 March 2022] https://population.un.org/wpp/publications
[2] K. Chaccour, R. Darazi, A. H. E. Hassani, E. Andrès, From fall detection to fall prevention: a generic classification of fall-related systems, IEEE Sensors Journal, vol. 17, 2017, pp. 812-822. DOI: 10.1109/JSEN.2016.2628099
[3] M. G. Amin, Y. D. Zhang, F. Ahmad, K. C. Ho, Radar signal processing for elderly fall detection: the future for in-home monitoring, IEEE Signal Processing Magazine, vol. 33, 2016, pp. 71-80. DOI: 10.1109/MSP.2015.2502784
[4] O. Boric-Lubecke, V. M. Lubecke, A. D. Droitcour, Byung-Kwon Park, A. Singh, Eds., Doppler Radar Physiological Sensing. Hoboken (New Jersey): John Wiley & Sons, Inc., 2016.
[5] B. Y. Su, K. C. Ho, M. Rantz, M.
Skubic, Doppler radar fall activity detection using the wavelet transform, IEEE Transactions on Biomedical Engineering, vol. 62, 2015, pp. 865-875. DOI: 10.1109/TBME.2014.2367038
[6] A. Gadde, M. G. Amin, Y. D. Zhang, F. Ahmad, Fall detection and classifications based on time-scale radar signal characteristics, Proc. SPIE, vol. 9077 'Radar Sensor Technology XVIII', 2014. DOI: 10.1117/12.2050998
[7] D. Y. Wang, J. Park, H. J. Kim, K. Lee, S. H. Cho, Noncontact extraction of biomechanical parameters in gait analysis using a multi-input and multi-output radar sensor, IEEE Access, vol. 9, 2021, pp. 138496-138508. DOI: 10.1109/ACCESS.2021.3117985
[8] A.-K. Seifert, M. Grimmer, A. M. Zoubir, Doppler radar for the extraction of biomechanical parameters in gait analysis, IEEE Journal of Biomedical and Health Informatics, vol. 25, 2021, pp. 547-558. DOI: 10.1109/JBHI.2020.2994471
[9] S. Gezici, H. V. Poor, Position estimation via ultra-wide-band signals, Proceedings of the IEEE, vol. 97, 2009, pp. 386-403. DOI: 10.1109/JPROC.2008.2008840
[10] P. Mazurek, A. Miękina, R. Z. Morawski, Comparative study of three algorithms for estimation of echo parameters in UWB radar module for monitoring of human movements, Measurement, vol. 88, 2016, pp. 45-57. DOI: 10.1016/j.measurement.2016.03.025

Figure 5. Exemplary confusion matrices obtained for the classification of all the three-dimensional movement trajectories: radar-data-based trajectories obtained for Configuration #1 (a), radar-data-based trajectories obtained for Configuration #2 (b), depth-data-based trajectories (c).
[11] J. J. Zhang, X. Dai, B. Davidson, Z. Zhou, Ultra-wideband radar-based accurate motion measuring: human body landmark detection and tracking with biomechanical constraints, IET Radar, Sonar & Navigation, vol. 9, 2015, pp. 154-163. DOI: 10.1049/iet-rsn.2014.0223
[12] R. Herrmann, J. Sachs, M-sequence-based ultra-wideband sensor network for vitality monitoring of elders at home, IET Radar, Sonar & Navigation, vol. 9, 2015, pp. 125-137. DOI: 10.1049/iet-rsn.2014.0214
[13] H. Li, A. Mehul, J. Le Kernec, S. Z. Gurbuz, F. Fioranelli, Sequential human gait classification with distributed radar sensor fusion, IEEE Sensors Journal, vol. 21, 2021, pp. 7590-7603. DOI: 10.1109/JSEN.2020.3046991
[14] W. Khawaja, F. Koohifar, I. Guvenc, UWB radar based beyond wall sensing and tracking for ambient assisted living, in Proc. 14th IEEE Annual Consumer Communications & Networking Conf., Las Vegas, NV, USA, 8-11 January 2017, pp. 142-147. DOI: 10.1109/CCNC.2017.7983096
[15] T. J. Daim, R. M. A. Lee, Indoor environment device-free wireless positioning using IR-UWB radar, in Proc. 2018 IEEE Int. Conf. on Artificial Intelligence in Engineering and Technology, Kota Kinabalu, Malaysia, 8 November 2018, 4 pp. DOI: 10.1109/IICAIET.2018.8638458
[16] P. Mazurek, Choosing configuration of impulse-radar sensors in system for healthcare-oriented monitoring of persons, Measurement: Sensors, vol. 18, 2021, p. 100270. DOI: 10.1016/j.measen.2021.100270
[17] P.
Mazurek, Applicability of multiple impulse-radar sensors for estimation of person's three-dimensional position, in Proc. 2020 IEEE Int. Instrumentation and Measurement Technology Conf., Dubrovnik, Croatia, 25-28 May 2020, pp. 1-6. DOI: 10.1109/I2MTC43012.2020.9128760
[18] Novelda, X4M02 radar sensor. Online [accessed 8 November 2019] https://shop.xethru.com/x4m02
[19] Novelda, X4M02 radar sensor datasheet. Online [accessed 29 January 2020] https://www.xethru.com/community/resources/x4m02-radar-sensor-datasheet.115/
[20] J. Wagner, P. Mazurek, R. Z. Morawski, Non-invasive Monitoring of Elderly Persons: Systems Based on Impulse-Radar Sensors and Depth Sensors. Springer, Cham, 2022.
[21] P. Mazurek, J. Wagner, R. Z. Morawski, Use of kinematic and mel-cepstrum-related features for fall detection based on data from infrared depth sensors, Biomedical Signal Processing and Control, vol. 40, 2018, pp. 102-110. DOI: 10.1016/j.bspc.2017.09.006
[22] E. L. Allwein, R. E. Schapire, Y. Singer, Reducing multiclass to binary: a unifying approach for margin classifiers, Journal of Machine Learning Research, vol. 1, 2001, pp. 113-141. DOI: 10.1162/15324430152733133
[23] MathWorks, MATLAB documentation: Statistics and Machine Learning Toolbox – ClassificationECOC.
Online [accessed 15 March 2022] https://www.mathworks.com/help/stats/classificationecoc.html

Microwave reflectometric systems and monitoring apparatus for diffused-sensing applications

Acta IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 202-208

Andrea Cataldo1, Egidio De Benedetto2, Raissa Schiavoni1, Annarita Tedesco1, Antonio Masciullo1, Giuseppe Cannazza1
1 Dept. of Engineering for Innovation, University of Salento, Lecce, Italy
2 Dept. of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy

Section: Research paper
Keywords: time-domain reflectometry; Industry 4.0; microwave sensing; concrete monitoring; diffused monitoring; dielectric permittivity
Citation: Andrea Cataldo, Egidio De Benedetto, Raissa Schiavoni, Annarita Tedesco, Antonio Masciullo, Giuseppe Cannazza, Microwave reflectometric systems and monitoring apparatus for diffused-sensing applications, Acta IMEKO, vol. 10, no.
3, Article 1, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-01

Received July 26, 2021; in final form September 1, 2021; published September 2021.
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Italian Ministry of Universities and Research (MUR), through the project 'SMARTER - Systems and Monitoring Apparatus based on Reflectometric Techniques for Enhanced Revealing' (POC01-00118), funded under the public call Proof of Concept D.D. n. 467 of 02/03/2018.
Corresponding author: Egidio De Benedetto, e-mail: egidio.debenedetto@unina.it

1. Introduction

Monitoring systems represent a key enabler for the 4.0 era, crucial not only for guaranteeing the optimal functioning of the monitored systems but also for timely interventions in case of potential failures [1], [2]. As a result, the combination of the Internet of Things (IoT) and sensing networks has become an irreplaceable tool for achieving ubiquitous monitoring in the 4.0 ecosystem [3]-[8]. However, currently, most sensor or monitoring systems are characterised by point-type sensory information; hence, to monitor large areas, it is necessary to employ a multitude of probes. This drawback can be overcome by using diffused sensing elements (SEs or D-SEs), which are able to achieve many functions [10] where the use of point sensors is not recommended. Most SEs of this type rely on the use of optical-fibre systems [11], [12], but optical systems are generally expensive, and this limits their large-scale adoption. In this paper, the diagnostic technology employed is time-domain reflectometry (TDR), through the use of elongated SEs, which have recently been successfully used to achieve diffused monitoring [13]-[21].
Abstract – Most sensing networks rely on punctual/local sensors; they thus lack the ability to spatially resolve the quantity to be monitored (e.g. a temperature or humidity profile) without relying on the deployment of numerous inline sensors. Currently, most quasi-distributed or distributed sensing technologies rely on the use of optical-fibre systems. However, these are generally expensive, which limits their large-scale adoption. Recently, elongated sensing elements have been successfully used with time-domain reflectometry (TDR) to implement diffused monitoring solutions. The advantage of TDR is that it is a relatively low-cost technology, with adequate measurement accuracy and the potential to be customised to suit the specific needs of different application contexts in the 4.0 era. Based on these considerations, this paper addresses the design, implementation and experimental validation of a novel generation of elongated sensing element networks, which can be permanently installed in the systems that need to be monitored and used for obtaining the diffused profile of the quantity to be monitored. Three applications are considered as case studies: monitoring the irrigation process in agriculture, leak detection in underground pipes and the monitoring of building structures.

TDR monitoring systems are relatively low-cost, have the potential to be customised to suit the specific needs of different applications, and achieve good accuracy. For these reasons, this technique represents an interesting monitoring solution through the use of specific sensing element networks (SENs) that can be permanently embedded into the system to be monitored (STBM) and used throughout the service life of the STBM. Thanks to the versatility of TDR, the proposed system can be customised and applied in a considerable number of fields, but this paper focuses on three contexts:
i) localising leaks in underground pipelines (SEN-W); ii) agricultural water management (SEN-A) for the optimisation of water resources; iii) building monitoring (SEN-B), through the ex-ante monitoring of concrete curing and the ex-post monitoring for the detection of dielectric anomalies that result from the degradation or stress of the structure.

The paper is organised as follows. Section 2 describes the basic theoretical background of TDR, while Section 3 describes the design and implementation of the proposed distributed monitoring system. In Section 4, the experimental results of the practical implementation of the proposed SENs are reported; finally, in Section 5, conclusions are drawn and future work is outlined.

2. Theoretical background

TDR is an electromagnetic (EM) measurement technique typically used for monitoring purposes, such as food analysis [22], cable-fault localisation [23]-[26], soil-moisture measurements [27], liquid-level measurements [28], device characterisation [29], [30], biomedical applications [31] and dielectric spectroscopy [20]. In TDR measurements, the EM stimulus is usually a step-like voltage signal that propagates along the SE, which is inserted in, or in contact with, the system under test (SUT). The signal travels along the SE, and it is partially reflected by impedance changes along the line and/or by dielectric permittivity variations. Through the analysis of the reflected signal, it is possible to retrieve the desired information on the SUT. Generally, in TDR measurements, the direct measurement output is the time-domain reflection coefficient (ρ), expressed as

ρ = v_refl(t) / v_inc(t) , (1)

where v_refl(t) is the amplitude of the reflected signal and v_inc(t) is the amplitude of the incident signal. The value of ρ is represented as a reflectogram, which shows ρ as a function of the travelled apparent distance (d_app).
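For illustration, Equation (1) and the apparent-distance and permittivity relations developed next in this section can be sketched numerically; all numerical values below are hypothetical, not measurements from the paper.

```python
# Hypothetical numerical sketch of the basic TDR quantities of Section 2.

C = 299_792_458.0  # speed of light in free space, m/s

def reflection_coefficient(v_refl, v_inc):
    """Equation (1): rho = v_refl / v_inc."""
    return v_refl / v_inc

def apparent_distance(t_round_trip):
    """Apparent distance travelled, assuming propagation at c: c*t/2."""
    return C * t_round_trip / 2.0

def apparent_permittivity(d_app, d_real):
    """Equation (3): eps_app = (d_app / d_real)**2."""
    return (d_app / d_real) ** 2

# Example: an echo returning after a 100 ns round trip from a point of an
# SE whose real (physical) distance is 10 m.
rho = reflection_coefficient(0.25, 1.0)    # partial reflection
d_app = apparent_distance(100e-9)          # ~14.99 m
eps = apparent_permittivity(d_app, 10.0)   # ~2.25 -> slowed propagation
```

The larger the effective permittivity of the medium surrounding the SE, the slower the signal propagates and the longer the SE appears in the reflectogram; this stretching is exactly what the coaxial-cable calibration described in Section 3.1 compensates for.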
As is well known, the quantity d_app is related to the actual distance, d_real, by the following equation:

d_app = d_real · √ε_app = c · t / 2 , (2)

where ε_app is the effective dielectric permittivity of the propagating medium (which describes the interaction between the electromagnetic signal and the SUT), c is the velocity of light in free space and t is the travel time, i.e. the time that it takes for the EM signal to travel back and forth. The signal propagation velocity inside the medium depends on the dielectric properties of the material in terms of the effective relative dielectric permittivity, ε_app. If the EM signal propagates in a vacuum, then ε_app ≅ ε_r,air ≅ 1. However, if the SE is inserted in a material different from air, then the EM signal will propagate more slowly, and the effective dielectric constant of the material in which the SE is inserted is evaluated from the reflectogram through the estimation of the apparent length (d_app):

ε_app = (d_app / d_real)² . (3)

Based on these considerations, using the TDR reflectogram it is possible to estimate the dielectric characteristics of the propagating medium and/or to localise where these dielectric variations occur.

3. System and design implementation

3.1. Diffused sensing element

As mentioned in Section 1, the present study focuses on three main application fields. In detail, SEN-W refers to a leak-localisation system in underground water and sewer pipes, SEN-A is dedicated to the real-time monitoring of the soil water-content profile in agriculture and, finally, SEN-B addresses the use of TDR and elongated SEs to evaluate the humidity profile of concrete structures and to identify destructive phenomena, such as those resulting from rising damp, at an early stage.
This section describes in detail the design and implementation of the D-SEs, the TDR measuring instruments used for the experimental validation and the processing algorithm adopted. Before the implementation of the system, in the design phase, full-wave simulations were carried out to identify the optimal SE configuration, which is useful for optimising the performance of the system in terms of sensitivity to changes in the dielectric characteristics of the STBM. The configuration of the diffused SE is shown in Figure 1. It consists of a coaxial cable and two conductors that run parallel to each other and are mutually insulated through a plastic jacket. The figure also shows the cross-section dimensions of the SE. The sensing portion is placed along the direction of the electrical-impedance profile under test so as to provide diffused monitoring of the STBM. The EM signal propagation occurs between the two conductors and is influenced by the dielectric characteristics of the surrounding material; this aspect is exploited to identify the area in the reflectogram in which dielectric variations are observed. The coaxial cable has the same length as the sensitive portion and is integrated with the SEs to calibrate the apparent distance as a real distance. In fact, because the dielectric characteristics of the coaxial cable are known, by propagating the TDR signal along the coaxial cable it is possible to evaluate the actual distance of the SE from (3), as reported in [15]. This aspect is especially important in practical applications in which the length of the D-SEs is not necessarily known in advance.

Figure 1. The designed diffused sensing element: two-wire-like conductors and a coaxial cable.

3.2. TDR instruments and the measurement algorithm

For the TDR measurements, two measuring instruments were used: the TDR-307USBM and the TDR200.
The former generates a pulse signal and is relatively low-cost (about €800). The latter generates a step-like EM signal and costs approximately five times as much. Both instruments are portable. However, in the case of the TDR RI-307USBM, it is worth mentioning that signal attenuation and dispersion phenomena limit the usability of TDR on long cable systems. It is possible to modify the amplitude and width of the TDR test pulse along with the gain of the input-stage amplifier; however, this requires the TDR measurements to be repeated many times to find the right setting. To overcome this issue, a dedicated algorithm [16] was developed to automatically optimise these two electrical parameters of the TDR signal as a function of the D-SE length, in order to compensate for attenuation and dispersion effects.

4. Experimental results

4.1. Experimental results for SEN-W

Water-pipeline monitoring is extremely important given the considerable amount of water lost through leakages and other hydraulic failures. Figure 2 shows a schematisation of the setup configuration. During the installation phase, the D-SE is placed along the pipeline to be inspected, and the connection to the measuring system is ensured through an access point. Points B and E indicate the beginning and the end of the D-SE, while L indicates the position of a leak. A section of cable (running vertically) allows the SE to be connected to the measurement instrument. Clearly, variations in dielectric permittivity that may occur along this vertical section are not of interest for leak localisation; therefore, this portion of the SE is electromagnetically shielded by means of a metal shield. In this way, the sensing portion starts from point B.
Another advantage is that, thanks to this shielding, the vertical portion can be of any length, as it does not influence the localisation of the leak (hence, the burial depth of the pipe does not need to be known a priori). This approach also allows easier identification of the interface corresponding to the start of the SE along the buried pipeline, excluding the connection section from the measured data.

Figure 2. Schematisation of the TDR-based system for SEN-W (dimensions not to scale).

Figure 3. SEN-W: a) comparison of the test reflectogram and the reference reflectogram for leak detection; b) localisation of the dielectric permittivity variation (DPV): the reference reflectogram (no DPV) is superimposed on the test reflectogram (DPV at 200 m), and the difference between the two reflectograms is also shown (red curve).

For the experiments on SEN-W, a dedicated testbed was set up, in which a D-SE was buried approximately 30 cm underground. The soil was irrigated at predetermined distances from the beginning of the SE, emulating the condition in which water escapes from an underground pipe. The main parameter describing the interaction between the signal and the system under test is the relative permittivity ε_r: a water leak causes a local variation in its value, which is immediately detectable in the measurement. For these measurements, the TDR RI-307USBM was used. In fact, because SEs in these applications may be hundreds of metres long, the TDR200 is unsuitable, as its electronics could easily be damaged by the electrostatic discharges that occur when connecting long cables. Figure 3(a) shows the measurement results obtained for a leak at d = 8 m. First, the reference reflectogram was acquired, i.e.
when there is no leak (red curve). Then, the reflectogram in the presence of the emulated leak was acquired. It can be observed that, in the presence of the leak, there is a distinct variation in the output reflectogram. The position of the leak is estimated by applying (2) to the relevant portions of the reflectogram in order to identify the abscissa of the minimum corresponding to the leak. It is also interesting to analyse the detection of a leak, or dielectric permittivity variation (DPV), at long distances (e.g. 200 m), which is where the aforementioned algorithm becomes essential. Figure 3(b) shows the automatically processed TDR curves: the difference between the test reflectogram and the reference one allows the DPV to be localised. It should be mentioned that applying (2) assumes a constant soil permittivity along the SE. This assumption is acceptable considering that, in general, the variation in permittivity caused by a leak is significantly higher than the natural variation of permittivity in the soil (which may be due to temperature variations or to slightly different soil compaction in different areas).

4.2. Experimental results for SEN-A

In this application context, monitoring soil moisture content in agriculture allows water use to be optimised, reducing waste. The basic idea is to bury the D-SEs alongside the cultivations and to carry out TDR measurements to retrieve the actual water-content profile of the soil all along the crop rows. The irrigation systems can then be automatically activated or deactivated according to the actual irrigation needs of the plants. In addition, through multiplexing systems, up to 512 D-SEs could be used simultaneously to perform widespread monitoring of multiple crop rows.
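The reference-vs-test differencing used for DPV localisation in Section 4.1 can be sketched as follows (an illustrative implementation of ours, not the algorithm of [16] or [17]):

```python
def localise_dpv(reference, test, distances):
    """Return the distance at which the test reflectogram deviates most
    from the reference one (largest absolute difference)."""
    diffs = [abs(t - r) for r, t in zip(reference, test)]
    i = max(range(len(diffs)), key=diffs.__getitem__)
    return distances[i]

# Synthetic example: a flat reference curve and a dip at 200 m in the
# test curve, emulating the DPV signature of a leak
dist = [float(d) for d in range(0, 301, 10)]
ref = [0.0] * len(dist)
tst = list(ref)
tst[dist.index(200.0)] = -50.0
print(localise_dpv(ref, tst, dist))   # → 200.0
```

In practice the curves are noisy, so a real implementation would smooth the difference curve and apply a detection threshold before picking the extremum; the sketch only conveys the principle.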
For the experimental tests, a 30-m-long D-SE was placed in the soil along the entire crop profile, with the connector brought out of the soil for connection to the TDR measurement instrument. In this case, for the sake of comparison, both the TDR200 and the TDR RI-307USBM were used. The acquired reflectograms are directly related to the impedance profile of the SE inserted in the soil. Experimental results were obtained according to the following measurement protocol:
• reflectogram #1: one week after D-SE installation;
• reflectogram #2 (post-irrigation): acquired approximately one hour after the irrigation process was finished;
• reflectogram #3 (pre-irrigation): acquired three days after reflectogram #2;
• reflectogram #4 (post-irrigation): acquired approximately 10 hours after the irrigation process was finished;
• reflectogram #5: pre-irrigation measurement.
Figures 4(a) and 4(b) show the measurements obtained with the TDR200 and the TDR RI-307USBM, respectively. It can be seen that after irrigation (reflectogram #2) the apparent length of the SE increases, as the end of the D-SE shifts towards a longer distance; this is a result of the increased effective permittivity. From reflectogram #3, acquired after three days and before a new irrigation, it can be seen that the apparent length of the SE has decreased. A similar trend is observed for reflectograms #4 and #5. Based on the overall apparent length of the SE, the system discriminates well between different soil moisture conditions, and this can be used as a parameter for activating irrigation. The TDR200 exhibits better performance in estimating dielectric variations; however, it costs considerably more than the TDR RI-307USBM. In practical applications, the elongated SEs will allow a real-time map of the water-content state of the cultivations to be retrieved, thus allowing automatic, tailor-made intervention, especially in view of the optimal management of the irrigation processes.
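The irrigation-triggering idea described above can be sketched as a simple decision rule on the apparent length of the SE (the normalisation against dry and wet reference readings, the threshold value and all numbers below are our illustrative assumptions, not values from the experiments):

```python
def apparent_length(d_b_app, d_e_app):
    """Apparent length of the SE between its start and end reflections."""
    return d_e_app - d_b_app

def needs_irrigation(d_app_now, d_app_dry, d_app_wet, threshold=0.3):
    """Normalise the current apparent length between dry and wet reference
    readings; below the (assumed) threshold, trigger irrigation."""
    wetness = (d_app_now - d_app_dry) / (d_app_wet - d_app_dry)
    return wetness < threshold

# Illustrative: the SE reads 40 m apparent when dry, 55 m when saturated,
# and 43 m now -> wetness = 0.2, so irrigation would be activated
print(needs_irrigation(43.0, 40.0, 55.0))   # → True
```

The two reference readings play the role of a per-installation calibration, since the absolute apparent length depends on soil type and SE geometry.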
Figure 4. SEN-A: TDR reflectograms acquired with the TDR200 (a) and with the TDR RI-307USBM (b).

4.3. Experimental results for SEN-B

For SEN-B, the goal of the proposed method was twofold: 1) ex-ante monitoring of concrete curing along a diffused building profile and 2) ex-post monitoring for the detection of dielectric anomalies resulting from degradation or stress of the structure. A concrete beam was used as a case study. In order to monitor the water content during the hydration process, three D-SEs were inserted in the beam: one at the bottom, another in the middle and a third at the top. The hardening phase is considered complete within the first 28 days, as after this period more than 90 % of the overall mechanical strength has developed. For this reason, the beam was monitored over the 28-day period by acquiring TDR data from the D-SEs. Figure 5(a) shows some of the TDR reflectograms acquired during this period (for clarity, not all reflectograms are reported); Figure 5(b) shows a zoomed image that highlights the trend. From the reflectogram, it is possible to identify the beginning and the end of the D-SE (denoted by d_B,app and d_E,app, respectively) in order to calculate the apparent length d_app of the D-SE. As can be seen, this parameter decreases with the hydration process, because d_E,app shifts towards lower values.
This indicates that the apparent distance d_app of the D-SE decreases with the decreasing dielectric permittivity ε_app of the concrete beam as a result of the ongoing hydration process. In addition, ex-post monitoring through an embedded low-cost diffused SE can promptly detect the deterioration of the structure. In this regard, mechanical tests were carried out on the beam to analyse whether the diffused sensor cable was sensitive to mechanical deformation. The beam was subjected to several tests with concentrated loads varying from 1,000 kgf to 9,000 kgf, increased by 1,000 kgf at each test. Figure 6 illustrates the test setup. As explained in Section 3, the D-SE also includes a coaxial cable, which is sensitive to deformation and/or compression phenomena. During the mechanical tests, TDR measurements were also carried out on the D-SE. As is known from theory, the diameters of the inner and outer conductors determine the impedance of a coaxial cable. Because of this, a deformation of the cable resulting from degradation phenomena in the structure causes a variation in the electrical impedance, which was immediately identified in the measurements. As shown in Figure 7, as the load applied to the beam increases, the reflection coefficient decreases and the apparent distance increases, indicating that the cable is being deformed and bent. These mechanical tests highlight that continuous ex-post monitoring through permanent D-SEs can provide early indications of structural problems. In this way, safety measures can be taken in time, and structural interventions can be performed promptly.

5. Conclusions

In this study, the development and proof of concept of a multi-purpose SEN were addressed through the adoption of TDR and D-SEs.
The proposed system, which overcomes the limitations of traditional punctual monitoring systems, was validated for the localisation of leaks (SEN-W), for monitoring the diffused water-content profile of soil in agriculture (SEN-A) and for the ex-ante and ex-post monitoring of the hydration process and the diagnostics of destructive phenomena (SEN-B). Additionally, the proposed monitoring system can be easily extended to other fields. In practical applications, each SEN could be managed through a portable device and a single platform. In addition, the monitored data can be stored in a repository and used for further statistics or future reference, and the output of the SENs can be geo-referenced by acquiring the GPS coordinates of the location where the MR measurements are taken. The introduction of these strategies would help the proposed monitoring system evolve into a cyber-physical measurement system [32], [33], fully exploiting the potential of 4.0 technologies.

Figure 5. SEN-B: selected TDR reflectograms for the central D-SE over the 28-day observation period (a) and a zoomed image (b).

Figure 6. Experimental setup for post-mechanical tests on the beam.

References

[1] A. Sforza, C. Sterle, P. D'Amore, A. Tedesco, F. De Cillis, R. Setola, Optimization models in a smart tool for the railway infrastructure protection, in: Critical Information Infrastructures Security. E. Luiijf, P. Hartel (editors). Springer, Cham, 2013, pp. 191-196.
doi: 10.1007/978-3-319-03964-0_17
[2] P. D'Amore, A. Tedesco, Technologies for the implementation of a security system on rail transportation infrastructures, in: Railway Infrastructure Security. R. Setola, A. Sforza, V. Vittorini, C. Pragliola (editors), Springer, Cham, 2015, ISBN: 978-3-319-04425-5, pp. 123-141. doi: 10.1007/978-3-319-04426-2_7
[3] C. Scuro, P. F. Sciammarella, F. Lamonaca, R. S. Olivito, D. L. Carnì, IoT for structural health monitoring, IEEE Instrumentation & Measurement Magazine 21 (2018) pp. 4-14. doi: 10.1109/mim.2018.8573586
[4] E. Sisinni, A. Saifullah, S. Han, U. Jennehag, M. Gidlund, Industrial Internet of Things: challenges, opportunities, and directions, IEEE Transactions on Industrial Informatics 14 (2018) pp. 4724-4734. doi: 10.1109/tii.2018.2852491
[5] B. Chen, J. Wan, L. Shu, P. Li, M. Mukherjee, B. Yin, Smart factory of Industry 4.0: key technologies, application case, and challenges, IEEE Access 6 (2018) pp. 6505-6519. doi: 10.1109/access.2017.2783682
[6] H. Xu, W. Yu, D. Griffith, N. Golmie, A survey on industrial Internet of Things: a cyber-physical systems perspective, IEEE Access 6 (2018) pp. 78238-78259. doi: 10.1109/access.2018.2884906
[7] F. Lamonaca, C. Scuro, D. Grimaldi, R. S. Olivito, P. F. Sciammarella, D. L. Carnì, A layered IoT-based architecture for a distributed structural health monitoring system, Acta IMEKO 8 (2019) 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[8] T. Addabbo, A. Fort, M. Mugnaini, L. Parri, S. Parrino, A. Pozzebon, V. Vignoli, A low power IoT architecture for the monitoring of chemical emissions, Acta IMEKO 8 (2019) 2, pp. 53-61. doi: 10.21014/acta_imeko.v8i2.642
[9] A. Bernieri, D. Capriglione, L. Ferrigno, M. Laracca, Design of an efficient mobile measurement system for urban pollution monitoring, Acta IMEKO 1 (2012) 1, pp. 77-84. doi: 10.21014/acta_imeko.v1i1.27
[10] G. Y. Chen, X. Wu, E. P. Schartner, S. Shahnia, N. Bourbeau Hébert, L. Yu, X. Liu, A. V. Shahraam, T. P. Newson, H.
Ebendorff-Heidepriem, H. Xu, D. G. Lancaster, T. M. Monro, Short-range non-bending fully distributed water/humidity sensors, Journal of Lightwave Technology 37 (2019) pp. 2014-2022. doi: 10.1109/jlt.2019.2897346
[11] L. Zhao, J. Wang, Z. Li, M. Hou, G. Dong, T. Liu, T. Sun, K. T. V. Grattan, Quasi-distributed fiber optic temperature and humidity sensor system for monitoring of grain storage in granaries, IEEE Sensors Journal 20 (2020) pp. 9226-9233. doi: 10.1109/jsen.2020.2989163
[12] D.-S. Xu, L.-J. Dong, L. Borana, H.-B. Liu, Early-warning system with quasi-distributed fiber optic sensor networks and cloud computing for soil slopes, IEEE Access 5 (2017) pp. 25437-25444. doi: 10.1109/access.2017.2771494
[13] A. Cataldo, E. De Benedetto, G. Cannazza, A. Masciullo, N. Giaquinto, G. D'Aucelli, N. Costantino, A. De Leo, M. Miraglia, Recent advances in the TDR-based leak detection system for pipeline inspection, Measurement 98 (2017) pp. 347-354. doi: 10.1016/j.measurement.2016.09.017
[14] A. Cataldo, E. De Benedetto, G. Cannazza, N. Giaquinto, M. Savino, F. Adamo, Leak detection through microwave reflectometry: from laboratory to practical implementation, Measurement 47 (2014) pp. 963-970. doi: 10.1016/j.measurement.2013.09.010
[15] N. Giaquinto, G. D'Aucelli, E. De Benedetto, G. Cannazza, A. Cataldo, E. Piuzzi, A. Masciullo, Criteria for automated estimation of time of flight in TDR analysis, IEEE Transactions on Instrumentation and Measurement 65 (2016) pp. 1215-1224. doi: 10.1109/tim.2015.2495721
[16] A. Cataldo, E. De Benedetto, G. Cannazza, E. Piuzzi, N. Giaquinto, Embedded TDR wire-like sensing elements for monitoring applications, Measurement 68 (2015) pp. 236-245. doi: 10.1016/j.measurement.2015.02.050
[17] A. Cataldo, E. De Benedetto, A. Masciullo, G. Cannazza, A new measurement algorithm for TDR-based localization of large dielectric permittivity variations in long-distance cable systems, Measurement 174 (2021), art. 109066.
doi: 10.1016/j.measurement.2021.109066
[18] A. Walczak, M. Lipiński, G. Janik, Application of the TDR sensor and the parameters of injection irrigation for the estimation of soil evaporation intensity, Sensors 21 (2021) art. 2309.

Figure 7. a) TDR reflectograms and first derivatives acquired through the coaxial cable of the D-SE for the concrete beam in the different compression conditions. b) Zoom of the trend as compression increases.
doi: 10.3390/s21072309
[19] D.-J. Kim, J.-D. Yu, Y.-H. Byun, Horizontally elongated time domain reflectometry system for evaluation of soil moisture distribution, Sensors 20 (2020) pp. 1-17. doi: 10.3390/s20236834
[20] C.-P. Lin, Y. J. Ngui, C.-H. Lin, Multiple reflection analysis of TDR signal for complex dielectric spectroscopy, IEEE Transactions on Instrumentation and Measurement 67 (2018) pp. 2649-2661. doi: 10.1109/tim.2018.2822404
[21] A. Cataldo, E. De Benedetto, G. Cannazza, E. Piuzzi, E. Pittella, TDR-based measurements of water content in construction materials for in-the-field use and calibration, IEEE Transactions on Instrumentation and Measurement 67 (2018) pp. 1230-1237. doi: 10.1109/tim.2017.2770778
[22] E. Iaccheri, A. Berardinelli, G. Maggio, T. G. Toschi, L. Ragni, Affordable time-domain reflectometry system for rapid food analysis, IEEE Transactions on Instrumentation and Measurement 70 (2021) pp. 1-7. doi: 10.1109/tim.2021.3069050
[23] S. M. Kim, J. H. Sung, W. Park, J. H. Ha, Y. J. Lee, H. B. Kim, Development of a monitoring system for multichannel cables using TDR, IEEE Transactions on Instrumentation and Measurement 63 (2014) pp. 1966-1974. doi: 10.1109/tim.2014.2304353
[24] G.-Y. Kim, S.-H. Kang, W. Nah, Novel TDR test method for diagnosis of interconnect failures using automatic test equipment, IEEE Transactions on Instrumentation and Measurement 66 (2017) pp. 2638-2646. doi: 10.1109/tim.2017.2712978
[25] C.-K. Lee, S. J. Chang, A method of fault localization within the blind spot using the hybridization between TDR and wavelet transform, IEEE Sensors Journal 21 (2021) pp. 5102-5110.
doi: 10.1109/jsen.2020.3035754
[26] C. M. Furse, M. Kafal, R. Razzaghi, Y.-J. Shin, Fault diagnosis for electrical systems and power networks: a review, IEEE Sensors Journal 21 (2021) pp. 888-906. doi: 10.1109/jsen.2020.2987321
[27] S. Lee, H.-K. Yoon, Hydraulic conductivity of saturated soil medium through time-domain reflectometry, Sensors 20 (2020) pp. 1-18. doi: 10.3390/s20237001
[28] A. Cataldo, L. Tarricone, M. Vallone, F. Attivissimo, A. Trotta, Uncertainty estimation in simultaneous measurements of levels and permittivities of liquids using TDR technique, IEEE Transactions on Instrumentation and Measurement 57 (2008) pp. 454-466. doi: 10.1109/tim.2007.911700
[29] H. Yang, H. Wen, TDR prediction method for PIM distortion in loose contact coaxial connectors, IEEE Transactions on Instrumentation and Measurement 68 (2019) pp. 4689-4693. doi: 10.1109/tim.2019.2900963
[30] G. Robles, M. Shafiq, J. M. Martínez-Tarifa, Multiple partial discharge source localization in power cables through power spectral separation and time-domain reflectometry, IEEE Transactions on Instrumentation and Measurement 68 (2019) pp. 4703-4711. doi: 10.1109/tim.2019.2896553
[31] R. Schiavoni, G. Monti, E. Piuzzi, L. Tarricone, A. Tedesco, E. De Benedetto, A. Cataldo, Feasibility of a wearable reflectometric system for sensing skin hydration, Sensors 20 (2020), art. 2833. doi: 10.3390/s20102833
[32] S. Grazioso, A. Tedesco, M. Selvaggio, S. Debei, S. Chiodini, E. De Benedetto, G. Di Gironimo, A. Lanzotti, Design of a soft growing robot as a practical example of cyber-physical measurement systems, Proc. of the IEEE Conf. on Metrology for Industry 4.0 and IoT, Rome, Italy, 7-9 June 2021, pp. 23-26. doi: 10.1109/metroind4.0iot51437.2021.9488477
[33] S. Grazioso, A. Tedesco, M. Selvaggio, S. Debei, S.
Chiodini, Towards the development of a cyber-physical measurement system (CPMS): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, Acta IMEKO 10 (2021) 2, pp. 103-109. doi: 10.21014/acta_imeko.v10i2.1123

Bringing optical metrology to testing and inspection activities in civil engineering

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, pp. 108-116

Luís Martins1, Álvaro Ribeiro1, Maria do Céu Almeida1, João Alves e Sousa2
1 LNEC – National Laboratory for Civil Engineering, Avenida do Brasil 101, 1700-066 Lisbon, Portugal
2 IPQ – Portuguese Institute for Quality, Rua António Gião 2, 2829-513 Caparica, Portugal

Section: Research paper
Keywords: optical metrology; civil engineering; testing; inspection
Citation: Luís Martins, Álvaro Ribeiro, Maria do Céu Almeida, João Alves e Sousa, Bringing optical metrology to testing and inspection activities in civil engineering, Acta IMEKO, vol. 10, no.
3, article 16, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-16
Section editor: Lorenzo Ciani, University of Florence, Italy
Received February 8, 2021; in final form August 5, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by LNEC – National Laboratory for Civil Engineering, Portugal.
Corresponding author: Luís Martins, e-mail: lfmartins@lnec.pt

1. Introduction

Optical metrology has a large scientific and technological scope of application, providing a wide range of measurement methods, from interferometry to photometry and radiometry and, more recently, to applications using digital video and vision systems which, combined with computational algorithms, allow traceable and accurate measurements to be obtained. The increasing accuracy of optical measurement instruments creates new opportunities for applications in civil engineering, namely in testing and inspection activities. These new methodologies open broader possibilities in civil engineering domains where dimensional and geometrical quantities are major sources of information on infrastructures and construction materials, whose performance and behaviour are often assessed through monitoring and analysis under dynamic regimes [1], [2]. In many cases, the development of new technologies based on methods combining optics and digital algorithms has recognised advantages, namely for non-invasive techniques in harsh environments and for remote observation [3]. Moreover, the need for accurate measurements related to infrastructure management, e.g. in the early detection of damage or in safety monitoring, is growing. The contribution of metrology in this area is key to increasing confidence in decision-making processes.
Abstract: Optical metrology has an increasing impact on observation and experimental activities in civil engineering, contributing to the research and development of innovative, non-invasive techniques applied in the testing and inspection of infrastructures and construction materials to ensure safety and quality of life. Advances in specific applications are presented in the paper, highlighting application cases carried out by LNEC (the Portuguese National Laboratory for Civil Engineering). The examples include: (i) structural monitoring of a long-span suspension bridge; (ii) use of closed-circuit television (CCTV) cameras in drain and sewer inspection; (iii) calibration of a large-scale seismic shaking table with laser interferometry; (iv) destructive mechanical testing of masonry specimens. Current and future research work in this field is emphasised in the final section. The examples given are related to the use of moiré techniques for digital modelling of reduced-scale hydraulic surfaces and to the use of laser interferometry for the calibration of a strain measurement standard for the geometrical evaluation of concrete testing machines.

R&DI activities in the optical metrology domain in recent years at the Portuguese National Laboratory for Civil Engineering (LNEC) have led to the development of innovative applications, many of them related to doctoral research. The main objectives are: (i) to design and develop optical solutions for applications where conventional instrumentation does not provide satisfactory results; (ii) to establish SI (International System of Units) traceability of measurements undertaken with optical instruments; (iii) to develop advanced mathematical and numerical tools, namely based on Monte Carlo methods (MCM) and Bayesian methods, bringing benefits to the evaluation of measurement uncertainty in complex and non-linear optical problems.
This paper exemplifies how new methods enable traceable and accurate solutions to assess conformity with safety requirements, supporting measurement uncertainty evaluation as a tool for applying decision rules. In addition, the applications described emphasise the role of digital and optical systems as the basis for robust techniques able to provide measurement estimates of dimensional quantities, replacing conventional invasive measurement approaches. To illustrate these achievements, results of R&DI in the civil engineering context are presented, including examples of application in: (i) structural monitoring of a long-span suspension bridge; (ii) drain and sewer inspection using CCTV cameras; (iii) calibration of a large-scale seismic shaking table with laser interferometry; (iv) destructive testing of masonry specimens.

2. Overview of optical metrology

Optical metrology is a specific scientific area of metrology, defined as the science of measurement and its applications [4], in which experimental measurement processes are supported by light. It currently makes a significant contribution to multiple scientific and engineering domains, improving measurement methods and instruments, assessing their limits and increasing their capabilities in order to improve knowledge of the phenomena under study. In recent years, the technological development of computational tools has extended the scope of optical metrology by increasing the number of measurement processes supported by digital processing of images obtained from optical systems. This activity is characterised by the ability to detect and record, without physical contact with the object and in a short time interval, a large amount of information (dimensional, geometrical, radiometric, photometric, colour, thermal, among others) [5], overcoming the limitations of human vision, reaching information imperceptible to the human eye and, therefore, improving knowledge about phenomena.
Although this paper focuses on dimensional measurements, optical metrology also reaches other domains of activity, namely temperature and mechanical and chemical quantities. Optical metrology covers a wide range of dimensional measurement intervals, from the nanometre scale up to the dimensions of celestial bodies and space distances. In this context, measurement principles are usually grouped in three categories [6]: (i) geometrical optics, related to the refraction, reflection and linear propagation of light, which are the functional basis of several instruments and measurement systems composed of light sources, lenses, diaphragms, mirrors, prisms, beam splitters, filters and opto-electronic components; (ii) wave optics, where the wave nature of light is exploited, namely the interference of electromagnetic waves of similar or identical wavelength, present in a wide range of instruments and measurement systems that use polarised and holographic optical components and diffraction gratings; and (iii) quantum optics, which supports the generation of laser beams, i.e. high-intensity, monochromatic, coherent light sources used, e.g., in sub-nanometre interferometry and scanning microscopy. In the case of civil engineering, two main application areas of optical metrology are identified: space and aerial observation, and terrestrial observation. Space observation, supported by optical systems equipped with panchromatic and multi-spectral sensors integrated in remote-sensing satellites, is increasingly frequent in the civil engineering context, owing to the growing access to temporal and spatial collections of digital images of the Earth's surface with increasing spatial resolution.
aerial observation is generally focused on photogrammetric activities undertaken from aircraft, aiming at the production of geographic information to be included in topographic charts or geographical information systems, namely through orthophotos and three-dimensional models (realistic or graphical) representing a certain region of the earth's surface. moreover, optical systems are also installed in unmanned aerial vehicles (uav), used in the visual inspection of large constructions, contributing to the detection and mapping of observations (e.g. cracks and infiltrations, among others) and to the analysis of their progression with time (see example in figure 1) [7].

3. structural monitoring of a long-span suspension bridge

optical metrology has been successfully applied by lnec to the monitoring of a long-span suspension bridge, allowing the development of non-contact measurement systems capable of determining three-dimensional displacements of critical regions, namely in the bridge's main span central section. optical systems are an interesting solution for this class of measurement problems, especially in the observation of metallic bridges, where the accuracy of microwave interferometric radar systems [8] and global navigation satellite systems [9], [10] can be affected, for instance, by the multi-path effect resulting from electromagnetic wave reflections in the bridge's structural components. the measurement approach developed consists of a digital camera rigidly installed beneath the bridge's stiffness girder, oriented towards a set of four active targets placed at a tower foundation, materializing the three-dimensional world coordinate system. provided that the camera's intrinsic parameters (focal length, principal point coordinates and lens distortion coefficients) and the targets' relative coordinates are accurately known (by previous testing), non-linear optimization methods can be used to determine the position of the camera's projection centre.
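the pose-recovery step described above can be sketched in generic form: given known intrinsic parameters and the targets' world coordinates, the camera projection centre is obtained by non-linear least squares on the reprojection residuals. this is only an illustrative pnp-style core, not the implementation of [11]; all numbers, function names and the use of scipy are assumptions, and the close-range geometry is chosen purely for numerical stability of the sketch:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """rotation vector -> rotation matrix (rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points_w, rvec, t, f, cx, cy):
    """pinhole projection of world points (mm) into the image (pixels);
    lens distortion is omitted for brevity."""
    p_c = points_w @ rodrigues(rvec).T + t
    return np.column_stack((f * p_c[:, 0] / p_c[:, 2] + cx,
                            f * p_c[:, 1] / p_c[:, 2] + cy))

def camera_centre(points_w, points_img, f, cx, cy, x0):
    """recover the camera projection centre by non-linear least squares,
    given intrinsics, target coordinates and an initial pose guess x0
    (3 rotation-vector components followed by 3 translation components)."""
    def residuals(x):
        return (project(points_w, x[:3], x[3:], f, cx, cy) - points_img).ravel()
    sol = least_squares(residuals, x0)
    rvec, t = sol.x[:3], sol.x[3:]
    return -rodrigues(rvec).T @ t  # projection centre in world coordinates
```

at the long observation distances of the bridge application the pose problem becomes ill-conditioned, which is precisely why [11] resorts to a dedicated doe-based intrinsic parametrization; the generic sketch above does not address that regime.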
the temporal evolution of this quantity is considered representative of the bridge's dynamic displacement at the location of the camera. since distances can be quite high in this type of observation context, the use of high focal length lenses is required to achieve a suitable spatial image resolution.

figure 1. digital image processing of concrete wall surface image showing crack.

acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 110

however, conventional camera parameterization methods were mainly developed for small focal length cameras (below 100 mm). when applied to high focal length cameras, such methods can reveal numerical instability related to over-parameterization and ill-conditioned matrices. a suitable solution for this problem is found in [11], where the intrinsic parametrization method is described, supported by the use of diffractive optical elements (doe). this approach was implemented on the 25th of april long-span suspension bridge (p25a) in lisbon (portugal), for an observation distance near 500 m. to obtain suitable sensitivity for three-dimensional displacement measurement, a 600 mm high focal length lens (composed of a 300 mm telephoto lens and a 2x teleconverter) was used. a set of four active targets was placed on the p25a bridge south tower foundation (figure 2), facing the bridge's main span where the camera was installed (figure 3). each of the four targets was composed of 16 leds, distributed in a circular geometrical pattern, capable of emitting a narrow near-infrared beam (875 nm wavelength) compatible with the camera's spectral sensitivity. an optical filter on the camera reduced the visible irradiance from the many other elements in the observation scenario, thus improving contrast in the target image. several field validation tests were performed, aiming at quantifying the influence of optical phenomena, such as atmospheric refraction and turbulence, on the dimensional measurement accuracy.
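a quick order-of-magnitude check shows why such a high focal length is needed: the object-space footprint of one pixel is s = Z·p/f, with observation distance Z, pixel pitch p and focal length f. the pixel pitch below is an assumed value, used only for illustration:

```python
# object-space size of one pixel: s = Z * p / f
Z = 500.0    # m, observation distance (from the text)
p = 6.5e-6   # m, pixel pitch -- an assumed value, not from the paper
footprint = {f_mm: Z * p / (f_mm * 1e-3) for f_mm in (100, 600)}
for f_mm, s in footprint.items():
    # object-space footprint of one pixel, in millimetres
    print(f"f = {f_mm} mm -> {s * 1e3:.1f} mm per pixel")
```

with these assumed values, the 600 mm lens shrinks the per-pixel footprint by a factor of six relative to a 100 mm lens, which is what makes millimetre-level displacement sensitivity plausible at 500 m.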
a calibration device was used for this purpose [11], [12], allowing the set of targets to be installed in four reference positions. by placing the camera in the p25a south anchorage, oriented toward the calibration device in the p25a south tower foundation (both considered static structural regions), the systematic effect caused by refraction and the beam wandering effect originated by turbulence, mainly in the summer season, were quantified as explained in [12]. since the p25a bridge has two main decks (an upper road deck and a lower train deck), two types of displacement records (with and without train circulation) were obtained during field testing of the displacement measurement system. due to the reduced measurement sensitivity in the longitudinal direction, demonstrated in the validation tests, only transverse and vertical displacements were recorded. an image acquisition frequency of 15 hz was defined for an observation time interval of three minutes. the collected image sequences were digitally processed afterward, using the same techniques applied in the validation tests. figure 4 exemplifies a typical displacement record obtained for a passenger train passage on the p25a main span central section. for the operational condition mentioned (train and road traffic), the observed maximum (peak-to-peak) displacements were 0.39 m and 1.69 m in the transverse and vertical directions, respectively. high measurement sensitivity is noticed in the vertical displacement record, where the number of train carriages (four) can be temporally discriminated (four small spikes around t = 120 s), with a 95 % expanded measurement uncertainty of 8.8 mm. the distributed passenger train load was estimated between 20.7 kn/m (empty train) and 28.8 kn/m (overloaded train), which is considerably lower than the distributed load applied in the p25a static loading test performed in 1999, where a 3.15 m vertical displacement value was recorded for a 77.5 kn/m distributed load.
as expected, in the absence of train circulation in the p25a, the observed maximum displacements were less significant, namely 0.53 m and 0.29 m for the vertical and transverse directions, respectively, as shown in figure 5.

4. drain and sewer inspection using cctv cameras

another recent example of the application of optical metrology to the civil engineering inspection context is the study carried out on the metrological quality of dimensional measurements based on images from cctv inspections in drain and sewer systems (example shown in figure 6). in this context, investigations are carried out using several sources of information, including external and internal inspection activities for the detection and characterization of anomalies which can negatively affect the performance of the drain or sewer system. cctv inspection is a widely used visual inspection technique for non-man-entry components. this type of indirect visual inspection is characterized by the quantification of a significant number of absolute and relative dimensional quantities, which contribute to the characterization of the inspection observations and, consequently, to the analysis of the performance of drain and sewer systems outside buildings. unfavourable environmental factors and conditions in the drain or sewer components pose difficulties in the estimation of the quantities of interest, and the quality of the recorded images can be quite poor (lighting, lack of reference points, geometric irregularities and subjective assessments, among others). the study [14] stresses the need for proper metrological characterization of the optical system (the cctv camera) used in drain or sewer inspections, namely the geometrical characterization and quantification of intrinsic parameters using traceable reference dimensional patterns and applying known algorithms. the standard radiometric characterization, aiming at

figure 2. active targets on the south tower foundation.

figure 3.
digital camera installed in the stiffness girder.

the determination of the cctv camera sensitivity, linearity, noise, dark current, spatial non-uniformity and defective pixels, is also mentioned [15]. two measurement models were studied to be applied in this context: the perspective camera model and the orthographic projection camera model [16]. the first model implies having input knowledge about the camera's intrinsic parameters and the extrinsic parameters (the camera position and orientation in the local or global coordinate system), which must be obtained from instrumentation of the cctv camera. the second model is a less rigorous approach that can be followed, assuming a parallel geometrical relation between the image plane and the cross-section plane in the drain or sewer to define a scale coefficient between the real dimension (in millimetres) and the image dimension (in pixels). research efforts were directed towards the evaluation of the measurement uncertainty following the gum framework [17], [18]. particular attention was given to the influence of lens distortion in the results obtained from the perspective camera model. in a typical inspection of a drain or sewer system, a reduced focal length lens is generally used to obtain a wider viewing angle. in this type of lens, distortion can cause geometrical deformation of the image, thus affecting the accuracy of dimensional measurements.

figure 4. p25a main span central section displacement (train and road traffic).

figure 5. p25a main span central section displacement (road traffic only).

figure 6. inspection image showing dimensional reduction by deformation effect [13].
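for the orthographic projection model described above, the real dimension is d = s·n, with scale coefficient s (mm/pixel) and image dimension n (pixels); the gum law of propagation of uncertainty then gives u(d)² = n²·u(s)² + s²·u(n)². a minimal sketch with illustrative numbers, not those reported in [14]:

```python
import math

# orthographic projection model: d = s * n
# law of propagation of uncertainty (gum): u(d)^2 = n^2 u(s)^2 + s^2 u(n)^2
def u_dimension(s, u_s, n, u_n):
    """standard uncertainty of the real dimension d = s * n."""
    return math.hypot(n * u_s, s * u_n)

s, u_s = 1.5, 0.02  # mm/pixel and its standard uncertainty -- assumed values
for d_mm, u_n in ((100.0, 1.3), (200.0, 2.5)):
    n = d_mm / s                       # image dimension in pixels
    u_d = u_dimension(s, u_s, n, u_n)  # mm
    print(f"d = {d_mm:.0f} mm -> u(d) = {u_d:.2f} mm "
          f"({100 * 1.96 * u_d / d_mm:.1f} % at 95 %)")
```

the two image-coordinate standard uncertainties (1.3 pixel and 2.5 pixels) follow the cases quoted in the text; the scale coefficient and its uncertainty are assumptions chosen only to make the sketch self-contained.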
for this purpose, intrinsic parameter estimates and standard uncertainties were obtained [14] for the case of a camera with a 4 mm nominal focal length and an image sensor with 480 × 640 square pixels, considering a pixel linear dimension equal to 6.5 µm. high-order radial distortion coefficients were considered negligible. the standard uncertainty related to the image coordinates, resulting from the performed intrinsic parametrization, was equal to 0.04 pixel. to assess the impact of distortion on the image coordinate measurement accuracy, a monte carlo method [18] was used, given the complex and non-linear lens distortion model [19]. figure 7 shows the estimates of the image variation due to the combined effect of radial and tangential distortions, and figure 8 presents the corresponding 95 % measurement uncertainty. as shown in figure 7 and figure 8, the distortion impact on the image coordinates is quite low. as expected, a higher distortion is observed in the extreme regions of the image, especially in the corners. the maximum distortion estimate is close to 0.050 pixel, with a 95 % expanded uncertainty of 0.001 pixel. these results allow the distortion component to be removed from the perspective camera model, making it less complex and numerically more stable. due to the non-linear and complex mathematical models related to the perspective camera model, a monte carlo method was again applied in numerical simulation, in order to obtain the dispersion of values of the local dimensional coordinates which support dimensional measurement in inspection images. a 95 % computational accuracy level lower than 1 mm was obtained. the simulation results showed that a dimensional accuracy level lower than 10 mm can only be achieved for camera and plane location standard uncertainties of 1 mm and an image coordinate standard uncertainty below 3 pixels.
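the monte carlo propagation described above can be sketched as follows: the distortion coefficients of the brown model [19] are sampled around their estimates and pushed through the model at an image corner, gum-s1 style [18]. all coefficient values below are hypothetical, chosen only to land in the same order of magnitude as the reported results:

```python
import numpy as np

# gum-s1 monte carlo propagation of lens distortion at an image corner,
# using the brown model with radial (k1, k2) and tangential (p1, p2) terms;
# the coefficient estimates and uncertainties are hypothetical, not from [14]
rng = np.random.default_rng(1)
M = 200_000
k1 = rng.normal(3.0e-4, 5.0e-6, M)
k2 = rng.normal(-1.0e-5, 5.0e-7, M)
p1 = rng.normal(2.0e-6, 2.0e-7, M)
p2 = rng.normal(-2.0e-6, 2.0e-7, M)

f = 615.0                    # focal length in pixels (4 mm / 6.5 um pitch)
x, y = 320.0 / f, 240.0 / f  # image corner in normalised coordinates
r2 = x * x + y * y
# brown model: radial + tangential displacement of the x coordinate
dx = x * (k1 * r2 + k2 * r2 ** 2) + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
dx_px = f * dx               # back to pixels
print(f"distortion estimate {dx_px.mean():.4f} px, "
      f"u95 = {1.96 * dx_px.std(ddof=1):.4f} px")
```

with these assumed inputs the corner distortion comes out at a few hundredths of a pixel with a millipixel-level 95 % uncertainty, i.e. the same regime as the 0.050 pixel / 0.001 pixel figures quoted above.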
from a sensitivity point of view, the camera and plane standard uncertainties showed a stronger contribution to the dimensional accuracy level than the image coordinate measurement uncertainty. when compared with the global dimensions of the corresponding camera field-of-view (974 mm x 731 mm), the 95 % expanded uncertainty of the dimensional coordinates is comprised between 0.2 % and 4.1 %. the measurement uncertainty related to the adoption of the orthographic projection model was also studied in [14] using the uncertainty propagation law [17], considering the linearity of the applied mathematical models. for the worst case, related to the scale coefficient with the highest measurement uncertainty, the obtained dimensional measurement accuracy was always above 5 %. better accuracy levels are possible, namely in the case of the lowest measurement uncertainty of the scale coefficient, for standard uncertainties of 1.3 pixel (for dimensional measurements close to 100 mm) and 2.5 pixels (for dimensional measurements of 200 mm).

5. calibration of a large-scale seismic shaking table with laser interferometry

laser interferometry was applied to the calibration of a large-scale seismic shaking table used by lnec's earthquake engineering research centre in r&di activities related to seismic risk analysis and to experimental and analytical dynamic modelling of structures, components and equipment. this european seismic engineering research infrastructure (shown in figure 9) is composed of a high-stiffness testing platform with 4.6 m x 5.6 m dimensions and a maximum payload capacity of 392 kn, connected to hydraulic actuators, allowing real or reduced-scale models to be tested up to extreme collapse conditions, between 0 hz and 40 hz [20]. the control system used allows the active application of displacement to the testing platform in three independent orthogonal axes, while its rotation is passively restricted using torsion bars.
the performed calibration is included in the introduction of quality management systems in large experimental infrastructures with r&di [21], aiming at the recognition of technical competence for testing and measurement and the formal definition of management processes, which can be regularly assessed by an independent entity. compliance with metrological requirements is a key issue in this context, being related, for example, to traceability and calibration procedures, conformity assessment, measurement correction and uncertainty evaluation, data record management and data analysis procedures. laser interferometry was used to evaluate the dimensional, cross-axis motion and rotation motion performances of lnec's shaking table, using specific experimental setups and optical components, as shown in figure 10 and figure 11. this experimental work allowed remote and non-invasive measurements to be performed with a high accuracy level in a harsh environment, being composed of two stages: laser beam alignment, and data acquisition (500 sampling pairs from both the interferometer and the dimensional sensors of the seismic shaking table, having a gaussian representation of the probability distribution). the main identified uncertainty components were related to misalignment of optical elements, time synchronization and influence quantities such as air and material temperature, relative humidity and atmospheric pressure.

figure 7. image distortion estimates in pixels.

figure 8. image distortion 95 % expanded uncertainties in pixels.
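the influence of air temperature, relative humidity and pressure mentioned above enters interferometric length measurement through the refractive index of air. a common compensation uses an edlén-type empirical equation; the simplified form below is an approximation frequently quoted in interferometry practice and is not necessarily the exact equation applied in [23]:

```python
# simplified edlen-type equation for the refractive index of air
# (approximate empirical form): pressure P in Pa, temperature T in deg C,
# relative humidity RH in %
def air_refractive_index(P, T, RH):
    return 1.0 + 7.86e-7 * P / (273.0 + T) - 1.5e-11 * RH * (T ** 2 + 160.0)

def compensated_length(optical_length_m, P, T, RH):
    """an interferometer measures optical path; dividing by the air
    refractive index yields the geometrical displacement."""
    return optical_length_m / air_refractive_index(P, T, RH)

n = air_refractive_index(101325.0, 20.0, 50.0)
print(f"n = {n:.8f}")  # close to 1.00027 in standard ambient conditions
```

over the ± 120 mm interval calibrated here the whole correction is only a few tens of micrometres, but it is systematic, which is why such compensation algorithms are applied rather than left inside the uncertainty budget.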
specific actions were taken in order to minimize these uncertainty components, namely full-range preliminary tests with adaptive adjustment of the main optical components, the application of a signal synchronization procedure and the use of compensation algorithms for the correction of the material thermal expansion and of the air refractive index [23]. one of the developed tests was defined in order to evaluate the dimensional scale calibration errors and reversibility, using dynamic input series with low-variance 30 mm calibration steps within a measurement interval of ± 120 mm. examples of obtained results are shown in figure 12 and figure 13. a measurement discrimination test was also developed, considering transition steps of 0.5 mm, 0.1 mm and 5.0 mm given at the 20 mm, 50 mm and 80 mm linear positions. an example of the obtained results is shown in figure 14. the obtained results show calibration errors ranging approximately between -0.4 mm and 0.7 mm, with a reduced reversibility close to 0.1 mm. these results were included in the measurement uncertainty evaluation, from which an instrumental measurement accuracy of 0.31 mm was obtained considering a confidence interval of 95 %. the corresponding target instrumental measurement uncertainty, defined as a metrological requirement for the seismic shaking table, is equal to 1 mm. additional dynamic tests and the corresponding discussion of results can be found in [21].

figure 9. top view of lnec's earthquake engineering testing room [22].

figure 10. experimental setup for cross-axis motion testing.

figure 11. experimental setup for rotation motion testing.

figure 12. calibration errors and reversibility for the static position test of axis 1-t-a.

figure 13. calibration errors for the dynamic position test of axis 1-t-a.

figure 14. results of the discrimination test of axis 1-t-a at the 80 mm position.
6. destructive mechanical testing of masonry specimens

the application of optical metrology to the destructive mechanical testing of masonry specimens was motivated by the possibility of obtaining non-contact dimensional measurements. in a destructive test, the use of classical invasive instrumentation, such as deformeters, electrical strain gauges and contact displacement sensors, is considered unsuitable for some applications due to dynamic effects in the experimental setup and to the high risk of damaging the equipment. knowledge of the mechanical characteristics of resistant masonry walls is one of the aspects that still has gaps, mainly due to the difficulty in obtaining representative specimens. in addition, the growing interest in the rehabilitation of old buildings contributes to the search for new reinforcement solutions that are compatible with the original building construction techniques [24], [25]. it is equally important to ensure that these reinforcement techniques, in addition to the aesthetic and functional aspects, also reduce the seismic vulnerability of these buildings [26]. from an experimental point of view, dimensional measurements make a strong contribution to the determination of key mechanical characteristics, since they support the indirect strain measurement in the tested specimens [27], [28]. afterwards, these measurements are used for characterizing the masonry specimen's mechanical behaviour in terms of its elasticity modulus and poisson ratio. the optical measurement solution proposed [29] is based on the use of a single camera with a spatial position and orientation allowing visualization of a set of passive targets evenly distributed in different regions, both in the static region surrounding the specimen and in the dynamic region of the tested specimen surface.
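the characterization step mentioned above (elasticity modulus from the stress vs. axial strain slope, poisson ratio from the lateral vs. axial strain slope) can be sketched with synthetic data; the assumed material values and noise level are illustrative only, not results from [29]:

```python
import numpy as np

# least-squares estimates of the elasticity modulus and poisson ratio from
# stress/strain data of a loading cycle; all arrays below are synthetic
stress = np.linspace(0.0, 2.0, 20)           # MPa, up to ~1/3 fracture stress
e_axial = stress / 3000.0                    # assumed E = 3 GPa
e_lateral = -0.25 * e_axial                  # assumed poisson ratio = 0.25
rng = np.random.default_rng(0)
e_axial = e_axial + rng.normal(0.0, 2e-6, e_axial.size)      # optical noise
e_lateral = e_lateral + rng.normal(0.0, 2e-6, e_lateral.size)

E = np.polyfit(e_axial, stress, 1)[0]        # MPa, slope of stress vs. strain
nu = -np.polyfit(e_axial, e_lateral, 1)[0]   # -slope of lateral vs. axial
print(f"E = {E / 1000:.2f} GPa, poisson ratio = {nu:.2f}")
```

note that the strain noise level chosen here is optimistic; with the low spatial image resolution discussed in this section, the optical strain record is noticeably noisier than the contact one, which propagates directly into the fitted slopes.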
the weak perspective model, or the orthographic model with uniform scaling, was adopted [29], allowing a functional relation to be established between the georeferenced three-dimensional point (expressed in millimetres, for example) and the corresponding two-dimensional position in the image (usually expressed in pixels). a measurement referential, composed of reference targets, was placed in front of the observation region in the masonry specimen at the minimum distance from the specimen surface (without contact), thus minimizing the observation depth difference to the monitoring targets fixed and scattered in the observation region (in the inner region of the referential), as shown in figure 15. the mentioned referential was subjected to dimensional measurement in an optical measuring machine, before the specimen testing, aiming at the determination of the three-dimensional georeferenced position of each reference target. the knowledge of these spatial coordinates supported the calculation of the scale coefficient in each acquired image, since the measurement referential is placed in a static region of the experimental setup (ensuring that it does not touch the specimen and is not subjected to vibrations produced by the testing machine). solid and hollow ceramic brick masonry specimens were retrieved from the walls of a building built at the beginning of the 20th century in the city of lisbon (portugal), which was undergoing rehabilitation. the proposed optical approach was implemented by fixing monitoring targets on the specimen's ceramic bricks and placing the measurement referential with the reference targets close to the observation surfaces, as shown in figure 16 (displacement sensors are also visible, being used for validation purposes, without specimen collapse). the recorded images were subjected to a tailored digital image processing algorithm, in order to retrieve the image coordinates of both reference and monitoring targets, as shown in figure 17.
the first stage of obtained results is related to the scale coefficient measurement samples (with a sample size equal to 28), from which an average value was obtained. figure 18 illustrates the dispersion of scale coefficient values obtained for one of the measurement referentials used. based on the specimen's length and width measurements, as well as the axial compression force readings obtained from the universal testing machine used, vertical and horizontal dimensional measurements were performed on the frontal and rear surfaces of the specimen, noting that the contact and optical measurement points were not spatially coincident. from the collected data, stress vs. strain curves were obtained for the loading and unloading cycle corresponding to 1/3 of the fracture stress, as shown in figure 19 and figure 20. figure 20 shows the effect of noise in the strain measurements obtained by the optical dimensional measurements, when compared with the strain measurements obtained by the contact measurement chain (figure 19). this is justified by the low spatial resolution of the acquired images, which affects the targets' image coordinates that support the deformation measurement. a higher spatial resolution can be achieved with an image sensor composed of smaller pixels or by using a different lens capable of producing a higher image magnification with an acceptably narrow field-of-view. these results were used in the determination of mechanical property estimates and measurement uncertainties in the tested masonry specimens. a detailed discussion is given in [29].

figure 15. schematic representation of the proposed optical measurement method.

figure 16. instrumentation of the masonry specimen.

figure 17. example of targets image after digital processing, showing the determined centroids.
7. conclusions

this paper describes relevant contributions of optical metrology applied in different testing and inspection activities in civil engineering, providing significant added value in decision-making processes. the wide diversity of testing and inspection activities in this context, together with the versatility of the measurement solutions and tools provided by optical metrology, motivates the development of new interdisciplinary r&di work at lnec, so far with promising results. one of these fields is the development of moiré techniques [30] applied in the digital modelling of reduced-scale hydraulic surfaces. hydraulic experimental activities are frequently carried out in a dynamic regime; however, conventional invasive instrumentation is often unsuitable for real-time observations, making these experiments time-consuming and limited to reduced acquisition frequencies. moiré techniques have been successfully applied in other scientific and technical areas; however, their application in the civil engineering context is still quite limited. another research field being developed by lnec in this context is the application of laser interferometry in the calibration of a strain measurement standard used for the geometrical evaluation of concrete testing machines (self-alignment and movement restriction) [31]. this measurement standard (a strain-gauged column) is required to have a reduced instrumental measurement uncertainty (0.1 % or 5×10⁻⁶), making laser interferometry a suitable solution for this objective.

acknowledgement

the authors acknowledge the financial support provided by lnec (national laboratory for civil engineering).

references

[1] g. di leo, c. liguori, a. paolillo, a. pietrosanto, machine vision systems for on-line quality monitoring in industrial applications, acta imeko 4 (2015) 1, pp. 121-127. doi: 10.21014/acta_imeko.v1i1.7
[2] g. d'emilia, d. di gasbarro, e.
natale, optical system for online monitoring of welding: a machine learning approach for optimal set up, acta imeko 5 (2016) 4, pp. 4-11. doi: 10.21014/acta_imeko.v5i4.420
[3] m. lo brutto, g. dardanelli, vision metrology and structure from motion for archaeological heritage 3d reconstruction: a case study of various roman mosaics, acta imeko 6 (2017) 3, pp. 35-44. doi: 10.21014/acta_imeko.v6i3.458
[4] vim – international vocabulary of metrology – basic and general concepts and associated terms, jcgm – joint committee for guides in metrology, 2008, pp. 16.
[5] m. rosenberger, m. schellhorn, g. linß, new education strategy in quality measurement technique with image processing technologies – chances, applications and realisation, acta imeko 2 (2013) 1, pp. 56-60. doi: 10.21014/acta_imeko.v2i1.92

figure 18. dispersion of values related to the scale coefficient.

figure 19. stress vs. strain curve obtained by contact dimensional measurement.

figure 20. stress vs. strain curve obtained by optical dimensional measurement.

[6] h. schwenke, u. neuschaefer-rube, t. pfeifer, h. kunzmann, optical methods for dimensional metrology in production, cirp annals – manufacturing technology 51, 2 (2002), pp. 685-699.
[7] l. santos, visual inspections as a tool to detect damage: current practices and new trends, proceedings of condition assessment of bridges: past, present and future – a complementary approach, lisbon, 2012.
[8] m. pieraccini, g. luzi, d. mecatti, m. fratini, l. noferini, l. carissimi, g. franchioni, c. atzeni, remote sensing of building structural displacement using microwave interferometer with imaging capability, ndt&e int. 37 (2004), pp. 545-550. doi: 10.1016/j.ndteint.2004.02.004
[9] k. wong, k. man, w.
chan, monitoring hong kong's bridges. real-time kinematics spans the gap, gps world 12, 7 (2001), pp. 10-18.
[10] v. khoo, y. thor, g. ong, monitoring of high-rise building using real-time differential gps, proceedings of the fig congress, sydney, 2010.
[11] l. martins, j. rebordão, a. ribeiro, intrinsic parameterization of a computational optical system for long-distance displacement structural monitoring, optical engineering 54, 1 (2015), pp. 1-12. doi: 10.1117/1.oe.54.1.014105
[12] l. martins, j. rebordão, a. ribeiro, thermal influence on long-distance optical measurement of suspension bridge displacement, int. j. thermophysics 35, 3-4 (2014), pp. 693-711. doi: 10.1007/s10765-014-1607-3
[13] p. henley, sewer condition classification and training course, wrc, 2017.
[14] l. martins, m. almeida, a. ribeiro, optical metrology applied in cctv inspection in drain and sewer systems, acta imeko 9 (2020) 1, pp. 18-24. doi: 10.21014/acta_imeko.v9i1.744
[15] m. rosenberger, c. zhang, p. votyakov, m. peibler, r. celestre, g. notni, emva 1288 camera characterisation and the influences of radiometric camera characteristics on geometric measurements, acta imeko 5 (2016) 4, pp. 81-87. doi: 10.21014/acta_imeko.v5i4.356
[16] r. hartley, a. zisserman, multiple view geometry in computer vision, cambridge university press, new york, 2003.
[17] gum – guide to the expression of uncertainty in measurement, iso – international organization for standardization, 1993.
[18] gum-s1 – evaluation of measurement data – supplement 1 to the guide to the expression of uncertainty in measurement – propagation of distributions using a monte carlo method, jcgm – joint committee for guides in metrology, 2008.
[19] d. brown, close-range camera calibration, proceedings of the symposium on close-range photogrammetry, illinois, 1971, pp. 855-866.
[20] r. duarte, m. rito-corrêa, t. vaz, a.
campos costa, shaking table testing of structures, proceedings of the 10th world conference on earthquake engineering, rotterdam, 1994.
[21] a. ribeiro, a. campos costa, p. candeias, j. alves e sousa, l. martins, a. martins, a. ferreira, assessment of the metrological performance of seismic tables for a qms recognition, journal of physics: conference series 772 (2016), pp. 1-16. doi: 10.1088/1742-6596/772/1/012006
[22] common protocol for the qualification of research, seismic engineering research infrastructures for european synergies, wp 3, na 2.4, deliv. 3, 2012.
[23] g. lipinski, mesures dimensionnelles par interférométrie laser, techniques de l'ingénieur – mesures et contrôle, r 1 320, 1995.
[24] a. caporale, f. parisi, d. asprone, r. luciano, a. prota, micromechanical analysis of adobe masonry as two-component composite: influence of bond and loading schemes, compos. struct. 112 (2014), pp. 254-263. doi: 10.1016/j.compstruct.2014.02.020
[25] f. greco, l. leonetti, r. luciano, p. blasi, an adaptive multiscale strategy for the damage analysis of masonry modelled as a composite material, compos. struct. 153 (2016), pp. 972-988. doi: 10.1016/j.compstruct.2016.06.066
[26] s. kallioras, a. correia, a. marques, v. bernardo, p. candeias, f. graziotti, lnec-build-3: an incremental shake-table test on a dutch urm detached house with chimneys, eucentre research report euc203/2018u, eucentre, 2018.
[27] a. marques, j. ferreira, p. candeias, m. veiga, axial compression and bending tests on old masonry walls, proceedings of the 3rd international conference on protection of historical constructions, lisbon, 2017.
[28] en 1052-1 – methods of test for masonry – part 1: determination of compressive strength, cen – european committee for standardization, 1998.
[29] l. martins, a. marques, a. ribeiro, p. candeias, m. veiga, j.
gomes ferreira, optical measurement of planar deformations in the destructive mechanical testing of masonry specimens, applied sciences 10, 371 (2020), pp. 1-23. doi: 10.3390/app10010371
[30] k. gåsvik, optical metrology, 2nd edition, john wiley & sons, 1995, pp. 168-169.
[31] en 12390-4 – testing hardened concrete – part 4: compressive strength – specifications for testing machines, cen – european committee for standardization, 2019.

study of fracture processes in sandstone subjected to four-point bending by means of 4d x-ray computed micro-tomography

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-7

leona vavro1, martin vavro1, kamil souček1, tomáš fíla2, petr koudelka2, daniel vavřík2, daniel kytýř2
1 the czech academy of sciences, institute of geonics, studentská 1768/9, 708 00 ostrava-poruba, czech republic
2 the czech academy of sciences, institute of theoretical and applied mechanics, prosecká 809/76, 190 00 praha 9, czech republic

section: research paper
keywords: four-point bending test; chevron-notched core specimen; crack propagation; 4d micro-ct; sandstone
citation: leona vavro, martin vavro, kamil souček, tomáš fíla, petr koudelka, daniel vavřík, daniel kytýř, study of fracture processes in sandstone subjected to four-point bending by means of 4d x-ray computed micro-tomography, acta imeko, vol. 11, no.
2, article 34, june 2022, identifier: imeko-acta-11 (2022)-02-34

section editor: francesco lamonaca, university of calabria, italy

received december 21, 2021; in final form march 16, 2022; published june 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the project for the long-term strategic development of research organisations (rvo: 68145535) and the operational programme research, development, and education in project inafym (cz.02.1.01/0.0/0.0/16/019/0000766).

corresponding author: martin vavro, e-mail: martin.vavro@ugn.cas.cz

abstract

high-resolution x-ray computed micro-tomography (ct) is a powerful technique for studying the processes of crack propagation in non-homogeneous quasi-brittle materials such as rocks. to obtain all the significant information about the deformation behaviour and fracture characteristics of the studied rocks, the use of a highly specialised loading device suitable for integration into existing tomographic setups is crucial. since no adequate commercial solution is currently available, a completely newly designed loading device with a four-point bending setup and vertically oriented scanned samples was used. this design of the loading procedure, coupled with the high stiffness of the loading frame, allows the loading process to be interrupted at any time and ct scanning to be performed without the risk of the sudden destruction of the scanned sample. this article deals with the use of 4d ct for the visualisation of crack initiation and propagation in clastic sedimentary rocks. two types of quartz-rich sandstones of czech provenance were used for tomographic observations during four-point bending loading performed on chevron-notched test specimens. it was found that the crack begins to propagate from the moment that ca. 80 % of the maximum loading force is applied.

1. introduction

the process of crack propagation in quasi-brittle materials due to mechanical loading leading to material failure has been intensively studied by researchers in various disciplines for a very long time. the monitoring of crack initiation and propagation is also of great importance in rock engineering. the presence of micro- as well as macro-cracks significantly influences the strength, deformation, and filtration properties of the rock mass and therefore strongly affects, for example, the stability of underground workings, tunnels, or open-pit slopes. rock fracture mechanics can also be applied in the prediction of anomalous geomechanical phenomena such as rock bursts or rock and gas outbursts, or in the evaluation of rock fragmentation processes such as drilling, blasting, crushing, and cutting [1], [2]. more recently, the knowledge about rock fracture processes has been of crucial importance when assessing the suitability of the rock host environment for such demanding engineering applications as co2 sequestration or the geological disposal of high-level radioactive waste.

the failure process of rocks and similar rock-like materials is the result of complex mechanisms, including microcrack initiation, propagation, and interactions with each other, resulting in crack coalescence. eventually, a macroscopic failure plane is generated, thus causing final rock rupture [3], [4]. cracks in rocks initiate and propagate in response to the applied stress, with the crack path often being driven by the local distribution of micro-flaws such as cavities, inclusions, fossils, grain boundaries, mineral cleavage planes, and micro-cracks inside the rock [5], [6].
crack initiation occurs when the stress intensity factor (k) at a microcrack tip reaches its critical value, known as the fracture toughness (kc). fracture toughness thus expresses the resistance of a material to crack initiation and subsequent propagation and represents one of the most important material properties in linear elastic fracture mechanics (lefm). however, it is important to highlight that rocks and similar geomaterials such as concrete exhibit quasi-brittle behaviour (see [7], [8]), which is typified by a large plastic zone, referred to as the fracture process zone (fpz), ahead of the crack tip, where more complex non-linear fracture processes occur. because of the fpz, classical lefm is not fully applicable to studies of rock/concrete fracture processes. thus, the description of fractures needs to be based on non-linear fracture models involving the cohesive nature of crack propagation; often the fracture energy and/or other softening parameters are utilised [9]. to date, many advanced techniques such as scanning electron microscopy [10], [11] or acoustic emission detection [12], [13] have been adopted to study the progressive failure process of rocks. these experimental approaches yield basic data about crack propagation, but the spatial information about deformation processes and fpz development throughout the tested sample volume remains unknown. for this reason, a wide range of uses is opening up for x-ray ct in the study of the deformation behaviour and fracture processes of rocks. in this paper, a completely newly designed loading device with a four-point bending setup for vertically oriented scanned samples, allowing 4d ct measurements of crack and fpz propagation in quasi-brittle materials, was used. more specifically, the presented contribution deals with the identification of crack initiation and propagation in two types of upper cretaceous quartz sandstones.
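the initiation criterion described above can be made concrete with a toy calculation. this is a sketch of the generic lefm relation k_i = y·σ·√(π·a) for an edge flaw, not the chevron-notch evaluation used in the tests reported here; all numerical values below are invented for illustration:

```python
import math

def stress_intensity(sigma_mpa, a_m, y=1.12):
    # mode-i stress intensity factor k_i = y * sigma * sqrt(pi * a),
    # in mpa*sqrt(m); y = 1.12 is the textbook edge-crack geometry factor
    return y * sigma_mpa * math.sqrt(math.pi * a_m)

def crack_initiates(sigma_mpa, a_m, k_c, y=1.12):
    # lefm criterion: the crack starts to grow once k_i reaches k_c
    return stress_intensity(sigma_mpa, a_m, y) >= k_c

# hypothetical numbers: a 2 mm edge flaw, k_c = 0.5 mpa*sqrt(m)
print(crack_initiates(8.0, 0.002, 0.5))   # 8 mpa tension -> True
print(crack_initiates(1.0, 0.002, 0.5))   # 1 mpa tension -> False
```

as the text notes, in quasi-brittle rocks the fpz makes this sharp-tip criterion only approximate.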
both rocks represent well-known building, sculptural, and decorative stone materials that have been used in the czech territory for many centuries [14].

2. rock material used for the experiments

two different types of czech sandstones, namely the mšené sandstone and the kocbeře sandstone, were used in the fracture experiments. the fine-grained mšené sandstone is almost entirely (> 90 vol. %) composed of quartz grains with an average grain size of 0.15 mm. other clastic components consist of quartzite, feldspars, and mica flakes. the rock matrix (ca. 5 vol. %) is formed by kaolinite with very finely dispersed fe-oxyhydroxides (limonite). the degree of secondary silicification is very low. the fine- to medium-grained kocbeře sandstone has a bimodal grain size distribution and mainly consists of monocrystalline quartz grains with an average grain size of 0.24 mm and a maximum grain size of 1.5 mm. quartzite and orthoclase grains occur in a considerably smaller quantity. the matrix and rock cement (10–15 vol. %) are formed by quartz, which predominates over clay matter. secondary silicification is intense. both sandstones used in the experiments are similar in the mineralogical composition of their detrital particles but differ in some inner rock texture features as well as in the mineralogical composition of the interstitial material between the framework grains. these differences are reflected in different values of physical and mechanical properties (table 1).

3. experimental setup and instrumentation

3.1. x-ray ct imaging device

the xt h 225 st industrial x-ray micro-ct system by nikon metrology nv was used to assess the development of the fracture process in fracture toughness tests using chevron bend (cb) test specimens. this x-ray ct scanner is a fully automated apparatus with a rotating scanning system equipped with a microfocus x-ray source, which generates cone-shaped beams.
it is equipped with an x-ray flat-panel detector with 2,000 × 2,000 pixels and a pixel size of 200 μm. the basic technical parameters of the xt h 225 st inspection machine are given, for example, in [17]. the scanning parameters were as follows: a reflection target, 160 kv voltage, 126 μa current, a 0.5 mm thick aluminium filter, 3,141 projections with two images per projection, 1,000 ms exposition, a scanning time of ca. 2 hours, and a cubic voxel size of 16 μm. the ct data generated by the micro-scanner were reconstructed using the ct pro 3d software (nikon metrology nv). the visualisation and analysis software vgstudio max 3.3 (volume graphics gmbh, germany) was used for data post-processing.

3.2. loading device

previous research [17], [18] has focussed on the study of crack propagation processes in quasi-brittle materials, such as silica-based composites or sandstones; these studies have shown some limitations when conventional three-point bending tests were used in combination with the x-ray ct technique. the main disadvantage of such an arrangement of the experiment lies in particular in the horizontal orientation of the sample perpendicular to the rotational axis of the ct scanner, which, together with the loading supports covering parts of the radiograms (see figure 1a), significantly reduces the quality of the acquired data. these shortcomings of the foregoing technical solution have been eliminated with the use of a unique four-point bending loading device which was developed (czech national patent cz 307897) at the institute of theoretical and applied mechanics of the czech academy of sciences in prague (itam cas). contrary to the standard arrangement of three- or four-point bending tests using horizontally oriented samples, this novel approach is based on the vertical orientation of the investigated cylindrical specimen, the longitudinal axis of which is identical to the rotational axis of the ct scanner (figure 1b). this concept significantly reduces differences in the attenuation of x-rays observed during the rotation of the sample subjected to a ct scan. the modular device for four-point bending during micro-ct sequences consists of three main components. a pair of motorised movable supports is integrated with driving units with precision captive stepper linear actuators (23-2210, koco motion dings, usa) and precision linear guideways (mgw12, hiwin, japan). the driving units are equipped with linear encoders (lm10, renishaw inc., united kingdom), ensuring positioning with 1 μm resolution, and load cells (lcm300, futek, usa) with a nominal capacity of 1,250 n. the central part of the device frame exposed to the x-ray beam is manufactured from a carbon fibre composite (mtm57 series epoxy resin, t700s carbon fibres, shell nominal thickness 1.95 mm), providing sufficient frame stiffness and low attenuation of x-rays. the cylindrical load-bearing frame housing the loaded specimen, together with all the other components of the loading device, is manufactured from a high-strength aluminium alloy (en-aw6086-t6).

table 1. basic physical and mechanical properties of mšené and kocbeře sandstones according to various authors (data adopted from [14]-[16]).

parameter | mšené sandstone | kocbeře sandstone
specific (real) density in kg/m3 | 2,620–2,650 | 2,630–2,670
bulk (apparent) density in kg/m3 | 1,850–1,930 | 2,140–2,490
total porosity in % | 26.3–29.7 | 12.0–15.3
water absorption capacity by weight in % | 10.8–13.3 | 2.2–6.1
uniaxial compressive strength (dry sample) in mpa | 21–33 | 56–87
flexural strength (dry sample) in mpa | 0.9–1.9 | 5.9–7.9
the small distance between the x-ray source and the scanned object also allows for the high resolution of the reconstructed 3d images, which is necessary for a detailed tomographic investigation of the loaded sample. the high stiffness of the loading frame and the high-precision control of the loading force during the experiment allow for the interruption of the loading process at any point of both the ascending and descending parts of the rock's load-displacement (f-d) curve without the risk of sudden sample collapse. a more detailed technical description of the in-house four-point measuring device is provided in [19]. the device and its scheme are shown in partial section in figure 2a.

figure 1. principal differences between a conventional three-point bending loading scenario (a) and the new four-point bending test with a vertical orientation of the investigated specimen (b).

figure 2. four-point loading device for 4d micro-ct experiments: (a) longitudinal cross-section of the loading device, (b) loading device attached to the rotation table of the xt h 225 st x-ray micro-ct scanner.

3.3. test specimens and experimental procedure

cylindrical cb specimens with a diameter of 29 mm and a length of approximately 195 mm were drilled from sandstone blocks in the laboratory. the core drilling was carried out parallel to the sandstone bedding planes. in the central part of the test specimen, a chevron edge notch was carved using a circular diamond blade. the width of the chevron notch was 1.4 mm. a prepared cylindrical specimen was inserted into the specimen chamber placed into the x-ray ct inspection system (figure 2b) and centred inside this chamber on the supports. the orientation of the longitudinal axis of the specimen, emplaced in the test-ready position, was identical to that of the rotational axis of the loading device. after the initial contact of the specimen with the loading parts, its stable position was secured by applying a contact force of 5 n. then, an inspection using transmission radiography was performed to verify the correct position of the chevron notch tip and to exclude samples with significant inhomogeneity in the volume of interest in the vicinity of the notch. pre-peak loading was performed in force control mode by prescribing a linear increase of the force and ensuring a uniform load distribution on the supports (outer supports span lout = 179 mm, inner supports span lin = 75 mm). post-peak loading was performed in displacement control mode with a position increment identical for both loading supports to prevent the sudden rupture of the specimen caused by an eventual non-symmetrical response of the specimen. during the loading procedure, the test time, the displacement value, and the loading force were recorded continuously, the displacements and the load of the external support being sampled at 200 hz using in-house developed control software [20]. during the experiment, the loading process was interrupted at four to five loading steps, at which imaging via the x-ray micro-ct scanner was carried out. in the loading sequence, one or two load-steps were performed during the rock's hardening phase, one load-step near the ultimate-stress point, and two or three load-steps during the post-peak softening phase, without the sudden cracking of the specimen. the maximal loading force reached approximately 30 n to 35 n in the case of the mšené sandstone and between 90 n and 100 n when the kocbeře sandstone was tested (figure 3). the force drops in the recorded loading diagrams were caused by material relaxation during the time-lapse tomography scanning. it is also clear from figure 3 that three test specimens were examined for each of the two above-mentioned sandstone types.
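the scan schedule just described (a reference scan at the 5 n contact force, one or two pre-peak steps, one near the peak, and two or three post-peak steps) can be sketched as a simple planner. this is an illustration only: the function and parameter names are ours, not those of the control software [20]:

```python
def plan_load_steps(f_max,
                    pre_peak_fractions=(0.8, 0.9),
                    post_peak_fractions=(0.7, 0.3)):
    # return the target forces (in n) at which loading is paused for ct
    # scanning: a reference scan, the hardening branch (force control),
    # the ultimate-stress point, and the softening branch (displacement
    # control, here expressed as target force levels)
    steps = [5.0]                                       # reference scan at contact force
    steps += [f * f_max for f in pre_peak_fractions]    # ascending branch
    steps.append(f_max)                                 # near the ultimate-stress point
    steps += [f * f_max for f in post_peak_fractions]   # post-peak softening
    return steps

print(plan_load_steps(33.0))
```

with f_max = 33 n these fractions happen to reproduce roughly the scan forces reported for specimen 16057/11 in section 4 (5 n, ca. 26 n, 30 n, 33 n, 24 n, and 9 n).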
one of them, namely specimen 16057/11 from the mšené sandstone, was then selected for a detailed description of the process of crack propagation, which is presented in section 4.

4. results

the measured data acquired from the loading device were subsequently processed in the form of f-d diagrams. in figure 4, the load-displacement curve of one selected test sample is shown. as can be seen from figure 4, a total of six micro-ct measurements were taken at points a to f while loading sample 16057/11. a reference ct measurement (scan a) was realised immediately after the fixation of the rock specimen in the loading device, at a minimal loading of 5 n. two other consecutive measurements were performed at points b and c before reaching the maximum load, i.e., in the ascending portion of the load-displacement plot. specifically, measurement b was made at a loading level of 26 n (i.e., at ca. 80 % of the maximal force) and measurement c at a loading of 30 n, which corresponded to about 90 % of the peak force. measurement d practically corresponded to the ultimate-stress point. the last two measurements, e and f, were made during the post-peak phase, at loads of 24 n and 9 n, respectively.

figure 3. loading curves of the mšené (sample no. 16057) and kocbeře (sample no. 16060) sandstones with well visible loading gaps observed during ct scanning. the f-d diagram of the individual rock sample 16057/11 from the mšené sandstone is presented in detail in figure 4.

figure 4. f-d diagram of a selected rock sample prepared from the mšené sandstone (16057/11) with ct slices obtained at different loading steps (d, e, and f) on the descending part of the loading curve. a macroscopically visible crack path is highlighted by white lines. the red circle in the vertical cross-section of the loaded specimen (upper left corner) defines the area which, in the case of loading steps a, b, and c, is zoomed in on and presented in figure 5.
the presented measurements clearly showed that even such low-strength rock samples, where the peak force reached only 33 n, can be successfully subjected to loading with discrete steps during strain softening without the risk of their sudden collapse. the process of crack propagation during the post-peak behaviour is well visible in the ct images presented in figure 4. generally, it is assumed that the real crack begins to propagate when the load reaches the ultimate-stress point (see e.g., [21]). however, based on previous experience acquired during radiographic measurements in a three-point bending loading scenario [17], our current research was focussed mainly on the identification of the possible manifestation of crack origins and their subsequent growth on the ascending part of the loading curve. as for the pre-peak crack propagation, ct images taken before the load reached the peak value are shown in figure 5. this figure indicates that an apparent crack developing from the crack tip was already present at step c, i.e., at a level of ca. 90 % of the maximal loading force (fmax). when compared to reference step a, some changes related to crack evolution were visible in the sandstone microtexture near the crack tip in scan b (ca. 80 % of fmax). these changes are reflected in the movement of individual quartz grains apart from each other. based on these findings, it can be concluded that, in the case of the studied sandstones, the crack began to propagate from the moment when approximately 80 % of the maximum loading force was applied. however, it should be noted that the crack path was hard to identify in some parts of the reconstructed ct slices due to the heterogeneous sandstone microtexture, especially due to the presence of pores.
this problem can be overcome by using differential tomography, where changes in the object are emphasised by the differentiation of the actual and the reference tomographic reconstructions, as recently described by, e.g., [16] and [19]. the differences between the states at points b and c, respectively, and the initial state (a) are presented in figure 6. the development of the crack during mechanical loading was manifested by an increase of open porosity in the area of the crack spreading through the rock test specimen, which is clearly shown in figure 7. the figure shows the porosity distribution under the tip of the notch in two consecutive loading steps from the tests of the mšené sandstone test specimen no. 16057/11. using a hexahedral mesh grid superimposed on the reconstructed 3d images, where the porosity was evaluated in every hexahedral region of interest as an average over its volume, it can be seen that the macroscopic crack developed before the ultimate-stress point was reached.

figure 5. series of zoomed ct images of a selected mšené sandstone rock sample (16057/11) acquired at the different loading steps (a, b, and c) on the ascending part of the f-d loading curve. the changes in sandstone microtexture related to crack origin are highlighted by white ellipses.

figure 6. visualisation of the development of the crack shape in loading steps b and c using tomographic image subtraction between the actual (loaded) and reference (unloaded) states; i.e., the image on the left represents the difference between steps b and a and the one on the right is a subtraction between the c and a loading steps.

5. conclusions

the study performed on the rock samples prepared from the quartz-rich mšené and kocbeře sandstones showed that crack development and propagation can be successfully observed in 3d thanks to the joint use of a four-point bending procedure and high-resolution 4d micro-ct measurements.
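the two post-processing operations used in section 4, voxel-wise subtraction between the loaded and reference reconstructions and porosity averaged over a hexahedral grid, can be sketched in a few lines. this is a toy demonstration on a synthetic volume; the real work was done on reconstructed ct data in vgstudio max, so the array size, the void threshold, and the "crack" geometry below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(0.5, 0.05, size=(64, 64, 64))   # unloaded state (scan a)
loaded = reference.copy()
loaded[30:34, :, 20:50] -= 0.3                         # a "crack": lowered attenuation

# differential tomography: subtracting the reference reconstruction from the
# loaded one emphasises the changes (new voids) and suppresses static texture
difference = loaded - reference

# per-region porosity on a hexahedral grid: threshold into void/solid, then
# average the void fraction over 16x16x16 blocks of the 64^3 volume
void = (loaded < 0.35).astype(float)
blocks = void.reshape(4, 16, 4, 16, 4, 16)
porosity = blocks.mean(axis=(1, 3, 5))                 # one value per block

print(porosity.shape, round(float(np.abs(difference).max()), 3))   # -> (4, 4, 4) 0.3
```

blocks crossed by the synthetic crack show a clearly elevated void fraction, mirroring the porosity maps of figure 7.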
the four-point bending device newly developed at itam cas allows the study of fracture processes in rocks and similar quasi-brittle materials with high precision. the measurements also showed that the concept of vertically oriented rock samples in four-point bending devices for 4d micro-ct provides considerable advantages over the standard horizontally oriented three-point or four-point bending setups. in the studied sandstones, it was observed that the process of crack propagation starts from the moment when approximately 80 % of the maximum loading force is applied. this outcome is in very good agreement with the results of previous research [16], [17], which was performed on types of sandstone differing in their microtextural features, mineralogical compositions, and related physical and mechanical properties. this research therefore confirmed that the crack, which formed and started to propagate before the peak load was reached, continued to propagate during the post-peak phase. more specifically, the crack length at point d (maximal loading force) was approximately 4.9 mm, which increased to 9.8 mm and 15.2 mm at points e and f, respectively. these crack lengths measured for the post-peak strain-softening phase are very similar to the values published in [19] for a stronger variety of the mšené sandstone. the observation that the process of crack propagation started before reaching the maximum load is consistent with previous knowledge obtained for various types of german, american, or chinese sandstones by means of digital image correlation [24] or acoustic emission techniques [25]-[28]. moreover, this finding is also valid for other quasi-brittle materials, such as concrete, as confirmed by studies of ae signals in both flexural [29] and compressive [30] loading modes.
acknowledgement

the presented work was supported by the project for the long-term strategic development of research organisations (rvo: 68145535) and the operational programme research, development, and education in project inafym (cz.02.1.01/0.0/0.0/16/019/0000766).

figure 7. changes in the distribution of rock porosity in the xy (upper images) and xz (bottom images) planes due to crack development. it should be noted that the open porosity of the intact mšené sandstone measured by mercury intrusion porosimetry reached values between ca. 26 % and 30 %, as reported by, e.g., [19], [22], or [23].

references

[1] isrm commission on testing methods (f. ouchterlony, coordinator), suggested methods for determining the fracture toughness of rock, int. j. rock mech. min. sci. & geomech. abst. 25(2) (1988), pp. 71–96.
[2] b. n. whittaker, r. n. singh, g. sun, rock fracture mechanics: principles, design and applications, elsevier science publishers b.v., amsterdam, the netherlands, 1992, isbn 978-0444896841.
[3] h. haeri, k. shahriar, m. f. marji, p. moarefvand, experimental and numerical study of crack propagation and coalescence in pre-cracked rock-like disks, int. j. rock mech. min. sci. 67 (2014), pp. 20–28. doi: 10.1016/j.ijrmms.2014.01.008
[4] c. a. tang, s. q. kou, crack propagation and coalescence in brittle materials under compression, eng. fract. mech. 61(3-4) (1998), pp. 311–324. doi: 10.1016/s0013-7944(98)00067-8
[5] s. zabler, a. rack, i. manke, k. thermann, j. tiedemann, n. harthill, h. riesemeier, high-resolution tomography of cracks, voids and micro-structure in greywacke and limestone, j. struct. geol. 30(7) (2008), pp. 876–887. doi: 10.1016/j.jsg.2008.03.002
[6] j. b. zhu, t. zhou, z. y. liao, l. sun, x. b. li, r.
chen, replication of internal defects and investigation of mechanical and fracture behaviour of rock using 3d printing and 3d numerical methods in combination with x-ray computerized tomography, int. j. rock mech. min. sci. 106 (2018), pp. 198–212. doi: 10.1016/j.ijrmms.2018.04.022
[7] z. p. bažant, j. planas, fracture and size effect in concrete and other quasibrittle materials, 1st edn. crc press llc, boca raton (fl), usa, 1998, isbn 978-0849382840.
[8] s. p. shah, s. e. swartz, c. ouyang, fracture mechanics of structural concrete: applications of fracture mechanics to concrete, rock, and other quasi-brittle materials, 1st edn. john wiley & sons inc, new york, usa, 1995, isbn 978-0-471-30311-4.
[9] l. vavro, l. malíková, p. frantík, p. kubeš, z. keršner, m. vavro, an advanced assessment of mechanical fracture parameters of sandstones depending on the internal rock texture features, acta geodyn. geomater. 16(2) (2019), pp. 157–168. doi: 10.13168/agg.2019.0013
[10] p. baud, e. klein, t. f. wong, compaction localization in porous sandstones: spatial evolution of damage and acoustic emission activity, j. struct. geol. 26(4) (2004), pp. 603–624. doi: 10.1016/j.jsg.2003.09.002
[11] z. brooks, f. j. ulm, h. h. einstein, environmental scanning electron microscopy (esem) and nanoindentation investigation of the crack tip process zone in marble, acta geotech. 8(3) (2013), pp. 223–245. doi: 10.1007/s11440-013-0213-z
[12] d. lockner, j. d. byerlee, v. kuksenko, a. ponomarev, a. sidorin, quasi-static fault growth and shear fracture energy in granite, nature 350 (1991), pp. 39–42. doi: 10.1038/350039a0
[13] e. townend, b. d. thompson, p. m. benson, p. g. meredith, p. baud, r. p. young, imaging compaction band propagation in diemelstadt sandstone using acoustic emission locations, geophys. res. letters 35(15) (2008), l15301. doi: 10.1029/2008gl034723
[14] v.
rybařík, noble building and sculptural stone of the czech republic, industrial secondary school of stonework and sculpture in hořice, hořice, czech republic, 1994, isbn 80900041-5-6 [in czech].
[15] p. koutník, p. antoš, p. hájková, p. martinec, b. antošová, p. ryšánek, j. pacina, j. šancer, j. ščučka, v. brůna, decorative stones of bohemia, moravia and czech silesia, jan evangelista purkyně university in ústí nad labem, ústí nad labem, czech republic, 2015, isbn 978-8074149740 [in czech].
[16] d. vavrik, p. benes, t. fila, p. koudelka, i. kumpova, d. kytyr, m. vopalensky, m. vavro, l. vavro, local fracture toughness testing of sandstone based on x-ray tomographic reconstruction, int. j. rock mech. min. sci. 138 (2021), art. no. 104578. doi: 10.1016/j.ijrmms.2020.104578
[17] l. vavro, k. souček, d. kytýř, t. fíla, z. keršner, m. vavro, visualization of the evolution of the fracture process zone by transmission computed radiography, procedia eng. 191 (2017), pp. 689–696. doi: 10.1016/j.proeng.2017.05.233
[18] i. kumpová, m. vopálenský, t. fíla, d. kytýř, d. vavřík, m. pichotka, j. jakůbek, z. keršner, j. klon, s. seitl, j. sobek, on-the-fly fast x-ray tomography using cdte pixelated detector – application in mechanical testing, ieee trans. nucl. sci. 65(12) (2018), pp. 2870–2876. doi: 10.1109/tns.2018.2873830
[19] p. koudelka, t. fila, v. rada, p. zlamal, j. sleichert, m. vopalensky, i. kumpova, p. benes, d. vavrik, l. vavro, m. vavro, m. drdacky, d. kytyr, in-situ x-ray differential micro-tomography for investigation of water-weakening in quasi-brittle materials subjected to four-point bending, materials 13(6) (2020), art. no. 1405. doi: 10.3390/ma13061405
[20] v. rada, t. fila, p. zlamal, d. kytyr, p. koudelka, multi-channel control system for in-situ laboratory loading devices, acta polytech. ctu proc. 18 (2018), pp. 15–19. doi: 10.14311/app.2018.18.0015
[21] m. d. wei, f. dai, n. w. xu, t.
zhao, stress intensity factors and fracture process zones of isrm-suggested chevron notched specimens for mode i fracture toughness testing of rocks, eng. fract. mech. 168(a) (2016), pp. 174–189. doi: 10.1016/j.engfracmech.2016.10.004 [22] j. desarnaud, h. derluyn, l. molari, s. de miranda, v. cnudde, n. shahidzadeh, drying of salt contaminated porous media: effect of primary and secondary nucleation, j. appl. phys. 118(11) (2015), art. no. 114901. doi: 10.1063/1.4930292 [23] z. pavlik, p. michalek, m. pavlikova, i. kopecka, i. maxova, r. cerny, water and salt transport and storage properties of mšené sandstone, constr. build. mater. 22(8) (2008), pp. 1736–1748. doi: 10.1016/j.conbuildmat.2007.05.010 [24] q. lin, j. f. labuz, fracture of sandstone characterized by digital image correlation, int. j. rock mech. min. sci. 60 (2013), pp. 235– 245. doi: 10.1016/j.ijrmms.2012.12.043 [25] t. backers, s. stanchits, g. dresen, tensile fracture propagation and acoustic emission activity in sandstone: the effect of loading rate, int. j. rock mech. min. sci. 42(7–8) (2005), pp. 1094–1101. doi: 10.1016/j.ijrmms.2005.05.011 [26] w. li, f. u. a. shaikh, l. wang, y. lu, k. wang, z. li, microscopic investigation of rate dependence on three-point notched-tip bending sandstone, shock vib. (2019), art. no. 4525162. doi: 10.1155/2019/4525162 [27] h. zhang, d. fu, h. song, y. kang, g. huang, g. qi, j. li, damage and fracture investigation of three-point bending notched sandstone beams by dic and ae techniques, rock mech. rock eng. 48(3) (2015), pp. 1297–1303. doi: 10.1007/s00603-014-0635-4 [28] w. k. zietlow, j. f. labuz, measurement of the intrinsic process zone in rock using acoustic emission, int. j. rock mech. min. sci. 35(3) (1998), pp. 291-299. doi: 10.1016/s0148-9062(97)00323-9 [29] j. f. labuz, s. cattaneo, l. h. chen, acoustic emission at failure in quasi-brittle materials, constr. build. mater. 15(5–6) (2001), pp. 225–233. doi: 10.1016/s0950-0618(00)00072-6 [30] d. l. 
carnì, c. scuro, f. lamonaca, r. s. olivito, d. grimaldi, damage analysis of concrete structures by means of acoustic emissions technique, compos. b. eng. 115 (2017), pp. 79–86. doi: 10.1016/j.compositesb.2016.10.031

primary shock calibration with fast linear motor drive

acta imeko issn: 2221-870x december 2020, volume 9, number 5, 383-387

h. volkers1, h. c. schoenekess1, th. bruns1

1 physikalisch-technische bundesanstalt, braunschweig, germany, henrik.volkers@ptb.de

abstract: this paper describes the implementation of a new, fast and precise linear motor drive for ptb's primary shock calibration device. this device is used for monopole shock calibrations of accelerometers using the "hammer-anvil" principle according to iso 16063-13:2001 [1] and operates in a peak acceleration range from 50 m/s² to 5000 m/s².
the main challenge of implementing this kind of shock generator is accelerating a hammer to velocities of up to 5 m/s within distances of less than 70 mm. in this paper, a few helpful improvements are described which lead to an enhanced repeatability of pulse generation over the full shock intensity range as well as a substantial decrease of harmonic disturbing signals.
keywords: primary shock calibration; hammer-anvil principle; half-sine pulse; pulse transmission; ldv interferometry; linear motion drive; magnetic actuator
1 introduction
ptb’s original shock exciter, which was designed for low to medium acceleration intensities, mainly consists of a mechanical spring unit, an air-borne transmission element (the “hammer”) and an air-borne measuring unit (the “anvil”) to which the accelerometer to be calibrated is attached. the released spring pushes the hammer, which subsequently strikes the anvil and accelerates it. exchangeable elastic pads (pulse shapers) between hammer and anvil allow certain shock shapes to be attained and disturbing oscillations to be reduced. using the spring drive unit, shock peak values can be varied between 100 m/s² and 5000 m/s², whereas the shock duration is between 1 ms and 8 ms. different sets of the modules (hammer, anvil and spring unit) are available, allowing different combinations of peak values and shock durations to be excited. the acceleration is measured by applying two laser doppler vibrometers (ldvs). after signal conditioning, both photo detector signals and the sensor measuring chain signal are simultaneously captured. following the signal demodulation, the primary measured acceleration peak and the sensor peak are used to determine the shock sensitivity according to
𝑆sh = 𝑢peak / 𝑎peak . (1)
by definition [1], the shock sensitivity ssh is calculated as the quotient of the output charge peak value of the accelerometer and the peak value of the interferometrically measured shock acceleration (evaluation in the time domain).
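the peak-ratio evaluation of equation (1) can be sketched in a few lines. this is purely illustrative: the sampled signals, the assumed sensitivity value and the pulse parameters below are hypothetical stand-ins, not ptb's actual demodulated data or processing chain.

```python
import math

S_TRUE = 2.2        # pc/(m/s²), assumed charge sensitivity (illustrative value)
A_PEAK = 1970.0     # m/s², example peak acceleration

def sin2_pulse(a_peak, n=2000):
    """sampled sin²-shaped acceleration pulse, the shape produced by the pulse shapers."""
    return [a_peak * math.sin(math.pi * i / (n - 1)) ** 2 for i in range(n)]

a = sin2_pulse(A_PEAK)              # stands in for the interferometrically measured acceleration
u = [S_TRUE * ai for ai in a]       # ideal (noise-free) accelerometer charge output

s_sh = max(u) / max(a)              # eq. (1): s_sh = u_peak / a_peak
```

with ideal signals the peak ratio recovers the assumed sensitivity exactly; in the real device, noise, filtering and the demodulation step determine how well this works.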
it is dependent on the impact spectrum, the duration of impact and the peak acceleration value.
2 the initial configuration
the original driving mechanism to accelerate the hammer was realized by a spring-driven shaft with a free running length of up to 30 mm (cf. figure 1). by manually setting the spring preload, the resulting hammer velocity was adjusted to generate the desired peak acceleration of the anvil. repeated loading and releasing of the spring was performed by motor-driven mechanics. a set of three spring modules with different stiffnesses provides the peak acceleration ranges needed. all of these spring units suffered from substantial drawbacks:
• the highest force acts at the start of motion, which results in a strong jerk. this introduces high-frequency vibrations and excites resonances in the hammer and the anvil.
• the mechanical stop of the spring unit induces additional vibrations into the system.
• the force set-point has to be adjusted mechanically by hand.
• the repeatability of adjusted shock intensities is only moderate.
• three different spring units have to be employed to cover the 100 m/s² to 5 km/s² range, and shock intensities below 100 m/s² could not be achieved at all.
• mechanical wear of the springs, the locking knife edges and the release mechanism required frequent maintenance and readjustment; this was the main cause of the poor repeatability.
• the calibration process involved a lot of manual interaction.
the design and realization of this unit was the first of its type and dates from the 1990s [2, 3]. other nmis have since set up similar devices, sometimes with different drive methods [e.g. 4, 5]. to address the issues listed above, a project was started specifically to update the driving unit.
the complete mechanical set-up of the calibration device is shown in figure 1. the manually adjustable spring unit is at the bottom. the air-borne hammer (the projectile) in the centre has a rectilinear motion distance of roughly 50 mm and is stopped after the impact with the anvil by mechanical dampers. the air-borne anvil (the target) at the top of the photograph has a free-motion distance of about 8 mm before it is halted by a pneumatic brake.
figure 1: mechanics of the shock calibration device
the impulse hammer element with a mass of approx. 0.25 kg hits the 0.25 kg anvil element with the mounted accelerometer (0.038 kg). the shock pulse is transmitted within a time that is shorter than 8 milliseconds. when the shortest duration (roughly 1 ms) is used, high accelerations can be achieved. the strongest spring with a maximum force of 400 n achieved more than 5 km/s² at the sensor’s reference surface, while the softest spring, with a minimum preload force of about 10 n, reached 100 m/s². via a suitably elastic pulse shaper at the tip of the hammer, a sin² acceleration-time curve is generated and sensed by the accelerometer (cf. figure 2). several different pulse shapers made of rubber could be attached to the hammer to obtain the desired pulse duration and shape.
figure 2: hammer and anvil heads with dut
as discussed above, the mechanical spring unit proved to be a weak point in the old design. during the relaxation of the spring, strong vibrations were transmitted into the entire mechanical system, which led to disturbances. in addition, the knife-edge mechanics of the release mechanism suffered from wear and tear and caused poor repeatability during calibration runs. the goal of this project was to overcome these issues using modern linear drive technology.
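as a back-of-the-envelope plausibility check of these pulse parameters (not part of the original paper): integrating the sin² pulse a(t) = a_peak·sin²(πt/T) over its duration T gives the velocity change Δv = a_peak·T/2 imparted to the anvil.

```python
import math

def sin2_velocity_change(a_peak, duration, n=100_000):
    """numerically integrate a(t) = a_peak * sin^2(pi * t / T) over one pulse
    (midpoint rule) to obtain the velocity change of the struck body."""
    dt = duration / n
    return sum(
        a_peak * math.sin(math.pi * (i + 0.5) / n) ** 2 for i in range(n)
    ) * dt

# 5000 m/s² peak at roughly 1 ms duration, values quoted in section 2
dv = sin2_velocity_change(a_peak=5000.0, duration=1e-3)
```

the closed form a_peak·T/2 gives 2.5 m/s here, consistent with anvil speeds of a few m/s for the strongest shocks.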
3 the new drive unit
3.1 requirements and design
in order to ensure the full operation of the customer calibration service, it was necessary to retain the capability of converting back to the old drive during the development and testing period. this resulted in dimensional design constraints. by applying the laws of basic mechanics, the required mechanical specifications were easily derived from the moving masses, impact speeds and dimensions involved. subsequent market research covering mechanical, pneumatic, hydraulic and electro-magnetic actuators led to the decision to use an electric linear motor drive. the selected solution is a type ps01-23x160hhp-r linear motor with a pl01-12x350/310-hp magnetic slider as well as a type c1100 standard closed-loop controller and a linmot® (nti ag) two-phase power supply1 [6]. the package also includes software for the controller. this kind of drive was originally designed for so-called “fast pick and place” operation in industrial production lines. the use of such industrial equipment means that only moderate hardware costs are involved. in order to implement the drive, some modifications had to be applied to the calibration device, such as:
• construction of a damped stop for the runner of the linear motor (pusher).
• a prolongation of the hammer and an increase of the distance between the hammer and anvil air bearings.
• an increase of the hammer’s mass (cf. figure 3).
the last two actions permitted the working length of the drive to be enlarged to about 72 mm. in the original set-up, similar hammer and anvil dimensions were chosen for an optimal momentum transfer during impact. this, however, also resulted in similar first modal resonances around 10 khz, which in turn favoured the propagation of disturbing oscillations during impact. the new hammer design reduces this effect by means of a (two times) heavier weight. table 1 summarizes the modified parameters.
table 1: specifications of both impact generators
parameter | old state | new state
hammer weight | 0.296 kg | 0.595 kg
anvil weight | 0.296 kg | 0.296 kg
max. travel hammer | 50 mm | 72 mm
max. travel anvil | 4 to 7 mm | 7 to 8 mm
max. velocity hammer | 5 m/s | 5 m/s
max. velocity anvil | 5 m/s | 6 to 7 m/s
shock duration range | 1 to 8 ms | 1 to 8 ms
max. drive force | 400 n | 138 n
max. shock acceleration | 5000 m/s² | 5000+ m/s²
these system modifications are a consequence of changing the hammer weight and travel parameters. the main improvements are the higher anvil velocity and, at the same time, the reduction of the drive force to less than half. the latter greatly reduced the jerk and thus the issue of motion disturbances.
1 commercial components are merely identified in this paper to adequately specify the experimental set-up. naming these products does not imply a recommendation by ptb, nor does it imply that the equipment identified is necessarily the best available for the purpose.
3.2 quasi-elastic central impact calculation
the selected linear motor with a maximum propulsive force of f_mot = 138 n serves as the new mechanical shock generator. the stroke s has been extended to 72 mm. the total mass to be accelerated consists of the runner’s mass m_sl = 0.280 kg and the mass of the new air-borne hammer, m_ha = 0.595 kg. the mass of the anvil to be pushed by the hammer is m_an = 0.296 kg. the initial velocity reached by the accelerated hammer before impact is
v_ha = √(2 ∙ f_mot ∙ s / (m_sl + m_ha)) . (2)
it attains up to 5 m/s. the moving hammer m_ha pushes the resting anvil m_an and transfers a large part of its momentum to it. the mass ratio is m_ha/m_an ≈ 2 and the anvil is initially at rest. applying the basic conservation equations for an elastic central impact, the speeds of the hammer and anvil after the impact become
v′_ha = (m_ha − m_an) / (m_ha + m_an) ∙ v_ha (3)
and
v′_an = 2 ∙ m_ha / (m_ha + m_an) ∙ v_ha (4)
or, with the given mass ratio,
v′_ha = 1/3 ∙ v_ha and v′_an = 4/3 ∙ v_ha . (5)
this transformation moves the anvil to a one-third higher speed than the hammer’s initial velocity.
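the impact arithmetic of equations (2) to (5) can be checked with a short stand-alone script (an illustrative sketch, not from the paper; masses, force and stroke as given above):

```python
import math

F_MOT = 138.0   # n, maximum propulsive force of the linear motor
S = 0.072       # m, stroke
M_SL = 0.280    # kg, runner (slider) mass
M_HA = 0.595    # kg, new hammer mass
M_AN = 0.296    # kg, anvil mass

# eq. (2): hammer velocity at the end of the stroke
v_ha = math.sqrt(2 * F_MOT * S / (M_SL + M_HA))

# eqs. (3) and (4): elastic central impact against the resting anvil
v_ha_after = (M_HA - M_AN) / (M_HA + M_AN) * v_ha   # roughly v_ha / 3
v_an_after = 2 * M_HA / (M_HA + M_AN) * v_ha        # roughly 4/3 * v_ha

# inserting (2) into (4) and maximizing over the hammer mass by grid search
# reproduces the quoted optimum of about 0.581 kg:
def v_an(m_ha):
    v = math.sqrt(2 * F_MOT * S / (M_SL + m_ha))
    return 2 * m_ha / (m_ha + M_AN) * v

m_opt = max((0.3 + 0.0001 * i for i in range(7000)), key=v_an)
```

with the realized masses this gives a hammer velocity of about 4.8 m/s and an anvil speed of about 6.4 m/s, consistent with the "6 to 7 m/s" entry in table 1.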
for an optimal hammer mass, one could insert (2) into the right-hand side of (4) and solve for the maximum value. this gives an optimum hammer mass of 0.581 kg when all boundary conditions are considered, which is quite close to the realized new hammer. figure 3 in the next section shows the new shock generator set-up.
3.3 repeatability
an additional advantage of this linear motor drive set-up is the option of preselecting an accurate target value by setting a potentiometer or by using the software interface for digital adjustment. the controller of the linear motor can be parametrized and controlled via two serial interfaces by the supplied vendor software or by a freely available labview driver which can be adapted to individual needs. for any given setting, the spread of the repeatedly realized acceleration peak values was drastically reduced compared to the original spring drive module. a comparison is depicted in figure 4. for both drives, the absolute range around the nominal value increases linearly with the shock intensity. however, for the new linear drive, the spread is more than an order of magnitude lower than before.
figure 3: shock generator for 5000 m/s² with finely controllable linear motor drive
figure 4: comparison of the statistical spread of acceleration intensity of spring drive vs. linear motor drive
3.4 signal quality
the disturbances due to modal resonances of the anvil are reduced by the redesign of the hammer. while the resonance frequency of both the hammer and the anvil was about 11.5 khz in the old set-up, the first longitudinal resonance of the new hammer is now at 13 khz. because the design and materials were otherwise identical, this difference can be attributed to the elongated geometry and the solid design (no axial holes).
this disparity greatly reduces the transmission of ringing vibrations from the hammer to the reference surface of the accelerometer. in combination with the largely reduced jerk in the driving motion itself, ringing is no longer an issue in the signal shape. as an example, figure 5 shows an accelerometer output signal (filtered at 100 khz) for an acceleration peak value of 1.97 km/s².
figure 5: raw signal of a half-sine shock measurement by an endevco 2270 accelerometer
4 summary
the primary monopole shock acceleration calibration facility at ptb has been successfully upgraded with an industrial electric linear motor as the driving unit for the hammer. the change in the driving mechanics and geometry involved adaptations to the hammer and anvil configuration, too. after successful implementation the following improvements were noticeable:
1. a strong reduction in disturbing vibrations due to a lower, but constant, force level generated by the new drive.
2. a very strong reduction in the transfer of mechanical ringing from the hammer to the anvil due to the weight disparity between the two.
3. great improvement of the repeatability of the shock intensity.
4. analog electrical or digital control of the set-point for the generated shock acceleration.
5. a greatly reduced need for maintenance, since the drive is practically wear-free and has very little friction.
5 outlook
based on the newly implemented drive, a shift from manual operation to automated performance of the low-intensity shock calibration seems feasible in the near future. the control options via a digital interface (rs-485) using labview or other programming languages, together with the excellent repeatability, allow, in principle, unsupervised operation. however, in the current set-up the pulse shaper remains a component with unknown reproducibility.
in order to achieve autonomous operation, some type of self-adjusting algorithm is necessary. first steps on the path towards such methods have been taken here and will be the topic of future publications.
6 references
[1] iso 16063-13:2001, methods for the calibration of vibration and shock transducers, part 13: primary shock calibration using laser interferometry. online: https://www.iso.org/standard/27075.html
[2] h.-j. von martens, a. taeubner, w. wabinski, a. link, h.-j. schlaak, laser interferometry as tool and object in vibration and shock calibrations, proc. spie 3411, third international conference on vibration measurements by laser techniques: advances and applications, 1 june 1998. doi: 10.1117/12.307700
[3] h.-j. von martens, ptb vibration and shock calibration, royal swedish academy of engineering sciences (iva), stenungsund, sweden, 15-16 september 1993, 8 pages.
[4] h. nozato, w. kokuyama, a. ota, improvement and validity of shock measurements using heterodyne laser interferometer, measurement 77 (2016), pp. 67-72.
[5] h. nozato, t. usuda, a. ota, t. ishigami, k. kudo, development of shock acceleration calibration machine in nmij, imeko 20th tc3, 3rd tc16 and 1st tc22 international conference “cultivating metrological knowledge”, merida, mexico, 27-30 november 2007.
[6] linmot webpage: https://linmot.com/
development of 1.6 gpa pressure-measuring multipliers
acta imeko june 2014, volume 3, number 2, 54 – 59, www.imeko.org
w. sabuga1, r.
haines2
1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 fluke calibration, 4765 east beautiful lane, phoenix, az 85044-5318, usa
abstract: two 1.6 gpa pressure-measuring multipliers were developed and built. feasibility analysis of their operation up to 1.6 gpa, parameter optimisation and prediction of their behaviour were performed using finite element analysis (fea). their performance and metrological properties were determined experimentally at pressures up to 500 mpa. the experimental and theoretical results are in reasonable agreement. with the results obtained so far, the relative standard uncertainty of the pressure measurement up to 1.6 gpa is expected to be not greater than 2·10⁻⁴. with this new development the range of the pressure calibration service in europe can be extended up to 1.5 gpa.
section: research paper
keywords: high pressure standards, pressure multipliers, finite element analysis, pressure transducers, calibration
citation: wladimir sabuga, rob haines, development of 1.6 gpa pressure-measuring multipliers, acta imeko, vol. 3, no. 2, article 13, june 2014, identifier: imeko-acta-03 (2014)-02-13
editor: paolo carbone, university of perugia
received april 15th, 2013; in final form august 16th, 2013; published june 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this research was jointly funded by the emrp participating countries within euramet and the european union.
corresponding author: wladimir sabuga, e-mail: wladimir.sabuga@ptb.de
1. introduction
new high pressure technologies such as autofrettage, hydroforming and isostatic pressing are being intensively developed and used in the automotive industry, diesel engineering, vessel production for the petrochemical and pharmaceutical industry, water cutting machine manufacture, new material fabrication and, recently, for food sterilisation. new transducers for measuring pressures up to 1.5 gpa have recently been developed and are offered by several manufacturers. the use of these high pressure transducers requires their calibration and, thus, the existence of appropriate reference pressure standards traceable to the international system of units. the operation range of the pressure standards in west europe is limited to 1.4 gpa. the creation of new primary pressure standards up to 1.6 gpa and the establishment of a calibration service up to 1.5 gpa is the objective of a joint research project (jrp) "high pressure metrology for industrial applications" within the european metrology research programme (emrp) [1, 2]. ptb and fluke calibration (fluke) have jointly developed and built two 1.6 gpa pressure-measuring multipliers to extend the pressure scale and the calibration range as required.
2. principle and key features of the pressure multipliers
the operation principle of a pressure-measuring multiplier is explained in figure 1. the multiplier includes a low-pressure (lp), pl, and a high-pressure (hp), ph, piston-cylinder assembly (pca) which have significantly different effective areas. the lp and hp pcas are axially aligned and their pistons are mechanically coupled. both the lp and hp pistons are unsealed in the cylinders and are rotated, which, due to the lubrication effect, avoids mechanical friction between the pistons and cylinders. consequently, in the absence of other forces, the forces due to the pressures ph and pl on the pistons are balanced when the ratio of the pressures equals the inverse ratio of the effective areas of the lp and hp pcas, alp and ahp:
ph / pl = alp / ahp . (1)
the high pressure, ph, can thus be determined by accurately measuring pl and by knowing the exact ratio of alp to ahp, also called the multiplying ratio (km). the principle of the pressure-measuring multiplier has been in use at least since the 1930s and is utilised in current practice, for example in the 1.5 gpa national pressure standard of russia, vniiftri [3]. a 1 gpa pressure multiplier has been commercially offered since the late 1980s and is used by national metrology institutes and calibration laboratories as a secondary or transfer standard [4]. in the newly developed multiplier, the nominal effective areas of the lp and hp pcas were chosen to be 1 cm² and 5 mm², respectively. these are dimensions for which production technology is well established and that result in a pressure ratio of 1:20. thus, a pressure, ph, of 1.6 gpa on the hp side of the multiplier is reached at pl = 80 mpa, which is easily generated and measured with high accuracy. the design of the new multiplier has specific features which distinguish it from that of the former multipliers. first, to avoid plastic deformation and to guarantee stability of the effective areas, the components of the lp and hp pcas are made of tungsten carbide with 6% or 10% (hp piston) cobalt (wc-co) instead of the steel used in [3]. since the tensile strength of the tungsten carbide is limited to roughly 0.7 gpa, the hp cylinder can be operated at the maximum pressure of 1.6 gpa only if it is supported from outside. thanks to a special design of the multipliers, a compressive load is established on the outside of the hp tungsten carbide cylinder which prevents rupture when the pressure inside the cylinder exceeds the material tensile strength. in [4] this is accomplished by fitting a sleeve around the cylinder.
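the multiplying-ratio arithmetic from section 2 can be tied together in a few lines (an illustrative sketch using the nominal areas given in the text; real effective areas are pressure dependent, which is what the distortion coefficient λ and the fea in section 3 address):

```python
# nominal effective areas as given in the text
A_LP = 1.0e-4   # m², low-pressure pca (1 cm²)
A_HP = 5.0e-6   # m², high-pressure pca (5 mm²)

K_M = A_LP / A_HP                 # multiplying ratio km, cf. eq. (1)

def high_pressure(p_l):
    """ideal multiplied pressure, neglecting elastic distortion of the pcas."""
    return K_M * p_l

p_h = high_pressure(80e6)         # 80 mpa on the lp side -> 1.6 gpa on the hp side
```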
in order to extend the pressure range to 1.6 gpa, in the new multiplier two sleeves, each made of chrome/nickel/molybdenum steel, are successively assembled onto the tungsten carbide hp cylinder by means of thermal shrink fits. in addition, the hp cylinder with the two sleeves is set into a jacket which allows a jacket pressure (pj) to be applied to the lateral surface of the outer sleeve and, thus, to additionally compensate the tensile stress in the cylinder (figure 2). the hp pca is designed to be operated in controlled-clearance (cc) mode with pj typically equal to 25% of ph, at which the pressure distortion coefficient (λ) of the pca should be around zero. in addition, the hp pca may be operated with variable pj in order to adjust the piston fall rate (vf) and the pca sensitivity, if necessary, as well as to study λ experimentally by measuring the dependences of vf and ahp on pj. for optimal and stable operation of a pca it is desirable that the pressure in the piston-cylinder gap changes linearly from its maximum value at the gap inlet to the ambient pressure at the gap outlet. such a pressure distribution is difficult to realise in the case of cc hp pcas having a nominally constant gap in the pressure-free state because, under pressure, the piston-cylinder gap becomes extremely small in the outlet region due to a cross-sectional expansion of the axially loaded piston and a simultaneous reduction of the cylinder bore due to the jacket pressure [5]. in [3], where the pcas are operated in the re-entrant mode, which produces an even stronger contraction of the cylinder than the cc mode, the problem was solved by giving the cylinder bore a flare-like shape with a diameter at the outlet a few micrometres larger than at the inlet. such a manufacturing strategy is extremely difficult and generally leads to large widths and irregularities of the piston-cylinder gap. in the new multiplier, the problem is overcome by giving the outer surface of the inner sleeve a variable shape.
in the lower part, where the pressure inside the cylinder and in the pca gap is much larger than the ambient pressure, the inner sleeve has a cylindrical shape. in the upper part, where the pressure in the gap approaches the ambient pressure, the inner sleeve has a conical shape with the diameter at the top 0.3 mm smaller than the diameter of the cylindrical part. this results in a tapered gap between the inner and outer sleeves which reduces the action of pj on the upper part of the inner sleeve. therefore, an excessive concentration of the pressure gradient in the piston-cylinder clearance towards the outlet of the cylinder is avoided and an acceptable flow rate of the pressure-transmitting liquid between the piston and the cylinder is provided. the optimal shape of the inner sleeve was determined by finite element analysis (fea) as described in the next section.
figure 1. operation principle of a pressure multiplier (low pressure and high pressure sides).
figure 2. hp pca in the mounting post; labelled parts: piston, cylinder, inner sleeve, outer sleeve, jacket, hp tube, collar, gland, sleeve nut, jacket pressure connection, thermometer location, jacket pressure channel.
the lp pca was designed keeping in mind the requirement of a sufficiently low fluid flow through the piston-cylinder gap. this requirement results from the relatively large effective area of the lp pca compared to the area of the pressure balance pca maintaining and measuring pl. usually, pcas used in the range of 80 mpa have nominal effective areas of 0.1 cm², which is ten times smaller than alp. an excessive flow rate in the lp pca would cause a high piston fall rate of the reference pressure balance, which could increase uncertainties or result in insufficient time for a stable ph. to limit the flow rate and optimize performance, the principle of negative free
this principle is well proven and is used in fluke gas high pressure balances [6], providing low fall rates at higher pressures and high sensitivity at lower pressures. in the lp pca design, the lp cylinder is surrounded by a sleeve with a conical taper on the inside surface and pl is applied to the outside surface of the sleeve. the lp sleeve has a sliding fit on the cylinder and is positioned so that its smallest diameter is located where the cylinder pressure is maximal. in the absence of pressure, the sleeve produces no stress on the cylinder. as pressure increases, loading the cylinder from inside and the sleeve from outside, the sleeve first comes in contact with the cylinder in the region where the pressure in the pistoncylinder gap is maximal. in this way, a variable outside load of the cylinder is created that optimally compensates the radial distortion of the cylinder produced by the inner pressure. with this variable outside load distribution a nearly linear pressure distribution in the lp pca gap is achieved. 3. finite element analysis to analyse feasibility of the pressure multipliers' operation up to 1.6 gpa and to optimise dimensions of the pcas components they were modelled using fea. the modelling was performed using two different fea software packages, ansys at ptb and cosmos/works at fluke. in this way, correctness of the calculations was verified by analysing the same problem. additionally, the analyses were performed for different problems to get complementary information on the multipliers performance. the fea included large deflection, contact and plastic capabilities, the latter required for the tube connected to the hp pca. first, all parts were modelled with their material properties and assumed geometries. for the hp pca the shrinking of the inner and then of the outer sleeve on the tungsten carbide cylinder, and connection of the hp tube to the hp cylinder was modelled. 
the cylinder-to-inner-sleeve shrink fit was performed first, using nominal geometry. the deformation of the outer surface of the inner sleeve after this step was noted. in production, this surface is re-machined after the initial shrink fit. to simulate this, the outer diameter of the inner sleeve was reduced by the amount of the deformation, so that the geometry after this initial shrink step gives a good representation of the geometry that results in production. the inner-to-outer-sleeve shrink fit was performed second, using the resulting cylinder/inner sleeve combination with the nominal geometry of the outer sleeve. the shrink fit of the taper in the inner sleeve was accomplished in the same manner as the other shrink-fit surfaces. the amount of contact of the surfaces was determined iteratively in the analysis, a step performed automatically by the fea software. three outside shapes of the inner sleeve were numerically tested for their effect on the stress, the pressure distribution in the piston-cylinder gap and λ. the connection of the hp tube to the cylinder and the deformation of the tube under pressure were studied. a tube tip angle of 59.5° and a matching cylinder cone angle of 60° were selected. the tube was moved into the cylinder to obtain contact along the whole length of the cylinder cone, and a pressure of 1.6 gpa was applied. surface loads were applied in various combinations. the loads included 50% or 100% of the maximum measurement pressure on the relevant surfaces, a linear and, alternatively, a constant pressure distribution in the piston-cylinder gap, as well as a jacket pressure on the outer surface of the hp pca sleeve and a lp on the outer surface of the lp pca sleeve. after each load step, the radial deformation and the radial and tangential stresses were extracted. the stress and strain distributions were analysed in relation to the ultimate tensile strength (sut) and the elastic limit (sy) of the cylinder, sleeve and hp tube materials.
these properties, together with the young's modulus (e) and the poisson ratio (µ) based on information from the materials' manufacturers and literature data, are compiled in table 1. in addition, the ultimate compressive strength of the wc materials is known to be extremely high, about 7 gpa. later, the e and µ values were accurately measured using resonant ultrasound spectroscopy [7].
table 1. material properties.
part / material | e/gpa | µ | sy/gpa | sut/gpa
lp pca, hp cylinder / wc-6%co | 620 | 0.218 | – | ≈0.7
hp piston / wc-10%co | 560 | 0.218 | – | –
lp & hp sleeves / cr-ni-mo steel | 200 | 0.3 | 1.2 | 1.4
hp tube / austenitic steel | 200 | 0.3 | 1.053 | 1.216
after the shrink fit of the inner sleeve on the cylinder, a tangential stress of about -600 mpa (compression) was achieved at the inside of the cylinder. the tangential stress distribution at the cylinder inside after the subsequent shrink fit of the outer sleeve is shown in figure 3. at any point, the absolute value of the stress is much lower than the ultimate compressive strength of the wc materials (7 gpa). in the upper part of the pca, the absolute value of the stress becomes lower, which results from the conical shape of the outer surface of the inner sleeve. this corresponds to the intended reduction of the outside support in the region of the internal pressure drop. the dashed line in figure 3 shows the tangential stress calculated analytically under the assumption of a cylindrically perfect cylinder and sleeves.
figure 3. tangential stress at the hp cylinder inside after the shrink fit of the two sleeves, calculated with fea (st) and analytically (st,calc).
both the fea and the analytical results demonstrate that the double shrink will to a great extent compensate for the stress produced by the internal pressure of 1.6 gpa. for a linear pressure distribution in the gap, the maximum residual stress produced by the shrinking and the internal pressure would be about 400 mpa, which could be withstood by the wc cylinder even in the absence of jacket pressure. however, in order to minimize the risk of cylinder rupture, jacket pressure is expected to be applied in all normal system operation. figure 4 presents the fea model of the hp pca and the tangential stresses in the pca components at ph = 1.6 gpa, a linear pressure distribution from 1.6 gpa to zero along the piston-cylinder gap and pj = 0.4 gpa. the fea calculations for the hp pcas at the maximum measurement pressure of 1.6 gpa and the jacket pressure of 400 mpa show that the radial and tangential stress distributions are smooth and without any significant concentrations, the cylinder is subject only to compressive stresses, and the sleeves remain within the elastic limit. these results for the hp pca indicate that the design is at risk neither of cylinder rupture nor of instability of the effective area due to plastic deformation in the sleeves. the calculations also confirm the necessity of having two sleeves on the hp cylinder in order to achieve the required cylinder compression when the temperature for the thermal shrink is kept below the tempering temperature of the sleeve material. the analysis of the tube demonstrates that only a small portion of the tube near the centre line is subject to plastic deformation. in the tapered part of the tube, in the sections where the tube is not supported by the cylinder, the region of plastic deformation does not exceed 1/3 of the tube cross section. these results indicate a reliable connection between the hp pca and the tube at pressures up to 1.6 gpa. it is necessary to note that the fea was performed assuming the ultimate strength and the elastic limit each to be the same for tensile and compressive deformations, based on the available manufacturer data. this might produce inaccuracies in the fea results if these material properties were different for tension and compression.
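the kind of analytical cross-check mentioned above (perfect cylinders, no fea) rests on the classical lamé formulas for a thick-walled cylinder. the sketch below uses assumed round-number radii for illustration only, not the actual pca dimensions; it shows why the bare wc cylinder needs external support:

```python
def bore_hoop_stress_internal(p, a, b):
    """lamé: tangential (hoop) stress at the bore r = a of a thick-walled
    cylinder (inner radius a, outer radius b) under internal pressure p."""
    return p * (a**2 + b**2) / (b**2 - a**2)

def bore_hoop_stress_external(q, a, b):
    """lamé: tangential stress at the bore under external pressure q (compressive)."""
    return -2.0 * q * b**2 / (b**2 - a**2)

A, B = 1.25e-3, 5.0e-3      # m, assumed bore and outer radii (illustrative)
P_IN = 1.6e9                # pa, maximum measurement pressure

s_internal = bore_hoop_stress_internal(P_IN, A, B)   # tensile, ~1.8 gpa here
# far above the ~0.7 gpa tensile strength of wc-co; an assumed equivalent
# outside pressure of 0.9 gpa (representing shrink fits plus jacket pressure)
# keeps the bore in net compression:
s_total = s_internal + bore_hoop_stress_external(0.9e9, A, B)
```

the superposition of internal-pressure tension and shrink-fit/jacket compression is exactly what the dashed analytical curve in figure 3 represents.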
in a similar manner to the hp pca, an fea of the lp pca was performed at pl = 80 mpa, with a linear pressure distribution from pl to zero applied to the inside of the cylinder and pl applied to the outside of the sleeve surrounding the cylinder. the primary objective was to minimise the radial distortions at the cylinder inside and thus the fluid flow rate. it was found that, with an optimal taper on the inside of the sleeve, the radial distortions of the cylinder do not exceed 0.1 µm at any point of the cylinder bore. without the sleeve and in the free-deformation (fd) mode, they would reach 1 µm at the gap entrance. by combining the structural fea of the hp pca with a hydrodynamic analysis of its piston-cylinder gap, λ and vf were calculated using the ptb iterative method described in [5]. two liquids were considered as pressure-transmitting media: di(2)-ethyl-hexyl-sebacate (dhs) at ph ≤ 0.5 gpa and polydiethylsiloxane pes-1 for ph ≤ 1.6 gpa. dhs is a liquid widely used in pressure balances up to 1 gpa; however, dhs is not applicable at higher pressures because of solidification. its density and viscosity dependences on pressure were used as given, e.g., in [5]. pes-1 has a significantly lower viscosity than dhs, with acceptable values up to 1.6 gpa. its density and viscosity as functions of pressure were based on the experimental data presented in [3]. with dhs, calculations were performed in fd mode to provide target values of vf for the optimal piston-cylinder gap widths to be achieved in the piston-cylinder production process. with pes-1, both fd and cc modes were analysed. as known from former fea studies, the results of hydrodynamic modelling strongly depend on the real initial gap profile between the undistorted piston and cylinder [5]. in particular, information about the cylinder bore profile near the exit is important, because the gap in this region becomes the narrowest under high pressure and therefore has a strong effect on the pressure distribution, vf and λ.
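the role of the gap width in the piston fall rate can be illustrated with the simplest possible model: pressure-driven flow through a narrow annular gap treated as a parallel-plate channel, with constant viscosity and no elastic distortion. this is a toy version of the iterative method of [5]; the dimensions and viscosity below are assumptions for illustration.

```python
import math

# toy model (not the ptb iterative method): leakage flow through an
# undistorted annular gap of width h, engagement length l, around a
# piston of radius r, with pressure drop dp and constant viscosity mu.

def fall_rate(r, h, l, dp, mu):
    """piston fall rate in m/s needed to supply the gap leakage flow."""
    q = math.pi * r * h**3 * dp / (6.0 * mu * l)   # parallel-plate annulus
    return q / (math.pi * r**2)                    # displaced volume balance

# assumed values: r = 2.5 mm, l = 25 mm, 32 mpa drop, mu = 0.03 pa·s
for h in (0.2e-6, 0.3e-6, 0.4e-6, 0.5e-6):
    print(h * 1e6, fall_rate(2.5e-3, h, 25e-3, 32e6, 0.03) * 6e4)  # µm, mm/min
```

the cubic dependence on h means that, for an undistorted gap, going from 0.2 µm to 0.5 µm changes vf by a factor of about 15 – which is why the much smaller difference found by the full calculation at (1 to 1.6) gpa is surprising.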
to take this into account, prior to the final adjustment of the piston to the cylinder bore in the multiplier production process described in section 4, dimensional measurements were performed on the two hp cylinders. they included straightness measurements in the outlet region of the cylinder bore along 4 generatrix lines separated by 45°. results for opposite generatrices (0° and 180°, 45° and 225°, and so on) were averaged and are shown for the two cylinders in figure 5. for the fea calculations, in which the pcas are treated as axisymmetric, the gap profiles were averaged and approximated by analytical functions, which are also presented in figure 5. the piston, and the cylinder bore apart from the gap exit, were considered ideally cylindrical. different gap widths (h) were analysed; figure 5 presents the case h = 0.2 µm. results of the piston fall rate calculations for h = (0.2 to 0.5) µm, fd and cc operation modes, and the dhs and pes-1 liquids are shown in figure 6. even with the smallest technologically feasible gap of 0.2 µm, vf is too high when pes-1 and fd mode are used. the largest gap considered, 0.5 µm, combined with cc mode produces acceptable vf at ph > 1 gpa but rates that are still too high below 1 gpa. surprisingly, the difference between the piston fall rates for 0.2 µm and 0.5 µm gaps in the pressure range (1 to 1.6) gpa is not as big as would be expected from the theory of the undistorted gap. with the fea results for vf, the range h = (0.3 to 0.4) µm was found to be optimal.

figure 5. piston-cylinder gap near the outlet for a perfect piston and real dimensions of cylinders 1 and 2.

figure 4. fea model of hp pca (a); tangential stress distribution in it at ph = 1.6 gpa and pj = 0.4 gpa (b).
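the averaging and approximation step for the axisymmetric model can be sketched as follows; the profile numbers are synthetic placeholders, not the measured straightness data of cylinders 1 and 2.

```python
import numpy as np

# sketch of the data reduction for an axisymmetric fea model: average the
# generatrix profiles and approximate the mean by an analytical function.
# the radius deviations dr below are synthetic, not measured values.

z = np.linspace(4.0, 20.0, 9)                        # axial position, mm
offsets = (0.00, 0.05, -0.03, 0.02)                  # per generatrix pair, µm
profiles = np.array([0.02 * (z - 4.0) ** 1.5 + d for d in offsets])

mean_profile = profiles.mean(axis=0)                 # axisymmetric average
coeff = np.polyfit(z, mean_profile, 3)               # analytical approximation
residual = np.max(np.abs(np.polyval(coeff, z) - mean_profile))
print(residual)                                      # fit error, µm
```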
with h = 0.4 µm, a target vf of (0.068 to 0.073) mm/min was defined, to be achieved at 32 mpa in control measurements when fitting the pistons to the cylinders. at 500 mpa with dhs and in fd mode, this gap width leads to vf = (0.57 to 0.61) mm/min.

4. realisation of the multipliers

for both the hp and lp pcas, the best designs indicated by the fea were realised, combining complementary technologies available at ptb and fluke. the piston-cylinders were manufactured, and detailed technical drawings of the multipliers and all their parts produced, by fluke. all other parts – each of the two multipliers comprises about 80 parts – were manufactured by ptb. fluke carried out the final mechanical adjustment of the sleeves and some other parts. in particular, machining the final diameters of the sleeves to meet the defined tolerances of 1 µm and to achieve a roughness of the lateral surfaces better than 0.2 µm required fluke's expertise. the production of the pcas started with 5 to 8 pieces each of lp and hp pistons and cylinders as well as sleeves; the best were selected during the subsequent processing and characterisation. ptb performed the dimensional measurements and carried out the thermal shrink fits of the sleeves on the cylinders. for the shrink fits, the outer steel sleeve was heated to 400 °c at maximum, to stay below the tempering temperature of the sleeve's steel, which is 450 °c. however, provisional shrinking trials indicated that a temperature increase of 400 °c may not be sufficient to perform the shrink – the insert stuck in the outside sleeve. to gain more clearance and time leeway for the shrinking procedure, a greater temperature difference between the two parts was created by cooling the insert (the cylinder in the 1st shrink stage, and the cylinder with the already shrunk inner sleeve in the 2nd shrink stage) down to about −196 °c using liquid nitrogen.
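the effect of cooling the insert can be estimated from thermal expansion alone. the sketch below uses an assumed fit diameter, interference and expansion coefficients (typical handbook values for steel and wc), not the actual fit data:

```python
# illustrative estimate (assumed numbers, not the actual fit data):
# diametral assembly clearance from heating the steel sleeve and cooling
# the wc insert, compared with the design interference delta.

def assembly_clearance(d, delta, a_sleeve, a_insert, t_hot, t_cold, t_room=20.0):
    """clearance in m during assembly; negative means the parts interfere."""
    growth = a_sleeve * d * (t_hot - t_room)       # heated sleeve bore growth
    shrink = a_insert * d * (t_room - t_cold)      # cooled insert contraction
    return growth + shrink - delta

d = 30e-3                            # assumed fit diameter, m
delta = 150e-6                       # assumed diametral interference, m
a_steel, a_wc = 12e-6, 5e-6          # typical expansion coefficients, 1/k

print(assembly_clearance(d, delta, a_steel, a_wc, 400.0, 20.0) * 1e6)    # heating only, µm
print(assembly_clearance(d, delta, a_steel, a_wc, 400.0, -196.0) * 1e6)  # with ln2, µm
```

with these assumed numbers, heating alone leaves a negative clearance (the insert sticks), while the additional contraction in liquid nitrogen turns it positive – the same qualitative situation the shrinking trials revealed.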
prior to shrinking the outer sleeve onto the inner one, which had been fitted to the cylinder in the 1st shrink, the cylindrical and conical outside surfaces of the inner sleeve were characterised dimensionally. after the heat shrink operations, the bores of the two hp cylinders were re-machined to remove 3 to 5 µm from the inner surfaces deformed by the shrinking. the hp pistons and cylinders were then lapped to achieve the piston fall rates predicted by the fea for a gap width of h = (0.3 to 0.4) µm. the test piston fall rate measurements in the production stage were performed at a pressure of 32 mpa, at which the effect of the elastic distortion is relatively small and vf primarily depends on the undistorted gap width. later, piston fall rates were measured in both the hp and lp pcas and allowed estimation of the gap widths between pistons and cylinders: h = (0.27 to 0.36) µm was found for the hp pcas and h = (0.68 to 0.73) µm for the lp pcas. the whole production required the multipliers' parts to be sent between ptb and fluke, some of them many times, for the subsequent production, characterisation and adjustment procedures. finally, the multipliers were assembled and preliminarily tested by fluke.

5. experiments

first tests of the multipliers were performed by fluke at pressures of (100 to 500) mpa on the hp side and (5 to 25) mpa on the lp side of the multipliers, using two piston gauges as a reference, with dhs as the pressure-transmitting liquid and at pj = 0.25·ph. the setup is shown in figure 7. multiplying ratios were determined using the two hydraulic pressure balances in a crossfloat. both the lp and hp piston gauges were pg7302 instruments. two different 500 kpa/kg pcas, having expanded uncertainties in pressure of 22·10⁻⁶·pl + 16 pa and 27·10⁻⁶·pl + 16 pa (k = 2), were used in different runs on the lp side. a 5 mpa/kg pca having an expanded uncertainty in pressure of 70·10⁻⁶·ph + 16 pa (k = 2) was used on the hp side of the multiplier.
the temperatures of the hp and lp pcas in the multiplier were measured using platinum resistance thermometers. these temperatures and the piston positions in the multiplier were indicated by a laboratory conditions monitor (lcm). the pistons were kept within ±2.5 mm around their middle working position. they were rotated by a dc motor at approximately 10 rpm. a ppch hydraulic pressure controller was used to set pj. a tare pressure (pt), produced on the hp side of the multiplier by the masses loading the hp piston (hp and lp pistons, piston coupler, etc.), was measured at pl = 0 for each multiplier assembly using an rpm3 a1000, h1 (0 to 2) mpa pressure monitor with an uncertainty of approximately 2 kpa (k = 2). it was equal to 2.918 mpa and 2.926 mpa for the two multipliers. with the tare pressure, equation (1) transforms to

ph = pt + pl·km , (2)

with

km = km,0 · [1 + λkm·(ph − pt)] , (3)

where km,0 is km at pl = 0 and λkm is the pressure dependence coefficient of km. a crossfloat, using the drop rate method, was performed between the two piston gauges, with multiplier 1 or 2 in between, at ph = (100, 200, 300, 400, 500, 500, 400, 300, 200, 100) mpa in four runs total.

figure 7. multiplier system test setup.

figure 6. piston fall rates calculated for different gap sizes, profiles and liquids.
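the reduction of the crossfloat data by (2) and (3) amounts to a linear least-squares fit. a minimal sketch with synthetic data (round numbers of the same order as table 2, not the actual run results):

```python
import numpy as np

# sketch of the evaluation in (2)-(3): measured ratio km = (ph - pt)/pl,
# fitted with km = km0 * (1 + lam * (ph - pt)). data below are synthetic.

def fit_km(p_l, p_h, p_t):
    km = (p_h - p_t) / p_l             # measured multiplying ratios
    x = p_h - p_t                      # pressure above tare, mpa
    slope, km0 = np.polyfit(x, km, 1)  # km = km0 + (km0 * lam) * x
    return km0, slope / km0

km0_true, lam_true, p_t = 19.99, 3.3e-7, 2.92      # assumed values
p_h = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
p_l = (p_h - p_t) / (km0_true * (1 + lam_true * (p_h - p_t)))

km0, lam = fit_km(p_l, p_h, p_t)
print(km0, lam)            # recovers the assumed km0 and lam
```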
according to (2), the multiplying ratio was determined at each point by subtracting pt from the ph measured on the hp side and dividing by the pl measured with the lp piston gauge (figure 8). the set of data for each run was fitted with function (3), providing km,0 and λkm. the results of the two runs for each multiplier were combined (averaged) and used to determine the residuals of the points taken. after reviewing the results of the tests, it was decided to leave out the 100 mpa points in determining km,0 and λkm, as they did not seem typical with respect to the rest of the results. table 2 gives the results of the fit for each multiplier. the performance of the multipliers has been found satisfactory. the km values were reproducible within ±4·10⁻⁵ for multiplier 1 and ±2·10⁻⁵ for multiplier 2. the increased standard deviation in the case of multiplier 1 is presumably associated with the exchange of the reference lp piston gauge between runs 1 and 2. herewith, and taking into account the uncertainties of the reference lp and hp piston gauges pg7302, the relative standard uncertainty of the multipliers in the pressure range (100 to 500) mpa lies between 4·10⁻⁵ and 5.5·10⁻⁵. with the same data, the relative standard uncertainty in the range (1 to 1.6) gpa can be expected to be (1.3 to 2)·10⁻⁴; this is a very preliminary estimate and must be confirmed by experiments at higher pressures. this uncertainty is sufficiently small to provide the calibration service required by industry.

6. conclusions and outlook

two novel 1.6 gpa pressure-measuring multipliers were developed, tested at pressures up to 500 mpa, and demonstrated repeatability at a level as low as 2·10⁻⁵. the standard uncertainty of up to 5.5·10⁻⁵ obtained in the test crossfloats is mainly caused by the reference lp and hp standards.
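the quoted range can be reproduced by a simple quadrature combination of the dominant contributions: the reference gauge uncertainties (expanded, k = 2, so halved to standard) and the observed reproducibility of km. this breakdown is an illustration of the method, not the authors' actual uncertainty budget.

```python
import math

# illustrative combination (not the authors' full budget): relative
# standard uncertainties of the lp and hp reference gauges (k = 2 values
# halved) plus the observed reproducibility of km, added in quadrature.

def combined_rel_std(u_lp_k2, u_hp_k2, reproducibility):
    return math.sqrt((u_lp_k2 / 2) ** 2 + (u_hp_k2 / 2) ** 2 + reproducibility ** 2)

u1 = combined_rel_std(22e-6, 70e-6, 2e-5)    # one lp gauge, multiplier 2
u2 = combined_rel_std(27e-6, 70e-6, 4e-5)    # other lp gauge, multiplier 1
print(u1, u2)    # both fall in the (4 to 5.5)e-5 band quoted in the text
```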
this uncertainty can be reduced in the future by more extensive experiments using the more accurate 1 gpa standards of ptb as a reference, but also by a theoretical calculation of the pressure distortion coefficients of the lp and hp pcas taking into account the real dimensional properties of the hp piston-cylinder gap and of the sleeve-to-cylinder gap in the lp pca. moreover, extension of the fluid flow calculations for the pca gap up to 1.6 gpa requires accurate data on the density and viscosity of pes-1 at high pressure. all these measurements are in progress within the emrp jrp [1].

acknowledgement

the contribution of dr. p. ulbig in the organisation of this research and of mrs. d. hentschel, who manufactured most of the parts of the multipliers, both ptb members, is much appreciated. the authors acknowledge mr. p. delajoud (fluke dhi, retired) for the design of the multipliers, and f. valenzuela and m. bair (both fluke) for piston-cylinder fabrication and cross-float testing/analysis, respectively. this research was carried out within the emrp. the emrp is jointly funded by the emrp participating countries within euramet and the european union.

references

[1] euramet, "high pressure metrology for industrial applications", publishable jrp summary report for ind03 highpres, http://www.euramet.org/index.php?id=emrp_call_2010.
[2] emrp jrp ind03 highpres, http://emrp-highpres.cmi.cz/.
[3] v. m. borovkov, "deadweight high pressure piston manometers", in: researches in the area of high pressures, e. v. zolotyh (editor), izdatelstvo standartov, moscow, 1987, p. 577 (in russian).
[4] p. delajoud, "the pressure multiplier: a transfer standard in the range 100 to 1000 mpa", bipm monographie, vol. 89/1, 1989, pp. 114-124.
[5] w. sabuga et al., "finite element method used for calculation of the distortion coefficient and associated uncertainty of a ptb 1 gpa pressure balance – euromet project 463", metrologia, 43 (2006), pp. 311-325.
[6] p. delajoud, m.
girard, "a new piston gauge to improve the definition of high gas pressure and to facilitate the gas to oil transition in a pressure calibration chain", proc. of the imeko tc16 int. symp. on pressure and vacuum, sept. 22-24, 2003, beijing, china, acta metrologica sinica press, pp. 154-159.
[7] w. sabuga, p. ulbig, a. d. salama, "elastic constants of pressure balances' piston-cylinder assemblies measured with the resonant ultrasound spectroscopy", proc. of the int. metrology conf. cafmet-2010, april 2010, cairo.

figure 8. multiplying ratio vs. high pressure corrected for tare pressure.

table 2. results of multiplying ratios in individual tests and averages for each multiplier.

          multiplier 1                multiplier 2
          km,0        λkm·10⁷/mpa⁻¹   km,0        λkm·10⁷/mpa⁻¹
run 1     19.987668   3.79            20.008672   3.55
run 2     19.989101   2.89            20.008994   3.39
average   19.988384   3.34            20.008833   3.47

development of 1.6 gpa pressure-measuring multipliers

banded vaults with independent arches: analysis of case studies in turin baroque atria

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1

fabrizio natta 1

1 department of architecture and design (dad), politecnico di torino, v.le mattioli 39, 10125 torino, italy

section: research paper

keywords: banded vaults; architectural drawing; 3d modelling; point clouds

citation: fabrizio natta, banded vaults with independent arches: analysis of case studies in turin baroque atria, acta imeko, vol. 11, no.
1, article 8, march 2022, identifier: imeko-acta-11 (2022)-01-08

section editor: fabio santaniello, university of trento, italy

received march 7, 2021; in final form march 7, 2022; published march 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: fabrizio natta, e-mail: fabrizio.natta@polito.it

1. introduction

this research project is a continuation of a study conducted by roberta spallone and marco vitali on complex brickwork vaulted systems in turin's baroque buildings [1]. the progress of the studies, and the enlargement of the research group, have led to the analysis of a considerable number of case studies in the architectural heritage of turin. the ‘a fascioni’ – or banded – vaults are architectural solutions for covering medium and large rooms, derived from guarini's experience. guarini describes their characteristics in architettura civile (published posthumously in 1737) [2] and applies their forms in some of his projects. from guarini's example, a remarkable production emerged, involving many important architects but also others whose identity is unknown. they applied formal and constructive principles that became customary on turin building sites and allowed wide application in the civil buildings of the city. eleven baroque atria with banded vaults were identified by the research group, of which eight were accessible. one of the objectives was to catalogue those vaulted structures with independent arches and to distinguish them from those in which the arches are generated by vertical cuts on a main reference vault (e.g., pavilion, ‘a conca’, ‘a schifo’, etc.).
this analytical phase preceded the comparison with the metric data obtained by the tls (terrestrial laser scanning) survey, coordinated by concepción lópez. the information obtained from the point cloud was fundamental for the comparison, through sections, of the various parts of the structure. the philological reconstruction of the design idea was analysed against the ideal schematics of the treatises, archive drawings, and realizations in the city, through the tools of two- and three-dimensional modelling. the studies that led to the continuation of this analysis [3] resume working methods already applied to other case studies [4], with the intention of defining a systematic methodology after this extensive research.

abstract. this contribution presents a part of the work, and the methodology applied to it, developed for the realization of an international research project aimed at the analysis and preservation of an architectural heritage characteristic of turin's baroque architecture: the ‘a fasce’ vaults, locally named ‘a fascioni’. this architectural solution, used by important architects, from guarini down to local workers, to cover spaces of various sizes, has found wide application in the atria of buildings in the turin court area. a considerable number of banded vaulted atria were identified and surveyed by the research group to recognize and investigate those whose bands are generated from independent arches. the objective is the comparison of metric and geometric data between ideal models and realizations over time, to evaluate their variations and understand a constructive methodology through three-dimensional modelling.

2. architectural treatises and manuals

the ‘a fascie’ vaults are introduced, as we have seen, by guarino guarini in architettura civile. starting from a rigorous
knowledge of the theme of vaulted systems, with studies related to geometry, stereotomy, and the calculation of surfaces and volumes already present in his previous treatises (euclides adauctus, published in 1671, and modo di misurare le fabriche, 1674), in architettura civile, treatise iii, chapter xxvi, ‘delle volte, e varj modi di farle’, guarini dedicates the ‘osservazione nona’ and the ‘osservazione decima’ to banded vaults and flat banded vaults. this text is accompanied by two plates, ‘lastra xix’ and ‘lastra xx’ (figure 1), in which guarini graphically illustrates the principles set out in the observations through a double orthogonal projection representation. the architect describes in words the spatial genesis of this type of vault, starting from a division of the space to be vaulted by wall-to-wall bands, in perpendicular or oblique directions, to create fields that can then be filled with vaults of different types. references to this type of construction can be seen in the work of authors contemporary with guarini, as in the case of the vault of the sala di diana in the reggia di venaria by amedeo di castellamonte (1661 – 1662) [5], and in later periods between the end of the 17th century and the early 18th century in the work of guarini's collaborators (for example, gian francesco baroncelli in palazzo barolo, 1692) or of other internationally renowned architects such as filippo juvarra in palazzo martini di cigala (1716). two centuries later, giovanni curioni studied this type of banded vaulting, and with his geometria pratica (1868) [6] he straddles the gap between a purely theoretical contribution and a practical approach.
the author, starting from guarini's approach, develops further considerations regarding the origin of the generating surface of the bands subdividing the space: "on the polygon to be covered with one of these vaults there already insists the intrados of a vault which, depending on the figure of the said polygon, can be a barrel, ‘a conca’, a pavilion, a barrel with pavilion heads, ‘a schifo’, a dome" [7]. the subsequent operations are carried out by cutting the reference surface with vertical planes, and therefore do not seem to identify the construction by independent arches. in the turin area we also find the work of giovanni chevalley, who in his elementi di tecnica dell'architettura: materiali da costruzione e grandi strutture (1924) [8] collects local building knowledge in the field of vaulted structures. his description of the banded vaults takes up curioni's definition and, in indicating some realizations, emphasizes their spatial qualities and their variety of use in the atria of civil buildings and churches of the 17th and 18th centuries.

3. banded vaults in archival drawings

alongside the source of the treatises, we can also draw on the documentary source, which consists of original drawings by guarini or the work of his collaborators. these documents, kept in the archivio di stato – sezione corte, have been studied directly by the author of this paper. they were first published and analysed in the archival regesto elaborated by augusta lange [9] for the 1968 conference on the figure of guarini. some of the drawings concern precisely the banded vaulted system applied to cover rooms in civil buildings (figure 2) [10]. the examples describe different solutions starting from the same tracing of the bands, perpendicular to the wall in the first case and oblique in the other, with the possibility in both of identifying the arches as independent [6], [11].
the one shown as an example (figure 3), even in a hypothetical three-dimensional vision, reveals many similarities with the realizations surveyed in the fieldwork. this structure is characterized by the double dimensions of the bands; starting from the transverse arches, the longitudinal band is supported specularly, leaving the central field free for the insertion of further shapes and decorations, as described in his treatise.

figure 1. banded vault in architettura civile. guarini 1737, treat. iii, plate xx.

figure 2. g. guarini, study of a banded vault, 1680 c., torino, asto, azienda savoia-carignano, cat. 43, mazzo i, fasc. 6, n. 36; g. guarini, study of a composed and banded vault, 1680 c., torino, asto, azienda savoia-carignano, cat. 95, mazzo ii, fasc. 115, n. 23.

figure 3. plan distribution and digital reconstruction of a guarini banded vault.

4. banded vaults in turin baroque atria

after guarini, the applications of this particular type of vaulted structure therefore found vast development in the city of turin, reaching their maximum expression and variety in the atria of the baroque palaces of the city. the already mentioned works by castellamonte and juvarra seem to follow a typological current that touches many other authors in the city of turin. in their realizations, it is possible to identify those characters derived from guarini's thought, but also to understand the peculiarities of a different creative process. the phase of identification and cataloguing of these vaulted structures was therefore fundamental. it is possible to identify them firstly in the census maps of the research directed by cavallari murat and published in forma urbana e architettura nella torino barocca (1968) [12], and later in the studies reported in the volume by spallone and vitali (2017). these structures were built between the 17th and 18th centuries in the areas of the second and third baroque extensions of the city.
in the variety of the baroque atria surveyed, three have been identified – at the moment of the research – as belonging to this category of vaulted structures with independent arches (table 1). the classification realized, focusing only on the vaulted structures analysed for this case study, aims to identify the maximum dimensions of the spaces and the main axialities that compose the grid. this cataloguing, certainly expandable by extending the analysis to entire buildings, has provided a first overview of the spatiality created through this structure. the most common grids, in the whole context, were 3×3, with a smaller number of 3×4 and 3×5 used for rooms with a larger floor plan. the atria with vaulted structures generated from independent arches (figure 4), by filippo juvarra (in palazzo martini di cigala), gian giacomo plantery (in palazzo capris di cigliè), and gian francesco baroncelli (in palazzo barolo), are characterized by a varied spatial division (figure 5), maintaining the constant of the transversal arches as the basis for the creation of the subsequent bands and of the vaults completing the further fields created by the grids. for the recognition of this type of structure, the point clouds generated by the tls survey were therefore analysed. through a phase of identification of the characteristic sections in the point cloud, this information was compared to evaluate their conformation and geometric construction. the comparison is made between sections that follow the same direction (in these cases only longitudinal or transversal, as there are no examples with diagonal axiality). if the variances identified could be considered within a geometrically valid level (not metrically defined, but evaluated case by case), we proceeded with the classification of this part of the vaulted structure of the atria [13].
the example of the vaulted atria of via della consolata is displayed to explain the classification method and the subsequent identification of the construction geometries (figure 6). in this case, the cross-sections lay the basis for the construction of the independent arches. after this step, the subsequent longitudinal arches are positioned using the transversal arches as a support base. this second level of arches, due to the conformation of the space, is straight in its central field so as to cover the whole space in length. solutions of this type turn out to be very common in these vaults (the same system is also used in via delle orfane). the opportunities offered by this type of structure allow the construction of autonomous vaults in the other fields, as suggested by guarini's indications [14], as we shall see in the selected case study.

figure 4. banded vault in baroque atria in turin.

table 1. baroque atria under analysis.

address                  width, depth, height (m)   grid
via della consolata, 3   7.66 × 10.37 × 6.75        3 × 5
via santa maria, 1       9.33 × 5.97 × 6.24         3 × 3
via delle orfane, 7      8.62 × 10.42 × 6.78        3 × 4

figure 5. plan distribution of banded vault in baroque atria in turin.

5. the case study: palazzo capris di cigliè

the two-dimensional drawings allow an initial analysis of the architectural consistencies which, reported in the three-dimensional model, are linked to the formal conception derived from architectural literature and archival documentation. the case study selected here is palazzo capris di cigliè (1730) by gian giacomo plantery.

5.1. survey methodology and technical aspects

the main purpose of the research carried out was the geometric and metrological analysis of the vaults of the atrium of this noble palace with the use of a terrestrial laser scanner (tls). data obtained with this technology, led by prof.
concepción lópez, generate easy-to-use models for later comparison with the geometric prototypes established in the literature [15], [16]. for this survey, the focus3d x 130 scanner by faro was used. its low weight (5.2 kg) and small size (24 × 20 × 10 cm) facilitate its transportation and handling. the integrated long-lasting battery (4 hours) allows it to be used without a mains connection for an entire scanning session. it has a systematic ranging error of ±2 mm at a distance of 25 m, which was acceptable for this study. it includes an integrated camera with a 70-megapixel colour overlay, so the resulting point clouds have a photographic realism that is very useful for their interpretation. the scans were performed at a speed of 488,000 points/s, implying a duration of approximately 8 minutes per scan and a good scan resolution. autodesk recap pro® was used to process the scans, and the cloud registration was done automatically, without errors, using the tools of the software. after this step, the point cloud was imported into autodesk autocad® for the subsequent processing (figure 7, figure 8) [4].

5.2. interpretation and modeling

data obtained from the two-dimensional surveys, together with data from the tls survey, are used by restoring the symmetries and searching for the elementary geometries in the sections [17]. the method of analysis, developed in previous research [4], is based on guarini's general indications for the composition of this type of vaulted system: once the bands are delineated starting from the plan – identified in this case also three-dimensionally – the empty spaces are filled with small vaults. the phases of geometric decomposition of the vaulted structure are shown through representation in isometric axonometry (figure 9).

figure 6. graphic analysis and digital modeling of independent arches in the baroque atria under examination.

figure 7. point cloud of atrium portion in palazzo capris di cigliè.

figure 8.
point cloud of the atrium vault in palazzo capris di cigliè.

the most geometrically accurate curves are generated through the point cloud sections (figure 9b), looking for a curve generation with the lowest possible number of centres. these curves, belonging to independent arches, allow the first order of the vaulted structure to be generated (figure 9c). in this case, the central area sees the interruption of the longitudinal arches, leaving full range to a single vaulted structure. this vault, superimposed on the arched system, is sail-shaped, as evidenced also by the decoration. along the major axis, the portion of the vault between the two arches follows the same geometry as these arches, recreating the idea of the giant arches already seen in guarini's drawings (figure 9d). the last fields to complete this vaulted structure are the angular ones, this time independent of the main structure. cross vaults are set starting from the intersection of the arches, and they develop with very low rise (figure 9e). one of the most relevant features of this case is certainly the width of the discontinued bands, with specifications similar to those of the guarini design (figure 2 and figure 3), leading to further evaluation of the creation of these bands and of the internal areas created.

6. comparison between design geometric model and survey data

the phase that accompanies the redrawing and digital reconstruction of this vault model goes hand in hand with the comparison between the model built and generated through the point cloud and the geometric model of this surface [18]. this consists in finding and analysing the characteristic sections of the vaulted structure [19], in this case the component of bands generated by independent arches.
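the search for directrix curves with the fewest possible centres starts from fitting circular arcs to section points. a minimal sketch of a least-squares (kåsa) circle fit on synthetic points – not an actual extracted section:

```python
import numpy as np

# kasa algebraic circle fit: linearises (x-cx)^2 + (y-cy)^2 = r^2 as
# x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2). points are synthetic.

def fit_circle(x, y):
    """returns centre (cx, cy) and radius of the best-fit circle."""
    a = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(a, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

t = np.linspace(0.3, np.pi - 0.3, 60)        # synthetic arc of a band section
x = 1.5 + 3.2 * np.cos(t)
y = 0.4 + 3.2 * np.sin(t)
cx, cy, r = fit_circle(x, y)
print(cx, cy, r)             # recovers centre (1.5, 0.4) and radius 3.2
```

a polycentric directrix is then assembled by fitting one arc per segment and checking that adjacent arcs share a tangent at the junction.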
the replicable procedure [1] searches in the sections for the geometric information useful for the construction of a theoretical model comparable to the original design idea; in this case study we extract fourteen sections from the point cloud (figure 10). the position of the sections in relation to the characteristics of the vaulted system led to their cataloguing, aimed at recognizing those which, by virtue of the hypothetical symmetry of the ideal model, should have the same shape. those of the main arch, aligned and superimposed with reference to the impost plane, have led to the recognition of axes, proportions, points of intersection and curves. among these, element by element, the polycentric curve (the latter with the smallest possible number of centres) was digitally constructed with autodesk autocad®, consistent with the techniques of construction of the centering (figure 11). at the end of this curve recognition phase, we moved on to the reconstruction of the theoretical three-dimensional model. these surfaces, modelled in rhinoceros® 6, were exported in the .e57 format to be then overlaid with the point cloud inside the cloudcompare open-source software (figure 12). it is necessary to remember that the two digital products can never be perfectly overlapped: the point cloud carries information about the built structure in its current condition, while the digital model, reconstructing the design idea, is generated through rigorously geometric references and restoration of the symmetries [3]. figure 9. graphic analysis and digital modeling of the vault in palazzo capris di cigliè. figure 10. scheme section of point cloud (red). figure 11. tracing method for directrices and comparison with the section of point cloud (red). the graphical outputs realized through the software can show the standard distance between point cloud and ideal model (figure 13).
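tracing directrices with the fewest possible centres amounts to fitting circular arcs to section points. as a hedged illustration of one such step (not the authors' autocad workflow), the following sketch performs an algebraic least-squares fit of a single-centre arc to synthetic section points; the point coordinates are invented for the example:

```python
import numpy as np

def fit_circle(points):
    """algebraic (kasa) least-squares fit of a circle to 2-d points.
    returns (cx, cy, r); points is an (n, 2) array."""
    x, y = points[:, 0], points[:, 1]
    # circle equation rearranged as 2*cx*x + 2*cy*y + c = x^2 + y^2,
    # with c = r^2 - cx^2 - cy^2, which is linear in (cx, cy, c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# synthetic "section" sampled from an arc of known centre and radius
theta = np.linspace(0.2, np.pi - 0.2, 50)
pts = np.column_stack([3.0 + 2.5 * np.cos(theta), 1.0 + 2.5 * np.sin(theta)])
cx, cy, r = fit_circle(pts)   # recovers centre (3, 1) and radius 2.5
```

a polycentric curve would be obtained by repeating such a fit on successive portions of the section and joining the arcs with tangency conditions.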
the higher distances between the two elaborations are identified in the more internal bands which, already in the extraction phase, were highlighted by a flexion near the keystone. 7. conclusions this paper outlines the methodological framework developed for the metric survey and the processing of knowledge data in the research on banded vaulted systems in turin baroque atria. the integration of the laser scanning metric survey technique with digital drawing and modelling involves, as we have seen, the definition of a workflow aimed at optimizing the use of data. indeed, adhering to the objectives of the research, two-dimensional drawings must represent the atria in their current state, while the three-dimensional modelling of the vaults is linked to the geometric reference models and aimed at the philological reconstruction of the design idea [18]. these procedures have given rise to new opportunities for research, such as the comparison (metric, but even more interesting, geometric) through the superimposition of ideal design models and point clouds. the deviation between the two digital products will not only reveal the deformations, structural failures and transformations that are part of the real life of the building, but, above all, will provide new insights for hypotheses on the necessary construction adaptations, and on the centerings and laying techniques applied on the building site, which will contribute to the understanding of the relationship between design and construction. acknowledgement this contribution presents a development of the work and methodology elaborated within the international research project “nuevas tecnologías para el análisis y conservación del patrimonio arquitectónico” coordinated by roberta spallone, assisted by marco vitali (department of architecture and design of the politecnico di torino).
the project allowed the stay in turin, as visiting professor, of concepción lópez (department of graphic expression in architecture, universitat politecnica de valencia). the group includes, in addition to the author of this work: giulia bertola and francesca ronco (department of architecture and design, politecnico di torino). the research, favored by funding from the ministerio de ciencia, innovación y universidades of spain, is aimed at the analysis and interpretation of an architectural heritage characteristic of turin’s baroque production: the ‘a fasce’ vaults, locally named as ‘a fascioni’. references [1] r. spallone, m. vitali, volte stellari e planteriane negli atri barocchi in torino – star-shaped and planterian vaults in turin baroque atria, aracne, ariccia, 2017, isbn 978–88–255–0472–9. [2] g. guarini, architettura civile, gianfranco mairasse, turin, 1737. [3] f. natta, baroque banded vaults with independent arches: from literature to realizations in turin atria, proc. of imeko tc4 metroarchaeo 2020 virtual conference “metrology for archaeology and cultural heritage”, 22-24 october 2020, isbn 978-92-990084-9-2, pp. 72-77. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-014.pdf [4] m. c. lópez gonzález, r. spallone, m. vitali, f. natta, baroque banded vaults: surveying and modeling. the case study of a noble palace in turin, in: the international archives of the photogrammetry, remote sensing and spatial information sciences, copernicus gmbh (editors). volume xliii-b2-2020 (2020), pp. 871-878. doi: 10.5194/isprs-archives-xliii-b2-2020-871-2020 [5] e. piccoli, le strutture voltate nell’architettura civile a torino (1660-1720), in: sperimentare l’architettura: guarini, juvarra, borra e vittone. g. dardanello (editor). fondazione crt, turin, 2001, issn 0008-7181, pp. 38-96. [6] g. curioni, geometria pratica applicata all’arte del costruttore, negro, turin, 1868. [7] r. 
spallone, delle volte, e vari modi di fare. modelli digitali interpretativi delle lastre xix e xx nell'architettura civile di guarini, fra progetti e realizzazioni / on the vaults and various modes of making them. interpretative digital models of the xix and xx plates in guarini's architettura civile, between designs and buildings, in: le ragioni del disegno. pensiero forma e modello nella gestione della complessità / the reasons of drawing. thought shape and model in the complexity management, s. bertocci, m. bini (editors). proc. of 38th convegno internazionale dei docenti delle discipline della rappresentazione, florence, italy, 15 – 17 september 2016, pp. 1275-1282. [8] g. chevalley, elementi di tecnica dell’architettura: materiali da costruzioni e grosse strutture, pasta, turin, 1924. [9] a. lange, disegni e documenti di guarino guarini, in: guarino guarini e l’internazionalità del barocco. v. viale (editor). proc. of the international conference promoted by the accademia delle scienze di torino, turin, italy, 30 september – 5 october 1968, vol i, pp. 91-236. [10] f. natta, dai disegni autografi di guarini all’interpretazione digitale: modelli di volte a fasce, in: sistemi voltati complessi: geometria, disegno, costruzione / complex vaulted systems: geometry, design, construction. r. spallone, m. vitali, a. giordano, j. calvo-lópez, c. bianchini, a. lópez-mozo, p. navarro-camallonga (editors). aracne, ariccia, 2020, isbn 978-88-255-3053-7, pp. 213-228. figure 12. overlay of the theoretical model and the point cloud in cloudcompare. figure 13. distances between the point cloud and the ideal model. [11] g.
curioni, lavori generali di architettura civile, stradale ed idraulica e analisi dei loro prezzi. negro, torino, 1866. [12] a. cavallari murat, forma urbana ed architettura nella torino barocca: dalle premesse classiche alle conclusioni neoclassiche, utet, turin, 1968, 2 vols. in 3 tomes. [13] m. vitali, astrazione geometrica e modellazione tridimensionale per la definizione di una grammatica spaziale delle volte a fascioni / geometric abstraction and three-dimensional modeling for the definition of a spatial grammar of the ‘a fascioni’ vaults, in: r. salerno (editor), “rappresentazione/ materiale/ immateriale – drawing as (in)tangible representation”, gangemi, roma, 2018. [14] g. guarini, modo di misurare le fabriche. per gl'heredi gianelli, torino, 1674. [15] a. almagro gorbea, half a century documenting the architectural heritage with photogrammetry, ege revista de expresión gráfica en la edificación, 11, 2019, pp. 4-30. doi: 10.4995/ege.2019.12863 [16] michael e. auer, ram kalyan b. (eds), cyber physical systems and digital twins. proceedings of the 16th international conference on remote engineering and virtual instrumentation. springer, berlin, 2019. doi: 10.1007/978-3-030-23162-0 [17] f. stanco, s. battiato, g. gallo, digital imaging for cultural heritage preservation: analysis, restoration, and reconstruction of ancient artworks. crc press, 2017. doi: 10.1201/b11049 [18] a. samper, g. gonzález, b. herrera, determination of the geometric shape which best fits an architectural arch within each of the conical curve types and hyperbolic-cosine curve types: the case of palau güell by antoni gaudí, journal of cultural heritage, volume 25, 2017, pp. 56-64. doi: 10.1016/j.culher.2016.11.015 [19] e. lanzara, a. samper, b. herrera, point cloud segmentation and filtering to verify the geometric genesis of simple and composed vaults. int. arch. photogramm. remote sens. spatial inf. sci., xlii-2/w15, 2019, pp. 645-652.
doi: 10.5194/isprs-archives-xlii-2-w15-645-2019 on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1 7 acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 1 on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition giuseppe campobello1, giovanni gugliandolo1, angelica quercia2, elisa tatti3, maria felice ghilardi3, giovanni crupi2, angelo quartarone2, nicola donato1 1 department of engineering, university of messina, contrada di dio, s. agata, 98166 messina, italy 2 biomorf department, university of messina, aou "g. martino", via c. valeria 1, 98125, messina, italy 3 cuny school of medicine, cuny, 160 convent avenue, new york, ny 10031, usa section: research paper keywords: biomedical signal processing; electroencephalograph (eeg); eeg measurements; near-lossless compression; singular value decomposition (svd) citation: giuseppe campobello, giovanni gugliandolo, angelica quercia, elisa tatti, maria felice ghilardi, giovanni crupi, angelo quartarone, nicola donato, on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition, acta imeko, vol. 11, no.
2, article 30, june 2022, identifier: imeko-acta-11 (2022)-02-30 section editor: francesco lamonaca, university of calabria, italy received october 24, 2021; in final form february 22, 2022; published june 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: giuseppe campobello, e-mail: gcampobello@unime.it 1. introduction since its invention by the german psychiatrist hans berger almost a century ago [1], electroencephalography (eeg) has continuously evolved, becoming a powerful and extensively used method that allows measuring safely and noninvasively the spatiotemporal dynamics of the brain activity with a high temporal resolution in the range of milliseconds, which enables detecting rapid changes in the brain rhythms [2]. the brain rhythms are the periodic fluctuations of human eeg, which are associated with cognitive processes, physiological states, and neurological disorders [3]. hence, the use of eeg can range from basic research to clinical applications [4]. among the various applications of eeg, it is worth highlighting its recent application in brain computer interface (bci) research [5], [6]. eeg consists of a neurophysiological measurement of the electrical activity generated by the brain through multiple electrodes placed on the scalp surface. eeg data are measured as the electrical potential difference between two electrodes: active and reference electrodes. at neurophysiological level, the electrical potential differences are mostly generated by the summation of both excitatory and inhibitory post-synaptic potentials in tens of thousands of cortical pyramidal neurons that are synchronously activated [7]. 
abstract: in this article we investigate the trade-off between the compression ratio and distortion of a recently published compression technique specifically devised for multichannel electroencephalograph (eeg) signals. in our previous paper, we proved that, when singular value decomposition (svd) is already performed for denoising or removing unwanted artifacts, it is possible to exploit the same svd for compression purposes by achieving a compression ratio in the order of 10 and a percentage root-mean-square distortion in the order of 0.01 %. in this article, we successfully demonstrate how, with a negligible increase in the computational cost of the algorithm, it is possible to further improve the compression ratio by about 10 % while maintaining the same distortion level or, alternatively, to improve the compression ratio by about 50 % while still maintaining the distortion level below 0.1 %. hence, the brain sources of the electrical potentials recorded by eeg may correspond to an infinite number of configurations, thereby limiting the spatial resolution of scalp eeg. to overcome this drawback, several source localization methods have been proposed, and their application with high-density eeg (hd-eeg) systems, such as those with 64-256 electrodes, can lead to a remarkable improvement in eeg spatial resolution [3], [8]. many applications require eeg systems to record continuously for several days or even weeks, which might easily yield several gigabytes (gb) of generated data and makes compression algorithms necessary for efficient data handling. as an illustrative example, about 2.6 gb of data per day are generated by an eeg system recording the data from 64 electrodes with a sampling rate of 250 hz and a 16-bit resolution. it should be mentioned that intracranial eeg recordings can generate even terabytes (tb) of data per day [9].
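the illustrative 2.6 gb/day figure can be verified with a line of arithmetic; a hedged check, assuming 2 bytes per 16-bit sample, continuous recording, no file-format overhead, and "gb" read as 2^30 bytes:

```python
channels, fs, bytes_per_sample = 64, 250, 2      # 64 electrodes, 250 sps, 16 bit
bytes_per_day = channels * fs * bytes_per_sample * 86_400   # seconds per day

print(bytes_per_day)             # 2764800000 bytes
print(bytes_per_day / 2 ** 30)   # ~2.57 gib, i.e. roughly the quoted 2.6 gb
```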
therefore, eeg data need to be largely compressed to efficiently manage their storage. furthermore, data compression is also necessary to reduce both the transmission rate and the power consumption when telemonitoring eeg via wireless links [10], [11]. for instance, wireless wearable eeg systems for long-term recordings should operate under a low-power budget, due to limitations on battery lifetime, and thus the power consumption needs to be significantly reduced by compressing the data before transmission [12]. various eeg compression algorithms have been developed to minimize the number of bits needed to represent eeg data by exploiting inter- and/or intra-channel correlations of eeg signals. eeg compression algorithms can be classified into two main categories: lossless and lossy compression [13], [14]. as the main goal of compression algorithms is to reduce the size of the data, their performance is typically evaluated using the compression ratio (cr), which is calculated as the ratio between the number of bits required to represent the original and the compressed eeg data. generally, lossy compression enables superior compression performance compared to its lossless counterpart, but it cannot guarantee a reconstruction of the exact original data from the compressed version. in such a case, the percent root-mean-square distortion (prd) is used as an indicator for assessing the quality of the reconstructed signal, which is affected by the distortion introduced by the lossy compression. typically, lossless compression algorithms are preferred in clinical practice to avoid diagnostic errors, since important medical information may be disregarded using lossy compression and, in addition, there is a lack of legislation and/or approved standards on lossy compression, making exact eeg reconstruction a more critical requirement than compression performance. on the other hand, the lossless compression approach has a limited impact on storage requirements for eeg applications.
as a matter of fact, the use of state-of-the-art lossless compression algorithms allows achieving typical compression ratios in the order of 2 or 3 [15]-[21]. on the other hand, eeg signals are of very small amplitude, typically in the order of microvolts (µv), and thus they can be easily contaminated by noise and artifacts, which should be filtered out to highlight and/or extract the actual clinical information [22], [23]. to accomplish this task, digital filters and denoising procedures based on wavelets, principal component analysis (pca) and/or independent component analysis (ica) are often used [24]-[28]. this enables the development of near-lossless pca/ica-based compression algorithms that can achieve much higher compression ratios than those obtained with lossless compression algorithms, with a reconstruction distortion that is tolerable for the application of interest. different near-lossless eeg compression schemes based on parallel factor decomposition (parafac) and singular value decomposition (svd) have been investigated and compared with wavelet-based compression techniques [29]. in most cases, parafac leads to better compression performance, but the maximum cr obtained with a prd lower than 2 % was 4.96 [29]. a near-lossless algorithm able to obtain a cr of 4.58 with a prd in the range between 0.27 % and 7.28 %, depending on the specific dataset under study, has been proposed in [30]. more recently, an svd-based compression scheme able to obtain 80 % data compression (i.e., cr = 5) with a prd of 5 % has been reported in [31]. most recently, in [32] we proposed a near-lossless compression algorithm for eeg signals able to achieve a compression ratio in the order of 10 with a prd < 0.01 %. in particular, the algorithm has been specifically devised to achieve a very low distortion in comparison with other state-of-the-art solutions.
in this paper, we present an improved version of our previous algorithm, and particular attention is given to achieving a good trade-off between compression efficiency and distortion. the rest of this paper is organized as follows. in section 2, we briefly review svd and describe our original algorithm proposed in [32]. in section 3, we illustrate the proposed algorithm. in section 4, we present the experimental results obtained on a real-world eeg dataset. finally, future works and our conclusions are drawn in section 5. 2. singular value decomposition eeg signals are easily contaminated with artifacts and noise and, therefore, they need to be filtered before extracting the actual clinical information. for this purpose, svd-based pca and ica techniques are commonly used. in order to briefly review how svd is exploited in this context, let us consider a high-density N-channel eeg system whose signals are sampled for a time interval T at a rate of f_s samples per second (sps). in this case, we have M = T · f_s samples per channel and thus an overall number of samples equal to N · M. we assume that these samples are represented by an N × M matrix A. it is known from svd theory that it is possible to decompose a matrix A into three matrices U, Σ, and V, such that A = U Σ V^T. in particular, Σ is a diagonal matrix whose diagonal elements, i.e., σ_i with i ∈ [1, ..., N], are named singular values. moreover, a rank-k approximation of A, i.e., A_k = U_k Σ_k V_k^T, exists which minimizes the norm ||A − A_k|| and can be obtained by considering the submatrices U_k and V_k, given by the first k columns of U and V, respectively, and the leading principal minor of order k of Σ, i.e., Σ_k, containing the first k < N singular values.
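the rank-k truncation described above can be reproduced with a few lines of numpy; this is a generic sketch in which the matrix sizes, the rank, and the synthetic data are illustrative, not taken from the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 8, 1000, 3
# correlated multichannel "eeg": k latent sources mixed into N channels
A = rng.standard_normal((N, k)) @ rng.standard_normal((k, M))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k approximation

# with only k underlying sources, the truncation error is numerically zero
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```

on real eeg blocks the trailing singular values are small but nonzero, and discarding them is exactly the denoising step that the compression scheme piggybacks on.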
in the specific context of eeg, the desired rank k, and thus the number of singular values exploited for the approximation, is chosen by clinicians, or other eeg experts, in order to reduce the effect of undesired artifacts and noise while keeping the clinical information unaltered. in this case the actual clinical information is contained in A_k and, with the aim of reducing the storage resources needed for the eeg samples, it is mandatory to encode the matrix A_k in the most efficient manner. in [32] the authors proposed a solution to the above problem by deriving the near-lossless compression algorithm reported in figure 1. the basic idea of the algorithm is to decompose the matrix A_k into two matrices, X_k and Y_k, such that A_k = X_k Y_k. in particular, the matrices X_k and Y_k can be obtained, as shown in step 2, by first evaluating the matrix S = Σ^(1/2) and then considering the first k columns of the matrix US and the first k rows of the matrix SV^T, i.e., X_k = (US)[:, 1:k] and Y_k = (SV^T)[1:k, :] in matlab-like notation. successively (see step 3), the maximum absolute values of the matrices X_k and Y_k, i.e., m_X = max(|X_k|) and m_Y = max(|Y_k|), are evaluated. such values are used in the last step, i.e., step 4, to transform the floating-point matrices X_k and Y_k into two integer matrices, X̃_k and Ỹ_k, on the basis of the following equations: X̃_k = round(m_Y · X_k), Ỹ_k = round(m_X · Y_k). (1) note that the round() operator in the above equations is the usual rounding operator, i.e., it rounds a floating-point number to the nearest integer. it is worth observing that the actual dimensions of the matrices A_k, X̃_k, and Ỹ_k are N × M, N × k, and k × M, respectively. thus, the number of elements in X̃_k and Ỹ_k is lower than the number of eeg samples in the matrix A_k. therefore, the matrices X̃_k and Ỹ_k can be considered as an alternative but compressed representation of the matrix A_k.
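as a rough illustration of the scheme in figure 1, the following python sketch factors A via svd and quantizes both factors as in eq. (1). it is a simplified reconstruction from the description above (the actual bit-packing of the integer matrices is omitted), not the authors' reference implementation:

```python
import numpy as np

def compress(A, k):
    """sketch of the near-lossless svd-based scheme of [32]:
    A_k = X_k @ Y_k with both factors rounded to integer matrices."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    S = np.diag(np.sqrt(s))          # S = sigma^(1/2)
    Xk = (U @ S)[:, :k]              # N x k
    Yk = (S @ Vt)[:k, :]             # k x M
    mX, mY = np.abs(Xk).max(), np.abs(Yk).max()
    Xq = np.round(mY * Xk).astype(np.int64)   # eq. (1)
    Yq = np.round(mX * Yk).astype(np.int64)
    return Xq, Yq, mX * mY           # integer factors plus scale factor s

def decompress(Xq, Yq, scale):
    """reconstruction of eq. (3): A_k ~ round(Xq @ Yq / s)."""
    return np.round(Xq @ Yq / scale)
```

for a block that is exactly rank k, the round trip reproduces the samples up to the small quantization error discussed below.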
in particular, the expected compression ratio can be derived as follows. let us indicate with w the number of bits used to represent each eeg sample in A_k. considering the actual dimensions of the matrix A_k, the overall number of bits needed to represent A_k is B_o = w · N · M. in the same way, if we suppose that w + a is the maximum number of bits needed to represent the elements of X̃_k and Ỹ_k, the overall number of bits needed to represent the compressed matrices X̃_k and Ỹ_k is at most B_c = (w + a) · (N + M) · k, and therefore the compression ratio can be evaluated as CR = B_o / B_c = (w · M · N) / ((w + a) · (N + M) · k). (2) in particular, when w + a ≈ w and M >> N, the expected compression ratio of the proposed algorithm can be approximated as CR ≈ N/k. therefore, a considerable compression can be achieved when N >> k, i.e., in the case of high-density eeg systems with correlated signals. for instance, in the case N = 256 and k = 15 we have CR ≈ 17, so that each gb of eeg data can be compressed and thus stored in less than 60 mb. in their paper, the authors proved that, given the matrices X̃_k and Ỹ_k and the scale factor s = m_X · m_Y, an effective approximation Ã_k of the matrix A_k is given by the following equation: Ã_k = round(X̃_k Ỹ_k / s). (3) basically, the above relation provides the reconstruction equation needed for decompression. experimental results reported in [32] have shown that the maximum absolute error MAE = max(|A_k − Ã_k|) introduced by the above approximation is bounded by MAE ≤ 2, which is a negligible error in comparison to the actual range of the original eeg samples, i.e., [−2^(w−1), +2^(w−1) − 1]. 3. proposed algorithm in this section, we slightly modify the previous algorithm with the aim of: 1) improving the compression ratio; 2) parameterizing the algorithm. in particular, we derived a new version of the algorithm able to achieve different trade-offs between compression efficiency and distortion.
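before moving on, the expected compression ratio of eq. (2) in the previous section is easy to evaluate numerically; in this sketch the value of the extra-bit allowance a is an assumption for illustration:

```python
def cr(w, a, N, M, k):
    # expected compression ratio, eq. (2)
    return (w * M * N) / ((w + a) * (N + M) * k)

# paper's example: N = 256, k = 15; under the approximations w + a ~ w
# and M >> N the exact ratio tends to N/k ~ 17
print(cr(16, 0, 256, 1_000_000, 15))   # ~17.06, close to N/k
print(cr(16, 2, 256, 1_000, 15))       # ~12.08 for a short block with a = 2
```

the second call shows that for short blocks (M comparable to N) the exact ratio of eq. (2) stays noticeably below the N/k approximation.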
basically, the new algorithm exploits the fact that consecutive values in the matrix Ỹ_k are highly correlated. therefore, a further reduction in the number of bits, and thus an increase in the compression ratio, can be obtained by encoding the differences between consecutive values of Ỹ_k instead of the matrix Ỹ_k itself. more precisely, let us introduce the matrix DỸ_k = [(Ỹ_k[1, :])^T ; diff(Ỹ_k^T)], (4) where diff() returns the matrix of differences along the first dimension. it is worth observing that the matrix Ỹ_k can be exactly recovered from DỸ_k as Ỹ_k = (cumsum(DỸ_k))^T, (5) where cumsum() is the cumulative sum of the elements along the first dimension. therefore, no further losses are introduced if, instead of the matrix Ỹ_k, the matrix of differences DỸ_k is stored or transmitted. on the basis of the previous observation, a new compression algorithm for eeg has been derived, which can be summarized as shown in figure 2. note that, in comparison to the previous algorithm, we introduced a new step (see step 5), highlighted in bold for the sake of readability. reconstruction, i.e., decompression, can be easily achieved by obtaining Ỹ_k with (5) and then using again (3) to recover Ã_k.
figure 1. illustration of the compression algorithm proposed in [32].
• inputs: an integer number k < N and a matrix A, formed by N × M eeg samples;
• outputs: integer matrices X̃_k and Ỹ_k and the scale factor s = m_X · m_Y.
• algorithm:
1) use svd to decompose A as A = U Σ V^T
2) obtain the matrices S = Σ^(1/2), X_k = (US)[:, 1:k] and Y_k = (SV^T)[1:k, :]
3) evaluate m_X = max(|X_k|), m_Y = max(|Y_k|)
4) calculate s = m_X · m_Y, X̃_k = round(m_Y · X_k) and Ỹ_k = round(m_X · Y_k)
figure 2. illustration of the proposed compression algorithm.
• inputs: an integer number k < N, the matrix A formed by N × M eeg samples and a scale factor F;
• outputs: integer matrices X̃_k and DỸ_k and the scale factor s = m_X · m_Y.
• algorithm:
1) use svd to decompose A as A = U Σ V^T
2) obtain the matrices S = Σ^(1/2), X_k = (US)[:, 1:k] and Y_k = (SV^T)[1:k, :]
3) evaluate m_X = max(|X_k|)/F, m_Y = max(|Y_k|)/F
4) calculate s = m_X · m_Y, X̃_k = round(m_Y · X_k) and Ỹ_k = round(m_X · Y_k)
5) calculate the matrix DỸ_k = [(Ỹ_k[1, :])^T ; diff(Ỹ_k^T)]
note that in the new algorithm we introduced a new input parameter, i.e., the scale factor F, which can be used to achieve different trade-offs between compression efficiency and distortion. in particular, the factor F is exploited for reducing m_X and m_Y (see step 3 in figure 2). it is worth noting that m_X and m_Y in the new and previous algorithms assume the same values when F = 1. therefore, intuitively, and as confirmed by the experimental results reported in the next section, there is no difference in the distortion achieved by the two algorithms when F = 1. instead, by choosing a value of F greater than 1, it is possible to achieve a greater compression ratio. this can be easily justified by observing that, according to (1), by reducing m_X and m_Y we further reduce the dynamic range of the elements in the matrices X̃_k and Ỹ_k and thus the number of bits needed for their representation. obviously, a greater compression ratio is obtained at the cost of a greater distortion. nevertheless, the experimental results reported in the next section show that the proposed algorithm improves the compression ratio by about 10 % while achieving the same distortion level as our previous algorithm, i.e., 0.01 %. moreover, a substantial increase in the compression ratio, up to 50 %, can be achieved by still maintaining the distortion level below 0.1 %. 4. measurement-based results the proposed compression algorithm is applied to a dataset containing real eeg signals, which have been preprocessed by eeg experts to denoise and remove artifacts.
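the losslessness of the differencing step 5 above, i.e., eqs. (4) and (5), can be checked directly; a small numpy sketch with an arbitrary integer matrix standing in for Ỹ_k:

```python
import numpy as np

# step 5 of the modified scheme: store the first time sample of each of the
# k rows plus the differences between consecutive samples; recovery via
# cumulative sum is exact for integer matrices.
rng = np.random.default_rng(1)
Yq = rng.integers(-1000, 1000, size=(4, 50))            # k x M integer matrix

DYq = np.vstack([Yq.T[:1, :], np.diff(Yq.T, axis=0)])   # eq. (4)
Yq_rec = np.cumsum(DYq, axis=0).T                       # eq. (5)

assert np.array_equal(Yq, Yq_rec)   # lossless round trip
```

in real eeg blocks the differences have a much smaller dynamic range than the raw entries of Ỹ_k, which is where the extra compression comes from.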
the eeg dataset under study has been provided by cuny school of medicine (new york, ny, usa). this dataset refers to awake eeg recordings from the research work published in [25]. in this study, tatti et al. have investigated the role of beta oscillations (13.5-25 hz) in the sensorimotor system of a group of healthy individuals. in this experiment, participants were asked to perform planar reaching movements (mov test). the mov test required the participants to reach targets, located at different distances and directions, that appeared on a screen in a non-repeating and unpredictable order at 3-second intervals. participants made the reaching movements by moving a cursor on a digitizing tablet with their right hand towards the targets appearing on the screen. the total testing time was approximately five to six minutes for each eeg recording (96 targets). each mov test was measured with a 256-channel high-density eeg system (hydrocel geodesic sensor net, hcgsn, produced by electrical geodesics, inc., eugene, or, usa), amplified using a net amp 300 amplifier, and sampled at 250 hz with 16-bit resolution using the net station software (version 5.0). eeg was noninvasively recorded using scalp electrodes, and electrode-skin impedances were kept lower than 50 kΩ. the eeglab toolbox (v13.6.5b) for matlab (v2016b) was used for off-line preprocessing of the gathered eeg data [33], [34]. the signal of each recording was first filtered using a finite impulse response (fir) bandpass filter with a passband extending from 1 hz to 80 hz and then notch filtered at 60 hz. then, each recording was segmented into 4-second epochs and visually examined to remove sporadic artifacts and channels with poor signal quality. moreover, ica with pca-based dimension reduction (max 108 components) was employed to identify stereotypical artifacts (e.g., ocular, muscle, and electrocardiographic artifacts).
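a hedged sketch of an analogous preprocessing chain in python/scipy (the paper used eeglab in matlab; the filter order and the iir notch used here are illustrative stand-ins, not the authors' exact settings):

```python
import numpy as np
from scipy import signal

fs = 250.0                                                   # sampling rate from the paper
x = np.random.default_rng(2).standard_normal(10 * int(fs))   # mock single channel

# fir bandpass 1-80 hz; the number of taps is chosen here only for illustration
taps = signal.firwin(501, [1.0, 80.0], pass_zero=False, fs=fs)
x_bp = signal.filtfilt(taps, [1.0], x)       # zero-phase filtering

# 60 hz notch (an iir notch as a stand-in for the paper's notch stage)
b, a = signal.iirnotch(60.0, Q=30.0, fs=fs)
x_clean = signal.filtfilt(b, a, x_bp)
```

applying the same chain to a pure 60 hz sine should suppress it almost completely, while a 10 hz component passes nearly unchanged.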
only the ica components with specific activity patterns and component maps characteristic of artefactual activity were removed. electrodes with poor signal quality were reconstructed with spherical spline interpolation procedures, whereas those located on the cheeks and neck were excluded, resulting in 180 signals. after the preprocessing, all signals were re-referenced to their initial average values, and the processed eeg data were exported in the european data format (edf) [35] by means of the eeglab toolbox. in particular, with the aim of evaluating the performance of the proposed algorithm, six edf files, related to three subjects (labelled with the subject numbers sn_m2, sn_m4 and sn_m5) and two sets of mov tests for each subject (“alltrials_1” and “alltrials_4”), have been tested. the data range, number of samples, and a few other details of the above-mentioned edf files are reported in table 1. it is worth observing that the samples are represented with 16-bit integer numbers, i.e., w = 16, and that the overall number of samples exploited for the tests is more than 80 · 10^6. in order to apply the proposed algorithm, each edf file has been read and the related data have been processed in blocks of N × M samples, where N has been chosen equal to 180, i.e., N coincides with the number of eeg channels remaining after the preprocessing phase, and M has been fixed equal to 1,000, so that each block of samples represents 4 seconds of data recorded by the multichannel eeg system. the proposed compression algorithm, i.e., the algorithm in figure 2, has been applied to each block, and the average compression ratio has been evaluated according to the relation CR_F = (1/L) · Σ_{i=1}^{L} (B_{o,i} / B_{c,i}), (6) where L represents the number of blocks processed, B_{o,i} is the number of bits needed to represent the i-th block before compression, and B_{c,i} is the number of bits needed to represent the same block after compression.
note that we use the subscript $F$ to highlight the scale factor used for compression, e.g., $CR_2$ is the compression ratio achieved when $F = 2$. when the scale factor is not expressly stated, we assume $F = 1$. subsequently, each compressed block has been reconstructed according to (5) and (3). finally, the distortion metrics, i.e., $PRD$ and $MAE$, have been evaluated on the whole edf file using the following equations:

$PRD = 100 \cdot \sqrt{\dfrac{\sum_{i=1}^{N} \sum_{j=1}^{M \cdot L} (a_{ij} - \tilde{a}_{ij})^2}{\sum_{i=1}^{N} \sum_{j=1}^{M \cdot L} a_{ij}^2}}$ (7)

$MAE = \max_{i,j} |a_{ij} - \tilde{a}_{ij}|$ , (8)

where $\tilde{a}_{ij}$ are the integer values obtained after reconstruction and $a_{ij}$ are the original samples. in our experiments, we further evaluated the compression efficiency of the proposed algorithm with respect to the near-lossless compression algorithm proposed in [32]. in particular, the compression efficiency ($CE$) is here defined as:

$CE = 100 \cdot \dfrac{CR_F - CR_0}{CR_0}$ , (9)

where $CR_0$ is the compression ratio obtained with the algorithm proposed in [32], i.e., the algorithm reported in figure 1. similarly, we use $PRD_0$ and $MAE_0$ to refer to the related distortion metrics. compression results achieved with the proposed compression algorithm by setting the scale factor $F = 1$ are shown in table 2. more precisely, for each compressed file, we report the number of singular values exploited for compression ($k$), the compression ratio ($CR_1$), the values of the distortion metrics ($PRD$ and $MAE$), the compression efficiency ($CE$) of the proposed algorithm and, in brackets, the corresponding compression results obtained with the algorithm proposed in [32], evaluated on the same files and considering the same number of singular values. as can be observed, the compression ratio achieved by the proposed algorithm when $F = 1$ is close to $N/k$, which confirms our analytical results reported in section iii. note that the prd is less than 0.01 % for all edf files tested.
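the distortion metrics of equations (7) and (8) amount to a few lines of numpy; the arrays below are toy values, not eeg data:

```python
import numpy as np

def prd(a, a_rec):
    """Percent root-mean-square difference, equation (7)."""
    a, a_rec = np.asarray(a, float), np.asarray(a_rec, float)
    return 100.0 * np.sqrt(np.sum((a - a_rec) ** 2) / np.sum(a ** 2))

def mae(a, a_rec):
    """Maximum absolute error over all samples, equation (8)."""
    return np.max(np.abs(np.asarray(a) - np.asarray(a_rec)))

# toy 2x2 example: original samples and near-lossless reconstruction
a = np.array([[100, -200], [300, -400]])
a_rec = np.array([[101, -199], [299, -398]])
print(mae(a, a_rec))  # → 2
```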
in particular, the prd values obtained with $F = 1$ are the same as those obtained in [32]. the same consideration can be extended to the mae. this confirms that the two algorithms in figure 2 and figure 1 have the same performance in terms of distortion when $F = 1$. however, by observing the results on compression efficiency (see the last column of table 2), it is possible to conclude that, in comparison to the algorithm proposed in [32], the new one proposed in this paper improves the compression ratio by between 7 % and 9 %. moreover, the scale factor $F$ introduced in the new algorithm provides the possibility of achieving even higher compression ratios, obviously at the cost of greater distortion. we investigated the trade-off between compression efficiency and distortion of the proposed algorithm by considering different values of the scale factor $F$ within the range [1, 16]. in particular, table 3 reports the compression ratios ($CR_F$), the distortion metrics ($PRD$ and $MAE$), and the compression efficiency ($CE$) corresponding to $F \in \{1, 2, 4, 8, 16\}$. as can be observed in table 3, by increasing $F$ it is possible to improve the compression ratio and thus the compression efficiency. in particular, by fixing $F = 16$, the proposed algorithm improves the compression ratio by about 50 % while maintaining the $PRD$ below the 0.1 % threshold (in fact, the $PRD$ is at most equal to 0.081 % for all the files tested). it is also worth noting that the mae obtained in our experimental results is approximately equal to $2F$. finally, we evaluated the distribution of absolute errors in the recovered signals. figure 3 reports the cumulative distribution function (cdf) of the absolute errors, i.e., the probability $P(|err| \le x)$

table 1. edf files used as dataset.
file name | channels | duration (s) | number of samples | physical range (μV) | data range
sn_m2_alltrials_1 | 180 | 344 | 15 480 000 | [-35.500, +30.314] | [-32768, 32767]
sn_m2_alltrials_4 | 180 | 340 | 15 300 000 | [-48.550, +38.539] | [-32768, 32767]
sn_m4_alltrials_1 | 180 | 328 | 14 760 000 | [-34.532, +34.756] | [-32768, 32767]
sn_m4_alltrials_4 | 180 | 264 | 11 880 000 | [-41.100, +40.673] | [-32768, 32767]
sn_m5_alltrials_1 | 180 | 324 | 14 580 000 | [-41.463, +38.867] | [-32768, 32767]
sn_m5_alltrials_4 | 180 | 308 | 13 860 000 | [-41.347, +46.929] | [-32768, 32767]

table 2. compression ratio (cr), percent root-mean-square distortion (prd), maximum absolute error (mae), and compression efficiency (ce) of the proposed algorithm when f = 1 (values of cr0, prd0, and mae0 are reported in brackets).

file name | k | cr1 | prd (%) | mae | ce (%)
sn_m2_alltrials_1 | 20 | 8.6 (7.9) | 0.0065 (0.0065) | 2 (2) | 8.6
sn_m2_alltrials_4 | 20 | 8.6 (7.9) | 0.0063 (0.0063) | 2 (2) | 8.9
sn_m4_alltrials_1 | 12 | 14.6 (13.6) | 0.0075 (0.0075) | 2 (2) | 7.4
sn_m4_alltrials_4 | 12 | 14.4 (13.4) | 0.0069 (0.0069) | 2 (2) | 7.5
sn_m5_alltrials_1 | 13 | 13.4 (12.5) | 0.0071 (0.0071) | 2 (2) | 7.2
sn_m5_alltrials_4 | 13 | 13.2 (12.3) | 0.0069 (0.0069) | 2 (2) | 7.3

table 3. compression results achieved with the proposed algorithm for different values of the scale factor f.
file name | f | crf | prd (%) | mae | ce (%)
sn_m2_alltrials_1 | 1 | 8.58 | 0.006 | 2 | 8.6
 | 2 | 9.23 | 0.010 | 4 | 16.8
 | 4 | 9.98 | 0.018 | 8 | 26.3
 | 8 | 10.87 | 0.035 | 16 | 37.6
 | 16 | 11.94 | 0.069 | 32 | 51.1
sn_m2_alltrials_4 | 1 | 8.60 | 0.006 | 2 | 8.9
 | 2 | 9.25 | 0.010 | 4 | 17.1
 | 4 | 10.01 | 0.017 | 8 | 26.7
 | 8 | 10.91 | 0.033 | 17 | 38.1
 | 16 | 11.98 | 0.066 | 31 | 51.6
sn_m4_alltrials_1 | 1 | 14.55 | 0.007 | 2 | 7.4
 | 2 | 15.68 | 0.012 | 5 | 15.7
 | 4 | 16.99 | 0.021 | 7 | 25.4
 | 8 | 18.53 | 0.041 | 15 | 36.8
 | 16 | 20.39 | 0.081 | 34 | 50.5
sn_m4_alltrials_4 | 1 | 14.43 | 0.007 | 2 | 7.5
 | 2 | 15.53 | 0.011 | 4 | 15.7
 | 4 | 16.81 | 0.019 | 8 | 25.3
 | 8 | 18.33 | 0.036 | 16 | 36.6
 | 16 | 20.14 | 0.072 | 31 | 50.1
sn_m5_alltrials_1 | 1 | 13.42 | 0.007 | 2 | 7.2
 | 2 | 14.45 | 0.011 | 5 | 15.4
 | 4 | 15.66 | 0.020 | 8 | 25.1
 | 8 | 17.08 | 0.039 | 20 | 36.4
 | 16 | 18.79 | 0.077 | 34 | 50.1
sn_m5_alltrials_4 | 1 | 13.16 | 0.007 | 2 | 7.3
 | 2 | 14.16 | 0.010 | 4 | 15.5
 | 4 | 15.31 | 0.019 | 9 | 24.9
 | 8 | 16.67 | 0.037 | 16 | 36.0
 | 16 | 18.30 | 0.073 | 35 | 49.3

that the absolute error ($|err|$) is lower than a threshold $x$, achieved for different values of $F$. as can be observed in figure 3, the percentage of samples with an absolute error lower than $F$ after reconstruction is close to 100 % for all the edf files in the dataset. note that the vertical lines in figure 3 represent the condition $P(|err| \le F)$. therefore, we can state that the scale factor $F$, which is needed as input in the proposed algorithm, can be fixed according to the desired mae, i.e., for a given value of $F$, the mae obtained after reconstruction will be, with high probability, within the range $[F, 2F]$.

5. conclusions

in this paper, we developed and validated an improved version of a recently proposed near-lossless compression algorithm for multichannel eeg signals. the algorithm exploits the fact that svd is usually performed on eeg signals for artifact removal or denoising tasks. experimental results, reported in this paper, show that the developed algorithm is able to achieve a compression ratio proportional to the number of eeg channels with a root-mean-square distortion of less than 0.01 %.
moreover, with proper settings of the input parameters, the compression ratio can be further improved by up to 50 % while maintaining the distortion level below 0.1 %. moreover, the algorithm allows the desired maximum absolute error to be fixed a priori. it should be highlighted that, although an eeg dataset has been considered as a case study, the proposed compression algorithm can be quite straightforwardly applied to different types of datasets. in future work, we will further investigate the performance of the proposed algorithm considering more extended datasets and other types of signals.

references

[1] h. berger, über das elektrenkephalogramm des menschen, arch. psychiat. nervenkr. (1929), pp. 527–570. doi: 10.1007/bf01797193
[2] j. t. koyazo, m. a. ugwiri, a. lay-ekuakille, m. fazio, m. villari, c. liguori, collaborative systems for telemedicine diagnosis accuracy, acta imeko 10 (2021) 3, pp. 192–197. doi: 10.21014/acta_imeko.v10i3.1133
[3] d. a. pizzagalli, electroencephalography and high-density electrophysiological source localization (handbook of psychophysiology, 3rd ed.), cambridge university press, 2007. doi: 10.1017/cbo9780511546396
[4] d. l. schomer, f. l. da silva, niedermeyer's electroencephalography: basic principles, clinical applications, and related fields, lippincott williams & wilkins, 2012.
[5] n. yoshimura, o. koga, y. katsui, y. ogata, h. kambara, y. koike, decoding of emotional responses to user-unfriendly computer interfaces via electroencephalography signals, acta imeko 6 (2017). doi: 10.21014/acta_imeko.v6i2.383
[6] r. abiri, s. borhani, e. sellers, y. jiang, x. zhao, a comprehensive review of eeg-based brain-computer interface paradigms, j. neural eng. 16 (2018), 011001.

figure 3. cumulative distribution function (cdf) of the absolute errors obtained after reconstruction with the proposed near-lossless algorithm for different scale factors (f). vertical lines represent the condition p(|err| ≤ f).
doi: 10.1088/1741-2552/aaf12e
[7] p. l. nunez, r. srinivasan, electric fields of the brain: the neurophysics of eeg, oxford university press, usa, 2006, isbn: 9780195050387.
[8] c. lustenberger, r. huber, high density electroencephalography in sleep research: potential, problems, future perspective, front. neurol. 3 (2012), p. 77. doi: 10.3389/fneur.2012.00077
[9] b. h. brinkmann, m. r. bower, k. a. stengel, g. a. worrell, m. stead, large-scale electrophysiology: acquisition, compression, encryption, and storage of big data, j. neurosci. meth. 180 (2009), pp. 185–192. doi: 10.1016/j.jneumeth.2009.03.022
[10] g. gugliandolo, g. campobello, p. p. capra, s. marino, a. bramanti, g. d. lorenzo, n. donato, a movement-tremors recorder for patients of neurodegenerative diseases, ieee trans. instrum. meas. 68 (2019), pp. 1451–1457. doi: 10.1109/tim.2019.2900141
[11] g. campobello, a. segreto, s. zanafi, s. serrano, an efficient lossless compression algorithm for electrocardiogram signals, in proceedings of the 26th european signal processing conference, eusipco 2018, roma, italy, september 3-7, 2018, pp. 777–781. doi: 10.23919/eusipco.2018.8553597
[12] a. casson, d. yates, s. smith, j. duncan, e. rodriguez-villegas, wearable electroencephalography, ieee embs mag. 29 (2010), pp. 44–56. doi: 10.1109/memb.2010.936545
[13] g. campobello, o. giordano, a. segreto, s. serrano, comparison of local lossless compression algorithms for wireless sensor networks, j. netw. comput. appl. 47 (2015), pp. 23–31. doi: 10.1016/j.jnca.2014.09.013
[14] a. nait-ali, c. cavaro-menard, compression of biomedical images and signals, john wiley & sons, 2008, isbn: 978-1-84821-028-8.
[15] n. sriraam, correlation dimension based lossless compression of eeg signals, biomed.
signal process. control 7 (2012), pp. 379–388. doi: 10.1016/j.bspc.2011.06.007
[16] n. sriraam, c. eswaran, lossless compression algorithms for eeg signals: a quantitative evaluation, in proceedings of the ieee/embs 5th international workshop on biosignal interpretation, tokyo, japan, september 6-8, 2005, pp. 125–130.
[17] y. wongsawat, s. oraintara, t. tanaka, k. r. rao, lossless multichannel eeg compression, in proceedings of the 2006 ieee international symposium on circuits and systems, island of kos, greece, may 21-24, 2006, p. 1614 (4 pp.). doi: 10.1109/iscas.2006.1692909
[18] g. antoniol, p. tonella, eeg data compression techniques, ieee trans. biomed. eng. 44 (1997), pp. 105–114. doi: 10.1109/10.552239
[19] n. sriraam, c. eswaran, performance evaluation of neural network and linear predictors for near-lossless compression of eeg signals, ieee trans. inf. technol. biomed. 12 (2008), pp. 87–93. doi: 10.1109/titb.2007.899497
[20] g. campobello, a. segreto, s. zanafi, s. serrano, rake: a simple and efficient lossless compression algorithm for the internet of things, in proceedings of the 2017 25th european signal processing conference (eusipco), kos island, greece, 28 august – 2 september 2017, pp. 2581–2585. doi: 10.23919/eusipco.2017.8081677
[21] k. srinivasan, j. dauwels, m. r. reddy, a two-dimensional approach for lossless eeg compression, biomed. signal process. control 6 (2011), pp. 387–394. doi: 10.1016/j.bspc.2011.01.004
[22] n. ille, p. berg, m. scherg, artifact correction of the ongoing eeg using spatial filters based on artifact and brain signal topographies, j. clin. neurophysiol. 19 (2002), pp. 113–124. doi: 10.1097/00004691-200203000-00002
[23] r. j. davidson, d. c. jackson, c. l. larson, human electroencephalography (handbook of psychophysiology, 2nd ed.), cambridge university press, 2000.
[24] n. mammone, d. labate, a. lay-ekuakille, f. c.
morabito, analysis of absence seizure generation using eeg spatial-temporal regularity measures, int. j. neural syst. 22 (2012). doi: 10.1142/s0129065712500244
[25] e. tatti, s. ricci, a. b. nelson, d. mathew, h. chen, a. quartarone, c. cirelli, g. tononi, m. f. ghilardi, prior practice affects movement-related beta modulation and quiet wake restores it to baseline, front. syst. neurosci. 14 (2020), p. 61. doi: 10.3389/fnsys.2020.00061
[26] m. k. islam, a. rastegarnia, z. yang, methods for artifact detection and removal from scalp eeg: a review, neurophysiol. clin. 46 (2016), pp. 287–305. doi: 10.1016/j.neucli.2016.07.002
[27] s. casarotto, a. m. bianchi, s. cerutti, g. a. chiarenza, principal component analysis for reduction of ocular artefacts in event-related potentials of normal and dyslexic children, clin. neurophysiol. 115 (2004), pp. 609–619. doi: 10.1016/j.clinph.2003.10.018
[28] z. anusha, j. jinu, t. geevarghese, automatic eeg artifact removal by independent component analysis using critical eeg rhythms, in proceedings of the 2013 ieee international conference on control communication and computing (iccc), trivandrum, kerala, india, december 13-15, 2013, pp. 364–367. doi: 10.1109/iccc.2013.6731680
[29] j. dauwels, k. srinivasan, m. r. reddy, a. cichocki, near-lossless multichannel eeg compression based on matrix and tensor decompositions, ieee j. biomed. health inform. 17 (2013), pp. 708–714. doi: 10.1109/titb.2012.2230012
[30] l. lin, y. meng, j. chen, z. li, multichannel eeg compression based on ica and spiht, biomed. signal process. control 20 (2015), pp. 45–51. doi: 10.1016/j.bspc.2015.04.001
[31] m. k. alam, a. a. aziz, s. a. latif, a. awang, eeg data compression using truncated singular value decomposition for remote driver status monitoring, in proceedings of the 2019 ieee student conference on research and development (scored), universiti teknologi petronas (utp), malaysia, 15-17 october 2019, pp. 323–327. doi: 10.1109/scored.2019.8896252
[32] g.
campobello, a. quercia, g. gugliandolo, a. segreto, e. tatti, m. f. ghilardi, g. crupi, a. quartarone, n. donato, an efficient near-lossless compression algorithm for multichannel eeg signals, 2021 ieee international symposium on medical measurements and applications (memea), neuchâtel, switzerland, june 23-25, 2021. doi: 10.1109/memea52024.2021.9478756
[33] a. delorme, s. makeig, eeglab: an open source toolbox for analysis of single-trial eeg dynamics including independent component analysis, j. neurosci. methods 134 (2004), pp. 9–21. doi: 10.1016/j.jneumeth.2003.10.009
[34] s. makeig, s. debener, j. onton, a. delorme, mining event-related brain dynamics, trends cogn. sci. 8 (2004), pp. 204–210. doi: 10.1016/j.tics.2004.03.008
[35] b. kemp, a. värri, a. c. rosa, k. d. nielsen, j. gade, a simple format for exchange of digitized polygraphic recordings, electroencephalogr. clin. neurophysiol. 82 (1992), pp. 391–393. doi: 10.1016/0013-4694(92)90009-7
measurements and geometry
acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, pp. 98–103

measurements and geometry
valery d. mazin1
1 peter the great st. petersburg polytechnic university, russia

section: research paper
keywords: measurements; geometry; projective metric; basic measurement equation; geometric space
citation: valery mazin, measurements and geometry, acta imeko, vol. 10, no. 2, article 14, june 2021, identifier: imeko-acta-10 (2021)-02-14
section editor: francesco lamonaca, university of calabria, italy
received april 2, 2021; in final form may 18, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: valery d. mazin, e-mail: masin@list.ru

1. introduction

the purpose of this article is to define the commonalities between the concept of measurements and geometry, using the elements of geometry to model the basic elements of a measurement process. among the many approaches that are used to describe fundamental measurement categories, the geometrical approach is often underrated, despite the fact that geometry as a science originated from measurements and only later turned to a new, higher level of generality. this paper attempts to argue that measurements and geometry are related, and that geometry is not just another branch of mathematics. at all times, the most prominent authorities in the scientific world have acknowledged the fundamental and special place that geometry takes in the system of exact sciences. thus, spinoza believed that it is geometry that "reveals a causal connection in nature".
newton said that "geometry expounds and justifies the art of measurement" [1]. in [2] we find einstein's statement, according to which "geometry must precede physics, since the laws of the latter cannot be expressed without geometry. therefore, geometry must be considered as a science, logically preceding every experience and every experimental science." a remarkable illustration of this thought is also presented in the book by b. mandelbrot, "the fractal geometry of nature" [3]. in "the encyclopedia of mathematics" [4], the special role of geometry is characterized in the following way: "developments of geometry and its applications, advances in geometric perception of abstract objects in various areas of mathematics and natural science provide solid evidence of the importance of geometry as one of the most profound and fruitful means for cognizing reality" [5], [6]. today, measurement specialists rarely use the geometrical apparatus, both in general and in particular cases (with the exception, perhaps, of measurements at the elementary level). instead, the analytical approach dominates the field. however, [7] highlights the huge heuristic value of the geometric representation of the concepts of analysis, saying that geometry "is becoming increasingly important in … physics. it simplifies mathematical formalism and deepens physical comprehension. this renaissance of geometry has had an impact not only on the special and general theory of relativity, obviously geometric in nature, but also on other branches of physics, where the geometry of more abstract spaces is replacing the geometry of physical space." today, no one seems to deny the fact that the science of measurements is actually a metascience, which is used in all natural and technical sciences, to say the least.
abstract — the paper is aimed at demonstrating the points of contact between measurements and geometry, which is done by modelling the main elements of the measurement process by the elements of geometry. it is shown that the basic equation for measurements can be established from the expression of the projective metric and represents a particular case of it. commonly occurring groups of functional transformations of the measured value are listed. nearly all of them are projective transformations, which have invariants and are useful if greater accuracy of measurements is desired. some examples are given to demonstrate that real measurement transformations can be dealt with via fractional-linear approximations. it is shown that basic metrological and geometrical categories are related, and a concept of seeing a multitude of physical values as elements of an abstract geometric space is introduced. a system of units can reasonably be used as the basis of this space. two tensors are introduced in the space. one of them (the affinor) describes the interactions within the physical object; the other (the metric tensor) establishes the summation rule on account of the random nature of the components.

for this reason, the apparatus has to be represented by disciplines with the same or higher order of generality. geometry is just such a discipline. section 2 shows that the basic equation for measurements is a special case of the expression of the projective metric. in section 3, functional measurement transformations are looked at in the context of group theory. section 4 is devoted to identifying the relationship between metrological and geometrical categories. the concluding section summarizes the main idea of the paper and points out its practical usefulness.

2. projective metric and basic measurement equation

the essence of any measurement has always been a comparison with a known unit.
among the various geometric systems, the most general one is projective geometry, which, according to m. komatsu [8], represents geometry as a whole. projective geometry only studies the mutual relations between figures and in this sense is akin to measurements. a segment of a numerical axis can traditionally represent the value of a measured quantity. in projective geometry, the distance between two points is determined using the cayley metric (projective metric)

$l = c\,|\ln V|$ , (1)

where $c$ is a constant and

$V = \dfrac{x_3 - x_1}{x_2 - x_3} \Big/ \dfrac{x_4 - x_1}{x_2 - x_4}$ (2)

is the complex, or double, ratio of four points of a straight line, $x_1, x_2, x_3, x_4$ being the coordinates of the points on the line. let $c = 1$, $x_3 = 0$, $x_4 = \infty$. then from the equations above it follows that

$l = \left|\ln\left(\frac{x_1}{x_2}\right)\right| = \left|\ln\left(\frac{x_2}{x_1}\right)\right|$ , (3)

hence

$\mathrm{e}^{\,l \cdot \mathrm{sgn}(x_2 - x_1)} = \frac{x_2}{x_1}$ (4)

and

$x_2 = x_1 \cdot \mathrm{e}^{\,l \cdot \mathrm{sgn}(x_2 - x_1)}$ . (5)

the meaning of the quantities in the last equation leaves no doubt that what we have here is the "basic equation of measurements", usually written as

$x = \{x\}[x]$ , (6)

where $x$ is the measured quantity,

$\{x\} = \mathrm{e}^{\,l \cdot \mathrm{sgn}(x_2 - x_1)}$ (7)

is its numerical value, and $[x]$ is the quantity unit. the latter is taken for granted and does not seem to require any proof. however, as we can see, it is deduced from the definition of the projective metric, a fact that can hardly be accidental. thus, it is worth mentioning a statement by the famous mathematician hölder [9]: "to prevent misunderstanding, i note here that the axioms of the theory of quantities as they appear here should not be presumed in geometry or applied to segments and volumes. on the contrary, there are examples of purely geometric axioms for points and segments, from which it can later be proved ... that for segments there are facts that in the general theory of measurable quantities are presupposed as axioms".
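the derivation of (1)–(5) can be checked numerically; a minimal sketch with c = 1, x3 = 0 and x4 pushed towards infinity (the sample values are arbitrary):

```python
import math

def cross_ratio(x1, x2, x3, x4):
    """Double (cross) ratio V of four collinear points, equation (2)."""
    return ((x3 - x1) / (x2 - x3)) / ((x4 - x1) / (x2 - x4))

def projective_distance(x1, x2, c=1.0):
    """Cayley metric l = c*|ln V| with x3 = 0 and x4 -> infinity,
    where V degenerates to x1/x2 (equation (3))."""
    return c * abs(math.log(x1 / x2))

# with c = 1, equation (5) recovers the basic measurement equation:
# x2 = x1 * exp(l * sgn(x2 - x1)), with x1 playing the role of the unit
x1, x2 = 1.0, 7.5
l = projective_distance(x1, x2)
numerical_value = math.exp(l * math.copysign(1, x2 - x1))
# numerical_value equals x2 / x1, i.e. the numerical value {x} = 7.5
```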
in connection with the above, let us make the following remark: from (1) it follows that the logarithmic scale, widely used in measurements as well as in physics and technology, is nothing but a scale in the projective metric. in general, the simple relations stated above suggest that there is a principled connection between the fundamental concepts of geometry and measurements. the basic equation for measurements, fundamental and meaningful in itself, happens to be a particular case of a fundamental geometric relationship too.

3. groups of functional measurement transformations

in addition, measurements and geometry are related by the fact that invariants are widely used in both disciplines. in measurements, this leads to improved accuracy. [10] shows an example of the invariance principle applied to a simple ratio of three values of the measured quantity under the affine measurement transformation. in geometry, invariants generally have fundamental significance, since according to f. klein's "erlangen program" [11], the various geometries represent the invariant theories of the relevant transformation groups. it should be noted that the transformation function of a measurement channel, $y = f(x)$, certainly belongs to one of the following groups (we do not mean the groups mentioned in the "erlangen program"):

$y = x$ (8) is the identity group,
$y = x + \beta$ (9) is the shift group,
$y = \alpha \cdot x$ (10) is the similarity group,
$y = \alpha \cdot x + \beta$ (11) is the affine (linear) group,
$y = (\alpha \cdot x + \beta)/(\gamma \cdot x + \delta)$ (12) is the projective (fractional-linear) group,
$y_2 \ge y_1$ for $x_2 \ge x_1$, or $y_2 \le y_1$ for $x_2 \ge x_1$ (13) is the group of monotonous transformations.

all these transformations, except for (13), have invariants. such an invariant for (12), which includes all the previous groups (8) through (11), is the complex ratio of four points on a straight line [12], [13].
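the invariance of the complex (cross) ratio under the projective group (12) is easy to verify numerically; the map coefficients below are arbitrary illustrative values:

```python
def cross_ratio(x1, x2, x3, x4):
    """Complex (double) ratio of four points on a line."""
    return ((x3 - x1) / (x2 - x3)) / ((x4 - x1) / (x2 - x4))

def projective(x, a=2.0, b=1.0, g=0.5, d=3.0):
    """Fractional-linear map y = (a*x + b)/(g*x + d) of group (12);
    the coefficients are illustrative, not from the paper."""
    return (a * x + b) / (g * x + d)

pts = (1.0, 2.0, 4.0, 9.0)
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x) for x in pts))
# before == after: the cross-ratio is the invariant of the group (12)
```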
for (11), the invariant is the simple ratio of three points on a straight line, $(x_2 - x_1)/(x_3 - x_2)$; for (9) and (8), besides the two indicated invariants, $x_2 - x_1$, the usual euclidean distance between two points lying on the coordinate axis, is invariant. the last two types of transformations are non-linear in general. if the nonlinearity is small, then most commonly the corresponding experimental dependence can be satisfactorily approximated by a fractional-linear function [12], [14] – [16]. the remarkable property of the latter is that it belongs to the group of projective transformations while its form can vary a lot. such a transformation can be visualized as an image of a projection of the input scale onto the output scale. the group property is expressed in the fact that the superposition of a series of fractional-linear functions is a function of the same kind and does not lead to higher complexity (figure 1), while the inverse transformation is also fractional-linear. thus, a unified mathematical description becomes possible both for the intermediate transformations in the channel and for the whole transformation. for significantly nonlinear transformations, or for improved accuracy of the approximation, several fractional-linear functions should be used. they are either summed up or applied to sequential sections of the characteristic (piecewise approximation). in the summation case, the output value is obtained as the sum of the results of fractional-linear transformations. to make this possible, the following conditions must be met: the result of the approximation by the function

$y = Q(x)/P(x) = \sum_{i=0}^{m} a_i x^i \Big/ \sum_{i=0}^{n} b_i x^i$ (14)

has single roots in the denominator, and $m$ does not exceed $n$ by more than 1.
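the group property mentioned above — that the superposition of fractional-linear functions is again fractional-linear — follows from the 2×2 matrix representation of such maps; a short sketch with arbitrary coefficients:

```python
import numpy as np

def compose(m1, m2):
    """Composing fractional-linear maps corresponds to multiplying
    their 2x2 coefficient matrices [[a, b], [c, d]] (m1 applied first)."""
    return m2 @ m1

def apply_map(m, x):
    """Evaluate y = (a*x + b)/(c*x + d) for matrix m = [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return (a * x + b) / (c * x + d)

m1 = np.array([[2.0, 1.0], [0.5, 1.0]])   # y1 = (2x + 1)/(0.5x + 1)
m2 = np.array([[1.0, -3.0], [0.2, 1.0]])  # y2 = (y1 - 3)/(0.2*y1 + 1)
x = 4.0
direct = apply_map(m2, apply_map(m1, x))
via_product = apply_map(compose(m1, m2), x)
# direct == via_product: the superposition stays fractional-linear
```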
if $y$ is a proper fraction, then it can be converted into the sum $\sum_{i=1}^{n} A_i/(x - \alpha_i)$, where $\alpha_1, \dots, \alpha_n$ are the roots of the denominator, and the coefficients are found from the equation

$A_i = Q(\alpha_i)/P'(\alpha_i)$ , (15)

where

$P'(\alpha_i) = P'(x)|_{x = \alpha_i}$ . (16)

if $m = n$, then a constant is added to the sum of the fractions as a result of extracting the integer part. if $m = n + 1$, then a linear function is added. in both instances we deal with a particular case of a fractional-linear function. if any of the roots of the denominator are complex, nothing changes in principle, but some of the summable fractional-linear functions turn out to be complex. at the same time, all their remarkable properties are sustained, including the presence of an invariant, the complex ratio of four arbitrary points, $\frac{x_3 - x_1}{x_2 - x_3} \Big/ \frac{x_4 - x_1}{x_2 - x_4}$. an example is the approximation of the calibration characteristics of two temperature sensors. let the first one be a platinum thermoresistor (its characteristic is utilized to model the international practical temperature scale). in the range of –259 °c to +660 °c, we obtain

$W = \dfrac{-2.244 \cdot 10^4}{t + 2.768 \cdot 10^3} + \dfrac{2.925}{t + 280.063} + \dfrac{-7.227 \cdot 10^4}{t - 7.947 \cdot 10^3}$ , (17)

where $W$ is the ratio of the resistance at temperature $t$ in °c to the resistance at zero celsius. the standard uncertainty of this approximation is 0.7 °c, which corresponds to 0.08 % with respect to the temperature range and is considered acceptable for most practical cases. let the second sensor be a pt/rh thermocouple with 30 %/6 % rh content. in the range of 0 °c to 1800 °c, its characteristic is approximated by the expression

$E = \dfrac{-4.514 \cdot 10^6 + 5.287 \cdot 10^7 i}{t - 186.827 - 2.807 \cdot 10^3 i} + \dfrac{-4.514 \cdot 10^6 - 5.287 \cdot 10^7 i}{t - 186.827 + 2.807 \cdot 10^3 i} - \dfrac{4.86 \cdot 10^8}{t - 1.302 \cdot 10^4}$ (18)

and the standard uncertainty equals 0.3 %. in the case of piecewise fractional-linear approximation, there are no restrictions with respect to accuracy (uncertainty), but it is more difficult to implement.
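equation (17) can be evaluated as a sum of three fractional-linear terms; the code below takes the coefficients as printed (our reading of the flattened layout in the source) and checks the physically expected normalization W(0 °c) ≈ 1:

```python
def w_ratio(t):
    """Sum of three fractional-linear terms approximating the platinum
    thermoresistor characteristic W(t) = R(t)/R(0 degC); the coefficient
    grouping is our interpretation of equation (17), not a verified one."""
    return (-2.244e4 / (t + 2.768e3)
            + 2.925 / (t + 280.063)
            - 7.227e4 / (t - 7.947e3))

# sanity check: by definition W(0) should be close to 1,
# and W should grow with temperature
print(round(w_ratio(0.0), 3))
```

the fact that the three printed coefficients reproduce W(0) ≈ 1 is a useful consistency check on the reconstruction of the flattened formula.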
whichever fractional-linear approximation case is chosen, the need for mathematical methods is limited to the four arithmetic operations. using invariance fits the purpose of measurement, which is not about transformation, but rather about preserving the information. indeed, in order to restore the characteristics of the original signal using the measured characteristics of the converted signal, some kind of relationship between the signals has to be retained during the chosen transformation. from this perspective, the measuring transducer should be called a transmitter rather than a transducer, i.e., in this case the name is not associated with the main property of the object but reflects its secondary property instead. this happens because the transfer and the transformation of the quantity value (including scaled transformation, i.e., energy level transformation) correlate to each other in the same way as the essence and a phenomenon; in other words, a dualism takes place. it is the transformation, not the transfer of the value, that is visible to an observer. as in other similar cases, the object was named for its superficial, rather than essential, property. ideally, in all types of measurement transformations, such as the quantity type transformation, identity transformation, modulation and demodulation, transformation of the form of representation (e.g., analog-digital), code conversions, etc., the amount of information remains intact. in these transformations, errors mean the loss of information, and it is the degree of this loss, not the type of transformation, that determines the quality of a measuring channel.

figure 1. the circuit of fractional-linear transformations in a measuring channel: $y_1 = (a_1 x + b_1)/(c_1 x + 1)$, $y_i = (a_i y_{i-1} + b_i)/(c_i y_{i-1} + 1)$, $y_n = (a_n y_{n-1} + b_n)/(c_n y_{n-1} + 1)$, with the overall transformation $y = (a x + b)/(c x + 1)$.
perhaps the reason for the difficulty with the classification of measuring transducers, which has still not been resolved, is that all the variants of such a classification known so far are created on that side of the above-mentioned dualism that characterizes the phenomenon rather than the essence. in other words, what we are trying to do is classify the types of transformations, whereas what we should do is classify the types of information preservation. this situation, which occurs when fractional-linear and related transformations are used, is consistent with the general concept of the measurement procedure. the very first procedures, for example, when length was measured, consisted of two stages: the mutual displacement of the measured object and the measure, and their comparison with each other. historically, this original essence of measurement is now perceived only through its second stage, while the first stage is actually no less significant. it is worth highlighting that what in early measurements was the omnipresent mechanical displacement is nowadays replaced by measurement transformations. the analogies between displacements in an ordinary space and measurement transformations can be formalized even further. in length measurements, the correlation we wish to preserve is seen as the distance between the points, and this distance remains unchanged no matter what the shifts and turns are. if we determine the distances in terms of the values that are preserved during such transformations for a set of all possible signals, we will arrive at the geometrical interpretation of a measurement procedure as a transformation that preserves the distance, i.e., as "displacements" in the relevant space. with fractional-linear transformations (for example, when using a voltage divider), the complex ratio $V$ is preserved. but the projective metric is thereby preserved too.
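the preservation of the projective metric by a divider can be illustrated in a few lines; the attenuation factor is an arbitrary illustrative value, not taken from the paper:

```python
import math

def projective_length(x1, x2):
    """Projective-metric distance between two scale points (c = 1,
    reference points at 0 and infinity): l = |ln(x2 / x1)|."""
    return abs(math.log(x2 / x1))

def divider(x, k=10.0):
    """Idealized voltage divider with attenuation factor k
    (an illustrative value)."""
    return x / k

u1, u2 = 2.0, 6.0                 # a section of the input voltage scale
l_in = projective_length(u1, u2)
l_out = projective_length(divider(u1), divider(u2))
# l_in == l_out: the divider acts as a 'displacement' in the projective
# metric, preserving the length of the scale section it maps
```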
Since fractional-linear transformations preserve the projective metric, it is only natural to call them "displacements in a projective space". At the same time, they are the most general among all the above transformations (except for monotonic transformations) that preserve distance. It can be assumed that it is this property that permits us either to correct errors effectively or to implement algorithms that eliminate errors in the first place. In these terms, the stages of the measurement procedure consist of displacements in a projective space and comparison with a master reference, i.e. they coincide with an ordinary length measurement procedure. A typical example is measuring voltage with a bitwise-balancing voltmeter whose range is smaller than the voltage measured. In such a case the voltage is pre-attenuated by a divider. The divider produces the corresponding section of the voltage scale at its output, preserving its length in the projective metric, while the voltmeter makes the comparison. As a preliminary conclusion, we can state so far that: any functional measurement transformation belongs to the group of monotonic transformations; the most general monotonic transformation is the fractional-linear variety, which can describe projective relations analytically and act as displacements (whereas the dimensions of the moving object are invariant).

4. Correspondence between metrological and geometric categories

Table 1 shows the mutual correspondence of the fundamental metrological and geometric categories. Any line in Table 1 can be used as a departure point for further research. The concept of the space of affine connectivity takes first place in the table among the geometrical concepts. It represents a manifold in which the field of the connectivity object is defined. The term "manifold" generally needs to be defined; however, in this case we will skip that, as its meaning is obvious.
As we know, the connectivity object characterizes a point of the manifold at which a local frame (an affine frame referring precisely to the given point) is defined. In turn, the affine frame is a combination of the point itself and a coordinate basis. The connectivity object gives an answer to the question of how the coordinates of an arbitrary vector change as it is displaced along a certain curve while preserving orientation. In the general case, the coordinates will indeed change, because as the vector moves from one point to another, the local frame to which the vector is momentarily related changes too. The space of affine connectivity is poor in terms of properties, but it becomes richer once a metric is introduced in it by means of defining a metric tensor. The space then becomes Riemannian, a space of curved vectors. Such vectors represent physical quantities that characterize a specific physical object [17]. For such a vector space, a system of units can serve as a basis, since a unit of any quantity can be expressed via the basic units of the system. A good example is a vector acceleration receiver on a moving ship. This receiver reads the accelerations in an acoustic wave in water, and its purpose is to identify the locations of the sources of noise. The main part of the receiver is shown in Figure 2. We see six flat piezoelectric plates, rigidly connected at the outside edges to pairs of strings at their middles, each pair of strings being fixed on the frame and running in one of three mutually perpendicular directions. The inner edges of the piezoelectric plates are perpendicularly joined to the faces of a cubic inertial element. When the frame experiences acceleration, the inertia force acts upon the cubic element; this force can be decomposed along the axes perpendicular to the planes of the piezoelectric plates. These force components cause electrical charges.
The axes perpendicular to the planes of the piezoelectric plates form a coordinate basis and, together with the center of gravity of the cubic element, create a local frame. As the ship moves and experiences pitching and rolling, the location and orientation in space of the local frame change, whereas the direction of the vector of acceleration of the water particles in the acoustic beam remains the same. As a result, the projections of this vector on the axes of the coordinate basis change, as determined by the connectivity object. The connectivity object is a system of numbers called connectivity coefficients. If each and every connectivity coefficient turns into zero, the manifold becomes an affine space. Vectors can be defined in it, so the space is a vector space. An affine space is a model of any particular object whose regular physical properties can be described by simple additive relationships. The space describing a measuring instrument is generally multidimensional. In [18], we can see an application of the apparatus of multidimensional spaces. In such a space, the points and the vectors connecting them to the origin correspond to physical quantities. The basis of the space is the system of units.

Table 1. Correspondence between metrological and geometric categories.

Metrological category | Geometric equivalent
An object, a measuring instrument with deterministic relationships | The space of affine connectivity, or a Riemannian space
Physical quantity | Point in the space, vector
System of units | Basis
Probability characteristics and statistical relationships of physical quantities | Metric tensor (determines the space geometry)
Analog measurement transformation | Affinor (determines the relationship of the vectors)
Analog-to-digital conversion | Vector subtraction
Preservation of the measurement information | Invariance
An affine space can be identified for an object with any other kind of regular physical properties, but only in an infinitesimal region and with an accuracy no greater than that of the first order [19]. Let $y = f(x_1, \dots, x_n)$ be any function of $n$ variables. Then

$$\mathrm{d}y = \frac{\partial y}{\partial x_i}\, \mathrm{d}x_i \qquad (19)$$

(implied summation over $i$) can be considered a vector, since the coordinates $(\partial y / \partial x_i)\,\mathrm{d}x_i$ are (to a first approximation) affine. The rule for adding the vectors, and consequently the space geometry, is determined by a metric tensor. Since, for generality, the values should be considered random, the addition rule must take into account their probability characteristics and statistical relationships. As shown in [20], if the coordinates of the vectors are expanded uncertainties, then the metric tensor is determined by the types of probability distribution, the coverage probability, the ratio of the terms, and the mutual correlation. To date, the components of such a tensor for the most popular probability distributions and for 0.95 and 0.99 coverage probabilities have been determined by A. Chepushtanov [20]. If the coordinate system is formed by standard uncertainties, then the metric tensor is determined only by the mutual correlation. The analog measurement transformation, which takes the design parameters of the device and the influencing factors as input quantities, has as its geometrical equivalent the affinor, the rule stating that each vector $\mathbf{d}x$ is matched with a certain vector $\mathbf{d}y$. The affinor is a square-matrix bivalent tensor. Since the result of the analog-to-digital conversion is a number, it is obvious that the quantity separates from the quality; in other words, the quantity is rid of its physical carrier. Taking the logarithm of the basic equation of measurements yields two vectors, one for the numerical value and another for the unit.
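The role of the metric tensor in combining uncertainties can be illustrated numerically. If the vector coordinates are the sensitivity-weighted standard uncertainties, the tensor reduces to the correlation matrix, and the combined standard uncertainty is the length of the contribution vector in that metric. All numbers below are hypothetical:

```python
import numpy as np

# Coordinates c_i * u(x_i): sensitivity coefficients times standard
# uncertainties of two input quantities (illustrative values).
v = np.array([0.3, 0.4])

r = 0.5                          # assumed mutual correlation coefficient
g = np.array([[1.0, r],          # metric tensor; for standard uncertainties
              [r, 1.0]])         # it is just the correlation matrix

u_c = float(np.sqrt(v @ g @ v))  # combined standard uncertainty

# Limiting cases: r = 0 gives the usual root-sum-of-squares, 0.5;
# r = 1 gives the plain arithmetic sum, 0.7.
```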
We do not touch here on the quasi-uncertainty arising when taking the logarithm of a unit; we note only that what is logarithmized is not the number "one" but a unit of a physical quantity, which can have a different real meaning. Thus, in the logarithmic representation, the geometric essence of the analog-to-digital conversion is that the unit vector is subtracted from the full quantity vector, which is incidentally nothing other than computing how many units of the quantity (or which part of the unit) is included in the dimension of the quantity. Finally, as stated above, the preservation of the measurement information characterizing the object conceptually corresponds to invariance. The information losses that are inevitable in any measurement transformation (the introduction of an uncertainty) mean that this principle of correspondence is compromised. When searching for the points of contact between modern geometry and measurements, special attention should be paid to non-Euclidean geometries. Riemannian geometry is one of them, since its features are quite visible in the space of quantities. For instance, the ends of the logarithmic vectors of reciprocal quantities (such as resistance and conductance) are located at diametrically opposite points of a sphere that has its center at the origin of the coordinates. At the same time, there is practically no difference between them: they represent the same characteristic of the system. As is known, Riemannian geometry is a spherical geometry with the additional condition that the opposite points of the sphere are identified. The obvious similarity between the physical and geometrical facts can hardly be accidental. Moreover, there is evidence that different geometries work for different measurement ranges of the same physical quantities, which is only natural given the wide generality of their geometric space. Further research in this area seems promising, and new interesting results are expected.
Other promising applications of projective geometry are in explaining the laws of physics and in image processing [22]. The basic concept here is projective mapping. It is most frequently used as a visual image rather than as a mathematical structure. When used in a strict geometric sense, this concept allows us to describe patterns of any level of complexity. The mapping parameters then logically become variable, which follows from the change in the position of the projection center. This position, in turn, is affected by physical causes, which can now be identified and explored. Thus, the fundamental metrological categories have geometric equivalents. Problems in the area of measurements can be defined using geometric terminology. Solving metrological problems seems possible if the powerful apparatus of modern geometry is used.

5. Conclusions

The most important geometric concepts have equivalents in measurement theory. This knowledge allows us to apply the geometric approach and apparatus to formulate a single mathematical description for important measurement categories, obtain new theoretical results, and model measuring procedures. In particular, projective transformations can be used in such modelling. Due to their group properties, the characteristics of measuring devices can be described in a significantly simpler manner, whereas the present invariant allows us to increase the accuracy of measurements. A model for any physical object, including the measuring device itself, can be represented as a vector space whose elements, in turn, represent the quantities characterizing the object. This approach can be used in the metrological analysis of measuring devices, where an important role is given to the summation of uncertainties.

Figure 2. The main part of a vector acceleration receiver.
For such a summation in the geometric model, a metric tensor of the space is used, and in the case of standard uncertainties such a tensor morphs into the coefficient of mutual correlation. Thanks to the affinity of the concepts of geometry and measurement theory, measuring concepts and facts can be considered from a geometric standpoint and bring new interesting results.

References

[1] I. Newton, Mathematical Principles of Natural Philosophy, University of California Press, 1999, ISBN 978-0-520-08816-0.
[2] B. G. Kuznetsov, Einstein: Life, Death, Immortality, Nauka, Moscow, 1980 [in Russian].
[3] B. Mandelbrot, The Fractal Geometry of Nature, Computer Research Institute, Moscow, 2002 [in Russian].
[4] The Encyclopedia of Mathematics, Sovetskaia Entsiklopediia, Moscow, 1977 [in Russian].
[5] A. A. Penin, Analysis of Electrical Circuits with Variable Load Regime Parameters (Projective Geometry Method), Springer, Cham Heidelberg New York Dordrecht London, 2015, ISBN 978-3-319-16351-2.
[6] A. S. T. Pires, A Brief Introduction to Topology and Differential Geometry in Condensed Matter Physics, Morgan & Claypool, 2019, ISBN 978-1-64327-371-6.
[7] B. F. Schutz, Geometrical Methods of Mathematical Physics, Cambridge University Press, Cambridge, 1980.
[8] M. Komatsu, Geometry Variety, Znanie, Moscow, 1981 [in Russian].
[9] O. L. Hölder, Die Axiome der Quantität und die Lehre vom Maß, Ber. über die Verhandlungen der Königlich Sächsischen Ges. der Wiss., Mathem.-Phys. Klasse, 1901, pp. 1-65 [in German].
[10] E. M. Bromberg, K. L. Kulikovsky, Test Methods for Improving Measurement Accuracy, Energija, Moscow, 1978 [in Russian].
[11] F. C. Klein, A comparative review of recent researches in geometry, Bull. New York Math. Soc., N.Y., 1892-1893, pp. 215-249.
[12] H. T. Nguyen, V. Y. Kreinovich, C. Baral, V. D. Mazin, Group-theoretic approach as a general framework for sensors, neural networks, fuzzy control and genetic boolean networks, 10th IMEKO TC7 Int. Symposium, Saint Petersburg, Russia, 30 June - 2 July 2004, pp. 65-70. Online [accessed 22 June 2021]: https://www.imeko.org/publications/tc7-2004/imeko-tc7-2004-044.pdf
[13] I. N. Krotkov, V. Y. Kreinovich, V. D. Mazin, General form of measurement transformations which admit the computational methods of metrological analysis of measuring-testing and measuring-computing systems, Measurement Techniques 30 (1987), pp. 936-939. DOI: 10.1007/BF00864981
[14] O. A. Tsybulskii, Use of the complex ratio method in wide-range measurement devices, Measurement Techniques 56 (2013), pp. 232-234. DOI: 10.1007/s11018-013-0185-2
[15] O. A. Tsybulskii, The fractional-linear measurement equation, Measurement Techniques 60 (2017), pp. 443-450.
[16] O. A. Tsybulskii, Projective properties of wide-range measurements, Measurement Techniques 55 (2013), pp. 37-40. DOI: 10.1007/s11018-013-0155-8
[17] V. D. Mazin, Physical quantity as a pseudo-Euclidean vector, Acta IMEKO 4(4) (2015), pp. 4-8. DOI: 10.21014/acta_imeko.v4i4.268
[18] B. V. Shebshaevich, P. P. Dmitriev, N. V. Ivantsevich et al., Network Satellite Radio Navigation Systems, Radio i Svyaz, Moscow, 1993 [in Russian].
[19] P. K. Rashevsky, Riemannian Geometry and Tensor Analysis, Nauka, Moscow, 1967 [in Russian].
[20] V. D. Mazin, A. N. Chepushtanov, Application of a vector analytic model for metrological analysis of an infrared Fourier spectrometer, Measurement Techniques 51(2) (2008), pp. 152-157. DOI: 10.1007/s11018-008-9013-5
[21] O. A. Tsybulskii, Analog-to-digital conversion with a hyperbolic scale, Metrology 12 (1990), pp. 9-19 [in Russian].
[22] I. S. Gruzman, V. S. Kirichuk, V. P. Kosykh, G. I. Peretyagin, A. A. Spector, Digital Image Processing in Information Systems, Publishing House of the Novosibirsk State Technical University, Novosibirsk, 2002 [in Russian].

Acta IMEKO, December 2013, Volume 2, Number 2, 86 - 90, www.imeko.org

Establishing a metrological infrastructure and traceability of electrical power and energy in the R. Macedonia

Ljupco Arsov, Marija Cundeva-Blajer
Ss. Cyril and Methodius University, Faculty of Electrical Engineering and Information Technologies-Skopje, Ruger Boskovic b.b., POB 574, 1000 Skopje, R.
Macedonia

Section: Technical Note

Keywords: metrological infrastructure; power and energy measurements; calibrations; standards

Citation: Ljupco Arsov, Marija Cundeva-Blajer, Establishing metrology infrastructure and traceability of electrical power and energy in R. Macedonia, Acta IMEKO, vol. 2, no. 2, article 15, December 2013, identifier: IMEKO-ACTA-02 (2013)-02-15

Editor: Paolo Carbone, University of Perugia

Received April 12, 2013; in final form December 2, 2013; published December 2013

Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the Faculty of Electrical Engineering and Information Technologies-Skopje.

Corresponding author: Marija Cundeva-Blajer, e-mail: mcundeva@feit.ukim.edu.mk

1. Introduction

After the R. Macedonia gained independence, the process of creating a metrological system and the infrastructure necessary for the establishment of measurement standards and metrological traceability began. The development started with the creation of the country's own legislation, institutions, equipment and infrastructure for the application of legal and industrial metrology. For the measurement of electrical energy and power within the frame of legal metrology, nothing was inherited from the former Yugoslav system. However, the significance of these measurements imposes quick measures to be undertaken, because of the large needs and quantities involved, the large finances connected to the trade of electrical energy, and the need for consumer protection and for the assurance of conditions of fair trade. The adequacy of these solutions, the need for their upgrading and improvement, as well as the model which would create conditions for fair trade and the protection of electrical energy consumers, are discussed further below.

2.
Current state of the art

The field of measurements of electrical energy and power is legally regulated by the Law on Metrology from 2002 [1]. Besides this law, a set of rulebooks [2-4] has been adopted. Currently, a process of transposition of the EU documents in the field of metrology is under way, concerning certain types of measuring instruments (sector directives) [5]. Through the Law on Metrology [1], the condition of assurance of fair trade is posed as one of the basic objectives in the legal regulation of the measurements of electrical energy. This fair trade is to be assured by exact measurements of the electrical energy, with legally approved instruments in compliance with the Rulebook on Measuring Instruments [3], calibrated and verified, and with measurements traceable to the national standard. The traceability requirement, defined in Article 3 of the Law on Metrology [1], although not explicitly stated, should assure measurement traceability to the national standard of the R. Macedonia, to national standards of other states and to international standards.

Abstract. In the paper, the current state and the establishment of a metrological infrastructure and traceability of the measurements of electrical power and energy, i.e. the creation of conditions for unity of power and energy measurement results, international comparability of the results, and measurements which assure fair trade and consumer protection, are elaborated. Besides the legal aspects, other features are also discussed, such as the needs for calibration and verification in the field of electrical power and energy, the participants in the chain of measurements and trade of electrical energy, and the organization, infrastructure, methods and systems of calibration and verification. An organization and certain documents which will contribute to the establishment of a system in accordance with international standards and practice, as well as to traceability and fair trade, are proposed.
Taking into account that the trade of electrical energy is not only internal but to a large extent international, it is necessary to assure traceability and comparability of the results not only on a national but also on an international level. The same applies to the usage of electricity meters, which should comply with the specifications of the domestic regulation [3] as well as with the European regulation [5]. From the aspect of legal regulation, this requirement is fulfilled through the transposition of the EU directives and the harmonization of the national laws and regulations with the EU directives in the field of metrology. With respect to the compliance of the electricity meters with the requirements of the regulation, which is declared through the affixing of the "CE" mark and which presumes the existence of a notified body participating in the procedure of approval and marking, it is unclear how the domestic producers could address this question, since the R. Macedonia is not a member of the EU. According to the Law on Metrology [1], in order to assure the conditions for fair trade, the electricity meters should be verified, which is done by legal and physical entities selling electrical energy, and formally by the Bureau of Metrology of the R. Macedonia. If the requirements of the international standard for bodies performing inspection [6] and the international standard for testing and calibration laboratories [7] are taken into account, this solution, inherited from the former system, is in contradiction with the requirements of these standards with respect to the independence, impartiality, integrity and confidentiality of the bodies performing verification (control), e.g. calibration. This has become especially obvious after the process of privatization of the distribution of electrical energy in 2006.
The control performed by an interested body is in conflict with the requirement for independent, impartial and confidential control of the electrical energy measurements. The market of electrical energy in the R. Macedonia is not yet well developed. The main participants in the trade of electrical energy are EVN-Macedonia, MEPSO, ELEM, licensed trade houses as well as the large industrial consumers (Mak-Steel Skopje, FENI Industry Kavadarci, Mital Steel-Skopje, etc.), light industry and the households. The Bureau of Metrology of the R. Macedonia is in charge of assuring the unity of the measurements of electrical energy, the national standard of electrical energy, type approvals of the electricity meters produced according to the legal requirements, periodical verification/calibration, metrological surveillance of the electricity meters and measurement traceability to the national standard. The Bureau of Metrology of RM, with the help of the EU, has formed and partially equipped its laboratories in recent years. However, in the laboratory for electrical energy there is no electrical power standard, so the Bureau is not in a position to assure traceability of the electrical energy measurements, i.e. it is not in a position to calibrate standards and instruments for electrical power. The legal requirement for verification of the electricity meters is realized through verification (calibration) of electricity meters in the EVN-Macedonia laboratory for verification of electricity meters, which has at its disposal 6 EMH/MTE verification systems equipped with a power standard of accuracy class 0.05. After the control/verification by EVN-Macedonia, the Bureau of Metrology formally performs the verification and sealing on the basis of the control results. The Bureau of Metrology performs the metrological surveillance and verification through its Department for Verification of Electricity Meters.
However, the methods of surveillance of the verification (metrological control), in terms of the equipment for control, the sampling frequency, the number of samples and the application of proper statistical methods according to the standards (IEC 62058-11 [8], IEC 62058-21 [9], IEC 62058-31 [10]), do not give enough confidence regarding possible deviations from the requirements of the Rulebook on Measuring Instruments [3]. The fee for verification of electricity meters which is paid to the Bureau of Metrology of RM (7 euro for three-phase electricity meters and 4 euro for single-phase meters) could be used for a significant improvement of the equipment and the other preconditions and resources for confident metrological control, as well as for the assurance of traceability to the national and international standards [11], [12]. The electrical energy consumption in the R. Macedonia, as in other countries, is connected to the standard of living and is constantly increasing. According to the State Statistical Office, 82.6% (2009) and 83.6% (2010) of the gross domestic consumption of electrical energy in the R. Macedonia, amounting to 8,265,837 MWh (2009) and 8,677,969 MWh (2010), was of domestic production, and around 17% was imported [13, 14]. The biggest consumers of electricity in 2010 were the households with a share of 37.3%, the industrial sectors (energy sector plus industry) with 25.3%, and the other sectors with 17.7% of the gross national electricity consumption. Own consumption (in production, transmission and distribution) of electricity in 2010 was 5%, while distribution losses were 14.7% of the gross national electricity consumption [14]. All this energy is measured at the level of system interconnections, at the level of large consumers, at the level of industry, and at the level of small consumers and households, at different voltage levels, different locations, and with different types of electricity meters for direct and indirect electrical energy measurements. Currently, in the R.
Macedonia, approximately 800,000 electricity meters are in use in the households and 7,500 meters in the industry. The electricity meters used in the households of RM were to a large extent three-phase meters of accuracy class 2, produced by Iskra-Kranj, Slovenia, and to a smaller extent of domestic production by Video Inzenering-Ohrid (under licence of Siemens) and Energetika-VDS Strumica (own development). The electricity meters used in the industry are mainly for indirect measurements, accuracy classes 1 and 0.5, while the electricity meters at the system interconnections are of accuracy class 0.1. Roughly, it can be estimated that these meters are used with approximately 30,000 instrument transformers of accuracy class 0.5. According to the Law on Metrology, first, periodical and, if necessary, extra verification of the electricity meters is foreseen. The legal period for periodical verification of the electricity meters is 10 years, which implies that approximately 80,000 meters should be verified annually. According to the data of the Bureau of Metrology [15], the numbers of electricity meter verifications performed by the BoM in recent years are given in Table 1. These figures comprise the number of first verifications performed by the meter producers (foreign producers), as well as the number of electricity meters periodically verified by EVN-Macedonia. The legal period of verification of instrument transformers is 5 years, so the annual number of verified instrument transformers should be 6,000. All the meters used for billing the electrical energy must have a type approval and verification. The type approval means compliance of the meter type with the technical standards and the legal regulation.

Table 1. Annual verifications of electricity meters performed by BoM.

Year:           2007      2008      2009      2010      2011
Verifications:  101,400   99,800    231,900   180,000   120,000
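The verification workload implied by these figures can be sketched as a quick consistency check (the fleet sizes and legal periods are those quoted above; the comparison itself is only illustrative):

```python
# Fleet sizes and legal re-verification periods quoted in the text.
household_meters = 800_000
meter_period_years = 10
instrument_transformers = 30_000
transformer_period_years = 5

# Required annual throughput to keep the whole fleet within its legal period.
meters_per_year = household_meters // meter_period_years
transformers_per_year = instrument_transformers // transformer_period_years

# Annual verifications actually reported by the BoM (Table 1).
verified = {2007: 101_400, 2008: 99_800, 2009: 231_900,
            2010: 180_000, 2011: 120_000}

# Every reported year meets or exceeds the required 80,000 meters.
meets_requirement = all(n >= meters_per_year for n in verified.values())
```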
In the procedure of type approval, about 30 different tests are performed, including testing of the electrical and metrological characteristics, insulation tests and mechanical tests, as well as some newer tests: software validation, life cycle and confidence. Practically, it is impossible to test each meter; therefore samples of the newly developed meters are taken, and if all the tests are satisfactory, the type approval is issued. The meters identical to the approved ones are considered to comply with the standards. According to the law, each electricity meter used for billing of electrical energy must be verified. For the verification of the meters and instrument transformers a limited set of tests is required, mainly testing of the meter error at different working conditions.

3. Establishing legal requirements, traceability and confidence of electrical power measurements

The current legislation which regulates the measurements of electrical energy in the R. Macedonia is discussed and referred to in Section 2. If this regulation is compared with the regulation for measurement of electrical energy used in the EU countries, it can be noticed that a transposition of the MID directive [5] has been done, but for full coverage of the field of measurements of electrical energy and power it is necessary to apply the European standards EN 50470-1 [16] and EN 50470-3 [17] for classes A, B and C (classes A, B and C correspond to accuracy classes 2, 1 and 0.5, respectively). The standards IEC 62052-11 [18], IEC 62053-21 [19] and IEC 62053-22 [20], for classes 2, 1, 0.5, 0.5 S and 0.2 S, should be applied too. One of the possible problems in the legal harmonization would be the assurance of the independence, impartiality, integrity and confidence of the bodies performing type testing, calibration and verification. Therefore, changes in the Law on Metrology [1] and in the Rulebook [4] are necessary.
The goal of these changes is to enable the activity of independent accredited and authorized bodies for the control of electricity meters, independent accredited calibration laboratories and independent services for electricity meters. One of the possible options for the organization of the legal metrology for electricity meters is given in Figure 1. The verification of the electricity meters and the instrument transformers in an authorized inspection body would be realized according to the sequence in Figure 2. The traceability of the electrical energy measurements to national and international standards could be realized through the three-level hierarchy chain shown in Figure 3. For the realization of the traceability chain it is necessary to create a national laboratory which will be equipped with a national primary standard for electrical energy. This laboratory should fulfil the requirements of the standard ISO 17025 [7], as well as participate in international comparisons with other national laboratories, i.e. calibrate its standard at national laboratories of other states against standards of a higher accuracy class. This national laboratory and the national standard could be within the frame of the Bureau of Metrology of the R. Macedonia, but other solutions are also possible. The laboratory can be created where staff, equipment and space for such a laboratory already exist.

Figure 1. Scheme-proposal on legal metrology of measurements of electrical energy.

According to the practice in most countries, the calibration laboratories for electricity meters would serve the
acta imeko | www.imeko.org december 2013 | volume 2 | number 2 | 89
industry and other owners of instruments for electrical energy and power for calibration and type testing, and should be independent laboratories which fulfil the requirements of the standard iso 17025 [7]. one of the important conditions for ensuring fair trade of electrical energy is the creation of an independent and confident control/inspection body for verification of electricity meters. according to the international standard for control bodies iso 17020 [6], such bodies of type a should fulfil the following requirements: 1. the inspection body and its staff shall be independent of the parties involved, i.e. shall not be the designer, manufacturer, supplier, installer, purchaser, owner, user or maintainer of the items which are inspected, nor the authorized representative of any of these parties. 2. the inspection body and its staff shall not be engaged in any activities that may conflict with their independence of judgement and integrity in relation to their inspection activities. in particular, they shall not become directly involved in the design, manufacture, supply, installation, use or maintenance of the items inspected or similar competitive items. 3. all interested parties shall have access to the services of the inspection body. there shall not be undue financial or other conditions. the procedures under which the body operates shall be administered in a non-discriminatory manner.
besides the organizational, management and documentation requirements, the inspection body must be equipped with competent and responsible staff who will respect the inspection procedures, criteria and ethics, as well as proper equipment which must be calibrated with traceability to the national electrical energy standard. the inspection body should have at its disposal proper procedures and protocols of testing and procedures for processing of the results, as well as statistical indicators for the pool of verified meters. for the activities of this inspection body, besides the standard iso 17020 and the national legislation, further international standards and international guides apply. 4. conclusions the current state and the importance of the measurements of electrical energy for billing and other purposes require quick measures for harmonization and upgrading of the macedonian system of legal and industrial metrology for electrical energy. it is necessary to establish a national standard of electrical energy and a national laboratory for electrical energy, as well as traceability of the electrical energy measurements to them. it is also necessary to create independent competent calibration and testing laboratories, i.e. independent, impartial and confident inspection bodies for control/verification of electricity meters in compliance with the international standards iso 17025 and iso 17020, the standards for electricity meters and the practice in the other countries in europe and the world.
figure 2. verification process.
figure 3. traceability chain of the electrical energy measurements.
the proposed schemes for practicing the legal metrology and establishing of
the traceability chain in the measurements of electrical energy are possible options which should be further elaborated by taking into account all the aspects: legal, current state, staff, technical requirements, as well as the economic aspects. the verification process of figure 2 comprises review and preparation, testing, assessment, sealing or a document for refusal, and return/delivery. the traceability chain of figure 3 runs from international comparisons and calibrations and the national laboratory of electrical energy (primary national standard, class 0.01), through an accredited laboratory for electrical energy (standards of class 0.05) and the inspection body for verification of electricity meters (standards of class 0.05), down to the consumers' electricity meters (classes 2, 1, 0.5 s and 0.2 s). references [1] law on metrology, official gazette of r. macedonia, no. 55/02, 84/07 and 120/09. [2] rulebook of the definitions, nomenclature and symbols, the scope, application and obligation for usage and writing of the legal measurement units, official gazette of r. macedonia, no. 104/2007. [3] rulebook on measuring instruments, official gazette of r. macedonia, no. 17/10. [4] rulebook on determination of the categories and types of measuring instruments for which the verification is obligatory, procedures of verification, deadlines of periodical verification and the categories and types of measuring instruments on which an authorization for verification can be obtained, official gazette of r. macedonia, no. 102/2007. [5] directive 2004/22/ec on measuring instruments (mid), official journal of the european union, 2004. [6] en iso 17020, general criteria for the operation of various types of bodies performing inspection, cenelec, brussels, 2004. [7] en iso/iec 17025, general requirements for the competence of testing and calibration laboratories, cenelec, brussels, 2005. [8] iec 62058-11, electricity metering equipment (ac) acceptance inspection, part 11: general acceptance inspection methods, international electrotechnical commission, geneva, 2008.
[9] iec 62058-21, electricity metering equipment (ac) acceptance inspection, part 21: particular requirements for electromechanical meters for active energy (classes 0,5, 1 and 2), international electrotechnical commission, geneva, 2008. [10] iec 62058-31, electricity metering equipment (ac) acceptance inspection, part 31: particular requirements for static meters for active energy (classes 0,2 s, 0,5 s, 1 and 2), international electrotechnical commission, geneva, 2008. [11] decision on the amount and form of payment of the fee for services of the bureau of metrology and the authorized legal entity, official gazette of r. macedonia, no. 51/2004, no. 64/2008, no. 121/2010. [12] press release no. 0302-864/1 of the bureau of metrology of r. macedonia from 01.03.2010. [13] press release no. 6.1.10.83, state statistical office of r. macedonia from 02.12.2010. [14] press release no. 6.1.11.91, state statistical office of r. macedonia from 30.11.2011. [15] strategic plan for the development of the bureau of metrology and the metrological infrastructure of r. macedonia 2010-2012, bureau of metrology of r. macedonia, 2010. [16] en 50470-1, electricity metering equipment (ac), general requirements, tests and test conditions, metering equipment (class indices a, b and c), cenelec, brussels, 2006. [17] en 50470-3, electricity metering equipment (ac), part 3: particular requirements, static meters for active energy (class indices a, b and c), cenelec, brussels, 2006. [18] iec 62052-11, electricity metering equipment (ac), general requirements, tests and test conditions, part 11: metering equipment, international electrotechnical commission, geneva, 2003. [19] iec 62053-21, electricity metering equipment (ac), particular requirements, part 21: static meters for active energy (classes 1 and 2), international electrotechnical commission, geneva, 2003.
[20] iec 62053-22, electricity metering equipment (ac), particular requirements, part 22: static meters for active energy (classes 0.2 s and 0.5 s), international electrotechnical commission, geneva, 2003.
analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement
acta imeko issn: 2221-870x march 2022, volume 11, number 1, pages 1-6 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1
analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement thalluru suneetha1, s. naga kishore bhavanam1 1 department of ece, acharya nagarjuna university, guntur, andhra pradesh-522510, india section: research paper keywords: measurement; defected ground; sensing; wlan; wimax; planar monopole antenna citation: thalluru suneetha, s. naga kishore bhavanam, analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement, acta imeko, vol. 11, no. 1, article 28, march 2022, identifier: imeko-acta-11 (2022)-01-28 section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india received november 20, 2021; in final form march 20, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: thalluru suneetha, e-mail: tsuneetha701@gmail.com 1. introduction the contemporary telecommunications sector has evolved in response to the increasing demands of customers for smart gadgets. in contrast to their predecessors, these devices are capable of handling many applications at the same time.
global positioning system (gps), wireless fidelity (wi-fi), global system for mobile communication (gsm), bluetooth, and worldwide interoperability for microwave access (wimax) all have their own operating frequency bands. the addition of a separate antenna for each application increases the device's size and makes it more uncomfortable to use. due to this, the demand for antennas operating at multiple frequencies is increasing. high data rates in wireless communication systems are quite common these days. as a result, contemporary gadgets must have a single antenna that can operate in many frequency bands. much research on multiband antennas has been carried out in recent years, and several methods have been devised. the authors of [1] reported a coplanar waveguide (cpw)-fed rectangle antenna with modified ground and open complementary split ring resonator (ocsrr) loading that could operate in three bands. in [2] a u-shaped antenna with a partial ground plane resonating at three different frequencies is presented. in [3] the authors designed 'nine' and 'epsilon' shaped antennas along with switches to realize triple band operation for use in wi-fi, wimax and wlan applications. in [4] a five band antenna with a kite shaped radiating patch with 'c' and 'g' shaped slots in the ground plane was realized. the authors in [5] used a metamaterial technique to design a dual band antenna. a cpw-fed metamaterial based multiband antenna was reported in [6]. in [7] a small cpw-fed monopole antenna for dual band operation was presented. the authors used the state-of-the-art substrate integrated waveguide technique and slots to get dual frequency response and wide bandwidth in [8].
abstract: in this paper, a novel quad band antenna operating at four different frequency bands is designed and simulated using computer simulation technology (cst) microwave studio software. to achieve better performance of the antenna, various parameters were optimized using parametric analysis. during this analysis, various antenna parameters needed to be measured. the proposed antenna uses asymmetrical 'u' and 't' shaped radiating elements printed on an fr-4 substrate with dimensions of 1.6 × 34 × 20 mm3. the measurement of various antenna parameters like reflection coefficient, return loss and radiation intensity are key tasks to be performed in the antenna measurement laboratory before going to the real time application. a staircase defected ground structure with a rectangular center slot is used to attain a better bandwidth. the antenna resonates at four different frequencies, 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz, with operating bandwidths of 980 mhz, 2.05 ghz, 3.84 ghz, and 3.82 ghz, respectively. the s11 value at these resonant frequencies is measured as -23.6 db, -29.4 db, -34.2 db and -49.05 db, respectively. the voltage standing wave ratio of the proposed antenna at the four resonant frequencies is equal to one. the gain of the antenna is consistent throughout the four pass bands. the antenna is suitable for bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz).
as reported in [9], an increase in the gain parameter is achieved by using antenna arrays. several techniques, such as frequency-selective surfaces [10]-[12] and electromagnetic band-gap structures [13]-[15], have been studied by several researchers to create multiband antennas. to improve the polarization performance of the link budget, circularly polarized antennas are frequently employed in wlan and satellite applications as reported in [16]-[18]. in every wireless communication system, the antenna is extremely crucial. a well-designed antenna reduces receiver complexity and improves receiver performance.
the antenna's size, shape, and design will be controlled by the antenna's application [19], [20] and operating frequency. a simple rectangular patch was used in this project, and a portion of it was removed to create a symmetrical u-shaped structure. a portion of the left and right arms is cut from that structure, which leads to an asymmetrical u shape, and to that shape one vertical strip combined with a horizontal strip is added at the centre to form a t-shaped structure. to achieve better bandwidth, a staircase defected ground structure along with a centre rectangular slot was used. the proposed antenna resonates at multiple frequencies owing to its structural modifications. this antenna is applicable to a wide range of modern portable wireless applications: bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz). to achieve multiband operation, other designs used slots and modified the shapes of radiating elements, which adds complexity. in this design a staircase defected ground structure was employed to get wider bandwidth at the resonating frequencies. a defected ground structure (dgs), unlike a conventional antenna, produces discontinuities on the signal plane, which disrupt the shielded current distribution on the signal plane. as a result, the apparent permittivity of the substrate fluctuates as a function of frequency and plays a vital role in the antenna's performance. various parameters such as radiation pattern, s11, vswr and gain are measured and analysed for the final design of the antenna. furthermore, achieving multiband operation with little structural complexity is the most outstanding feature of this design. the general procedure for designing any antenna is discussed in section 2. the evaluation steps of the proposed antenna design are discussed in section 3. in section 4 the parametric analysis of the proposed antenna through measured performance figures is discussed.
section 5 deals with results and discussion. in section 6 a literature comparison of the proposed antenna with earlier reported structures is given. 2. antenna design procedure any antenna design technique begins with a study of various antennas for a certain application. then, potential design approaches must be studied. later, patch dimensions such as width and length are determined to meet the design criteria. geometrical parameters and material qualities must then be accurately specified in the following stage. the simulation process is done by selecting one simulator among those available. using the simulator's parametric analysis feature, simulation work is carried out until the best possible result is obtained. when the required behaviour is achieved, the simulation is terminated. the antenna fabrication operation is then initiated, and the desired antenna prototype is created. to validate the design technique, the fabricated antenna behaviour is evaluated and compared to that of the simulated one. if the required behaviour is not obtained, the geometric parameters of the antenna as well as the material qualities must be modified. simulation is again carried out with the modified parameters. this is continued until the desired behaviour of the proposed antenna is obtained. figure 1 clearly displays the general antenna design procedure. 3. evaluation steps of the proposed antenna design the planned antenna has been designed in three phases, as illustrated in figure 2. figure 2 i) shows a typical rectangular patch antenna with a simple microstrip line in step 1. in step 2 a portion of the patch is removed to get a symmetrical u-shaped antenna, as depicted in figure 2 ii), which leads to a multiband response. in step 3, as shown in figure 2 iii), portions of the left and right arms are cut, leading to an asymmetrical u-shaped structure, and centre horizontal and vertical arms are added, leading to a t-shaped structure. this is the proposed design.
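the width and length mentioned in the procedure above are conventionally estimated with the standard transmission-line model of a rectangular patch. a minimal sketch for a starting-point patch on this design's fr-4 substrate (εr = 4.4, h = 1.6 mm) at the lowest resonance of 3 ghz; note this is only a textbook initial estimate, and the final antenna departs considerably from it:

```python
import math

C = 3.0e8  # speed of light, m/s

def patch_dimensions(f0_hz: float, eps_r: float, h_m: float):
    """transmission-line-model estimate of rectangular patch width and length."""
    w = C / (2 * f0_hz) * math.sqrt(2 / (eps_r + 1))  # patch width
    # effective permittivity accounting for fringing fields
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    # fringing-field length extension (hammerstad formula)
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w / h_m + 0.8))
    l = C / (2 * f0_hz * math.sqrt(eps_eff)) - 2 * dl  # physical patch length
    return w, l

w, l = patch_dimensions(3.0e9, 4.4, 1.6e-3)
print(round(w * 1e3, 1), round(l * 1e3, 1))  # prints 30.4 23.4 (mm)
```
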
this proposed antenna refines impedance matching and achieves better performance in terms of wider bandwidth and vswr. gain is also considerable throughout the pass bands.
figure 1. general antenna design procedure.
figure 2. steps in the design of the proposed multiband antenna: i) antenna 1, ii) antenna 2, iii) antenna 3.
figure 3 depicts the design of the proposed multiband planar antenna with asymmetrical u and t shaped patch structure. from the simple rectangular patch, a portion is removed to realize a symmetrical u-shaped patch structure. portions of the left and right arms were cut and centre horizontal and vertical arms were added, leading to a t-shaped patch structure at the centre. this structure employs defected ground technology. a defected ground plane with a stepped staircase along with a centre rectangular slot has been added beneath the substrate to extend the bandwidth. the substrate dimensions are 20 × 34 × 1.6 mm3. the chosen material is fr-4, which has good mechanical properties, with a thickness of 1.6 mm, a dielectric permittivity of 4.4, and a loss tangent of 0.02. the antenna was fed by a 50 ω microstrip line. simulation of the proposed antenna has been carried out using cst microwave studio. the antenna was optimized using cst microwave studio's parametric analysis tool. the antenna is capable of resonating at four distinct frequencies. antenna 1 only resonates at two frequencies, the first at 3 ghz and the second at 4.8 ghz. antenna 2 resonates at four separate frequencies, although the bandwidth in the pass bands and the impedance matching were not ideal. antenna 3 resonates at four separate frequencies, with a wide bandwidth and good impedance matching, as well as significant gain at all four resonant frequencies. antenna 3 is the proposed construction, which resonates at four distinct frequencies. the simulated s11 of the different antennas depicted in figure 2 is shown in figure 4. 4.
parametric study of the proposed antenna to obtain the optimal design, parametric analysis is the most effective option available when using simulators. a study of the effect of changing various parameters, such as the feed length l1, the right arm length l2 and the length of the t strip l3, has been carried out to optimize the proposed design. effect of varying l1: figure 5 clearly displays the effect of changing the value of the feed length l1. the feed length l1 was tested for three different lengths. the findings show that as the length of the feed increases, the antenna only resonates at two frequencies, with l1 = 10 mm yielding the best results with four resonant frequencies. effect of varying l2: to investigate the impact of changing the right arm length l2, all other previously optimized parameters were held constant at their optimum values, while l2 was changed from 9 mm to 12 mm. the second, third, and fourth resonances do not vary significantly when the value is increased, but the impedance matching at the first resonant frequency decreases, and l2 = 9 mm provides the best performance in terms of improved bandwidth and impedance matching. figure 6 displays the impact of changing the value of the right arm length l2.
figure 3. designed antenna architecture: a) front view, b) back view. ws = 20 mm, ls = 34 mm, l1 = 10 mm, l2 = 9 mm, l3 = 14 mm, l4 = 2 mm, w1 = 2.2 mm, w3 = 4.4 mm, w4 = 11 mm, wg = 20 mm, w = 2 mm, l = 2 mm, ln = 8 mm, wn = 4 mm.
figure 4. s11 of the different antennas depicted in figure 2.
figure 5. optimized s11 variation for various values of l1.
figure 6. optimized s11 variation for various values of l2.
effect of varying l3: to explore the influence of the length of the t strip l3, it was varied from 13 mm to 15 mm while all other parameters were kept at their previously optimized levels. for different values of l3, there was not much variation in the first three resonances.
for l3 = 14 mm, a wider bandwidth and improved impedance matching were achieved, as portrayed in figure 7. as the surface current distribution helps in understanding the behaviour of the antenna, figure 8 portrays the surface current distribution of the proposed antenna at the four resonant frequencies. different portions of the antenna are responsible for radiation at the four resonant frequencies. radiation at 3 ghz is directed by the right arm of the u structure as well as the right portion of the centre t strip. at 4.8 ghz, the lower part of the right arm of the u structure is responsible for radiation. the lower part of the u shape is responsible for the 9 ghz radiation. at 13.2 ghz, both the centre arm and the upper section of the u shape are responsible for radiation. 5. results and discussion the designed antenna's simulated s11 and gain variations with respect to frequency can be seen in figure 9. the antenna is simulated with cst microwave studio, which uses the finite integration technique. the four resonant frequencies occur at 3 ghz, 4.8 ghz, 9 ghz, and 13.2 ghz. the return loss at these frequencies is -23.6 db, -29.4 db, -34.2 db and -49.05 db, respectively. these values clearly indicate good impedance matching at all four resonant frequencies. the pass bands around these resonances are 2.484 ghz to 3.389 ghz, 3.923 ghz to 5.974 ghz, 7.232 ghz to 10.921 ghz, and 12.551 ghz to 16.363 ghz, with bandwidths of 0.905 ghz, 2.051 ghz, 3.689 ghz and 3.812 ghz, respectively. wide bandwidths at the four resonant frequencies are thus obtained. considerable gain is observed at the four resonant frequencies. achieving multiband operation with little structural complexity is the most outstanding feature of this design. the voltage standing wave ratio (vswr) measures the mismatch between an antenna and the feed line that connects to it. vswr values range from 1 to infinity. a vswr of less than 2 is deemed adequate for the majority of antenna applications.
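the vswr follows directly from the reflection coefficient magnitude, |γ| = 10^(s11/20), via vswr = (1 + |γ|)/(1 − |γ|). a minimal sketch of the conversion applied to the s11 dips reported above:

```python
def vswr_from_s11_db(s11_db: float) -> float:
    """convert a (negative) s11 value in db to vswr."""
    gamma = 10 ** (s11_db / 20.0)        # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

# the four resonances of the proposed antenna: deep s11 dips map to
# vswr values very close to 1, matching the reported match quality
for s11 in (-23.6, -29.4, -34.2, -49.05):
    print(round(vswr_from_s11_db(s11), 3))

# note: an s11 of about -9.5 db corresponds to the common vswr < 2 limit
```
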
in the proposed design a vswr of 1 indicates a good match. the proposed quad band antenna's voltage standing wave ratio (vswr) is portrayed in figure 10. from the figure it is evident that at all four resonant frequencies a good vswr, almost equal to one, is identified. the proposed quad band antenna's far field patterns at 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz are clearly displayed in figure 11.
figure 7. optimized s11 variation for various values of l3.
figure 8. surface current distribution at resonant frequencies: a) 3 ghz, b) 4.8 ghz, c) 9.0 ghz and d) 13.2 ghz.
figure 9. the designed antenna's simulated s11 and gain.
figure 10. the designed antenna's simulated vswr.
6. literature comparison the comparison between the proposed antenna and previously published designs for multiband operation is summarized in table 1. the proposed design is compared in terms of size, frequencies and radiating elements. compared to the earlier reported designs in [1], [2], [3], [4], [5] and [6], the proposed antenna is better in terms of size; the structural complexity of the proposed antenna is less than that of most of the structures. the gain is consistent in the operating bands and the operating bandwidth is also larger than that of most of the structures. the vswr of the antenna at the resonating frequencies is almost equal to one, which is promising. in this design, multiband operation is achieved without the need for complex structures, by simply removing some portions from the patch and additionally adding a centre t strip. 7. conclusions the design of a novel quad band antenna with an asymmetrical u and t shaped patch structure for portable wireless applications was presented in this study. this design differs from others in its simple design steps, leading to a less complex structure.
the proposed antenna resonates at 3 ghz, 4.8 ghz, 9 ghz, and 13.2 ghz with bandwidths of 980 mhz, 2.05 ghz, 3.84 ghz, and 3.82 ghz, respectively. over the working frequency ranges, the antenna has reasonable gains. the bandwidth of the antenna over the operating frequencies is high. the gain, vswr, and reflection coefficient are taken into consideration for the design and analysis operations in the frequency range of 1 ghz to 20 ghz. in comparison to most other designs, this structure is simple and compact, producing quad bands for use in portable electronic devices. the proposed patch antenna can be made utilizing printed circuit board technology, which is easy and affordable. this antenna is applicable to a wide range of modern portable wireless applications: bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz). references [1] r. pandeeswari, s. raghavan, a cpw-fed triple band ocsrr embedded monopole antenna with modified ground for wlan and wimax applications, microwave and optical technology letters, vol. 57, 2015, pp. 2413-2418. doi: 10.1002/mop.29352 [2] mahesh kendre, a. b. nandgaonkar, pratima nirmal, sanjay l. nalbalwar, u shaped multiband monopole antenna for spacecraft, wlan and satellite communication application, ieee international conference on recent trends in electronics information communication technology, bangalore, india, 20-21 may 2016, pp. 1528-1532. doi: 10.1109/rteict.2016.7808088 [3] v. jyothika, m. s. p. c. shekar, s. v. krishna, m. z. u. rahman, design of 16 element rectangular patch antenna array for 5g applications, journal of critical reviews, 7(9), pp. 53-58. [4] t. ali, k. d. prasad, r. c. biradar, a miniaturized slotted multiband antenna for wireless applications, j. comput. electron., 17, 2018, pp. 1056-1070. doi: 10.1007/s10825-018-1183-z [5] k. a. rao, k. s. raj, r. k. jain, m. z. u.
rahman, implementation of adaptive beam steering for phased array antennas using enlms algorithm, journal of critical reviews, 7(9), pp. 59-63. doi: 10.31838/jcr.07.09.10 [6] n. thamil selvi, p. thiruvalar selvan, s. p. k. babu, r. pandeeswari, multiband metamaterial-inspired antenna using split ring resonator, computers & electrical engineering, vol. 84, 2020, 106613, issn 0045-7906. doi: 10.1016/j.compeleceng.2020.106613 [7] s. kesana, s. gatikanti, m. z. u. rahman, b. radhika, d. mounika, triple frequency g-shape mimo antenna for wireless applications, international journal of engineering and advanced technology, 8(5), 2019, pp. 942-947. [8] s. v. devika, k. karki, s. k. kotamraju, k. kavya, m. z. u. rahman, a new computation method for pointing accuracy of cassegrain antenna in satellite communication, journal of theoretical & applied information technology, 95(13), 2017. [9] m. l. m. lakshmi, k. rajkamal, s. v. a. v. prasad, m. z. ur rahman, amplitude only linear array synthesis with desired nulls using evolutionary computing technique, applied computational electromagnetics society journal, 31(11), 2016. [10] m. z. u. rahman, v. a. kumar, g. v. s. karthik, a low complex adaptive algorithm for antenna beam steering, international conference on signal processing, communication, computing and networking technologies, thuckalay, india, 21-22 july 2011, pp. 317-321. doi: 10.1109/icsccn.2011.6024567 [11] m. a. meriche, h. attia, a. messai, t. a. denidni, gain improvement of a wideband monopole antenna with novel artificial magnetic conductor, 17th international symposium on antenna technology and applied electromagnetics (antem), montreal, qc, canada, 10-13 july 2016, pp. 1-2. doi: 10.1109/antem.2016.7550150 [12] n. wang, q. liu, c. wu, l. talbi, q. zeng, j. xu, wideband fabry-perot resonator antenna with two complementary fss layers, ieee transactions on antennas and propagation, vol. 62, no. 5, 2014, pp. 2463-2471. doi: 10.1109/tap.2014.2308533
figure 11. patterns of radiation at: a) 3 ghz, b) 4.8 ghz, c) 9 ghz and d) 13.2 ghz.
table 1. comparison of proposed antenna with earlier reported structures.
s.no | year | dimensions in mm2 | frequency in ghz | radiating element
1 | 2015 | 40 × 30 | 2.4, 3.5, 5.8 | pentagonal radiating patch with two slots
2 | 2016 | 43 × 20 | 2.8, 5.8, 10.8 | u shape monopole antenna
3 | 2017 | 35 × 53 | 2.4, 3.5, 5.5 | epsilon and nine shaped antennas
4 | 2018 | 23 × 23 | 3.6, 5.8, 6.3, 8.3, 9.5 | kite-shaped, c- and modified g-shaped slots
5 | 2019 | 35 × 25 | 5.7, 10.3 | metamaterial cell
6 | 2020 | 40 × 40 | 2.9, 2.10, 3.5, 4.5, 5.7, 6.5 | penta-ring srr
7 | proposed | 34 × 20 | 3, 4.8, 9, 13.2 | asymmetrical u and t shaped patch structures
[13] y. ge, k. p. esselle, t. s. bird, a method to design dual-band, high-directivity ebg resonator antennas using single-resonant, single-layer partially reflective surface, progress in electromagnetics research c, vol. 13, 2010, pp. 245-257. doi: 10.2528/pierc10020901 [14] j. tak, y. hong, j. choi, textile antenna with ebg structure for body surface wave enhancement, electronics letters, vol. 51, no. 15, 2015, pp. 1131-1132. doi: 10.1049/el.2015.1022 [15] r. m. hashmi, k. p. esselle, enhancing the performance of ebg resonator antennas by individually truncating the superstructure layers, iet microwaves antennas & propagation, vol. 10, no. 10, 2016, pp. 1048-1055. doi: 10.1049/iet-map.2015.0674 [16] j. lacik, circularly polarized siw square ring-slot antenna for x-band applications, microwave & optical technology letters, vol. 54, no. 11, 2012, pp. 2590-2594. doi: 10.1002/mop.27113 [17] k. saraswat, t.
kumar, a. r. harish, a corrugated g-shaped grounded ring slot antenna for wideband circular polarization, international journal of microwave & wireless technologies, 2020, pp. 1-6. doi: 10.1017/s1759078719001624 [18] m. j. hua, p. wang, y. zheng, s. l. yuan, compact tri-band cpw-fed antenna for wlan/wimax applications, electronics letters, vol. 49, no. 18, 2013, pp. 1118-1119. doi: 10.1049/el.2013.1669 [19] armando coccia, federica amitrano, leandro donisi, giuseppe cesarelli, gaetano pagano, mario cesarelli, giovanni d'addio, design and validation of an e-textile-based wearable system for remote health monitoring, acta imeko, vol. 10, no. 2, 2021, pp. 220-229. doi: 10.21014/acta_imeko.v10i2.912 [20] imran ahmed, eulalia balestrieri, francesco lamonaca, iomt-based biomedical measurement systems for healthcare monitoring: a review, acta imeko, vol. 10, no. 2, 2021, pp. 1-11. doi: 10.21014/acta_imeko.v10i2.1080
metrological characterization of instruments for body impedance analysis
acta imeko issn: 2221-870x september 2022, volume 11, number 3, pages 1-7 acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1
metrological characterization of instruments for body impedance analysis valerio marcotuli1, matteo zago2, alex p.
Moorhead1, Marco Vespasiani3, Giacomo Vespasiani3, Marco Tarabini1

1 Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa 1, 20156 Milan, Italy
2 Faculty of Exercise and Sports Science, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
3 Technical Department, Metadieta S.r.l., Via Antonio Bosio 2, 00161 Rome, Italy

Section: RESEARCH PAPER
Keywords: bioimpedance; body composition; measurement uncertainty; calibration; multivariate linear regression
Citation: Valerio Marcotuli, Matteo Zago, Alex P. Moorhead, Marco Vespasiani, Giacomo Vespasiani, Marco Tarabini, Metrological characterization of instruments for body impedance analysis, Acta IMEKO, vol. 11, no. 3, article 14, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-14
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received October 7, 2021; in final form August 31, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Valerio Marcotuli, e-mail: valerio.marcotuli@polimi.it

1. Introduction

Body composition describes the main components of the human body in terms of free fat mass (FFM), fat mass (FM) or their ratio FFM/FM. The analysis of body composition is used in fields such as biology and medicine to estimate the nutritional status, muscular volume variations and potential pathological conditions. For example, physiological aging leads to a reduction of FFM and muscular mass, while fat increases and is redistributed over the body areas [1]. Different levels of body composition (atomic, molecular, cellular, tissue and whole-body) can be analyzed depending on the measurement method [2].
Body mass index (BMI) is a generic indicator of body composition, but it tends to give inaccurate information when subjects are highly overweight or obese; in fact, malnutrition may exist yet be masked by the high amount of fat mass [3].

A solution for measuring body composition is dual-energy X-ray absorptiometry (DXA). This is an imaging technique, similar to magnetic resonance imaging (MRI), which scans the patient with two beams of X-rays of different energy (usually 40 and 70 kV). In recent years, DXA has become recognized as the "gold standard" for measuring body composition [4]. It evaluates both the global and the regional distribution of the three main body components: bone mineral content (BMC), FM and FFM. The accuracy of DXA makes it very effective in studying patient composition within specific body regions and evaluating their effect on patient health [5]. Unfortunately, a DXA machine is expensive ($20,000+), making it typically available only at large infrastructures such as clinics and hospitals.

An alternative technique is bioelectrical impedance analysis (BIA): this employs a low-intensity alternating current (AC) at a frequency of 50 kHz transmitted across the body to estimate its composition based on the hydration level of tissues [6]. BIA allows for quick examinations, and it is much less expensive than DXA. Additionally, BIA is less dangerous than DXA as it does not use X-rays, meaning it can also be repeated several times with no contraindications. Nevertheless, BIA can be highly affected by many factors such as altered hydration of the subject, measurement conditions, ethnic background, and health conditions [7].

BIA devices measure the magnitude of the impedance opposed to the current, which varies with respect to the body anatomy. Specifically, the physical principle assumes that the body is made up of tissues with different composition. Some tissues are good conductors due to their water content, while others are insulators. The water content is inversely related to the resistance that opposes the current flow. On the other hand, cellular membranes, able to accumulate electrical charges, can be considered capacitors. The presence of capacitors is directly proportional to reactance and introduces an observable delay on the current flow. The combination of the resistance and reactance defines the impedance. Its evaluation indicates the body hydration and provides an estimate of the nutritional state equivalent to the cellular amount. Since water is the main component of the cells and is almost absent in fat, it is possible to deduce the amount of FFM from the water content. Consequently, FM is evaluated by simply subtracting the FFM from the total weight [8].

Abstract
Body impedance analysis (BIA) is used to evaluate the human body composition by measuring the resistance and reactance of human tissues with a high-frequency, low-intensity electric current. Nonetheless, the estimation of the body composition is influenced by many factors: body status, environmental conditions, instrumentation, and measurement procedure. This work studies the effect of the connection cables, conductive electrodes, adhesive gel, and BIA device characteristics on the measurement uncertainty. Tests were initially performed on electric circuits with passive elements and on a jelly phantom simulating the body characteristics. Results showed that the cables mainly contribute to increasing the error on the resistance measurement, while the electrodes and the adhesive introduce a negligible disturbance on the measurement chain. This paper also proposes a calibration procedure based on a multivariate linear regression to compensate for the systematic error effect of BIA devices.

1.1.
Fricke's circuit: a human body electrical model

The human body can be modeled as a set of resistances and capacitances connected in parallel or in series. The most common body model used in the field of BIA is Fricke's circuit, whose two parallel branches represent the intracellular and extracellular components. In this model, a high-frequency current passes through the intracellular water, while at low frequencies it passes through the extracellular space. This is because at zero or low frequency the current does not penetrate the cell membrane (which acts as an insulator), while it passes through the extracellular medium made of water and sodium [9]. The intracellular behavior, in turn, can be modeled as a resistance Ri (due to the water and potassium content) in series with the cell membrane capacitance, whose reactance is Xc, while the extracellular behavior is described by a single resistance Re, as shown in Figure 1. The total body resistance R measured by a BIA instrument is in turn a combination of the two resistances Ri and Re, and represents the real part of the complex impedance [10]. Generally, the phasor and other indices such as the ratio Ri/Re can be good estimators of disease presence, nutritional status, and hydration condition [11].

1.2. The calibration plots

The Cole-Cole plot is commonly used to visualize the electrical response of body measurements, with the resistance R on the x-axis and the negative reactance Xc on the y-axis. At extremely high (ideally infinite) frequency, only the intracellular branch conducts and the resistance reaches its minimum value, determined by Ri. At low or zero frequency, the current passes only in the extracellular space, since the cell membranes act as insulators; consequently, Re is the maximum value of resistance. The relationship between the reactance Xc and the total resistance R of a body can be expressed by a phase angle φ [12].
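This frequency behaviour can be sketched numerically. The snippet below computes the complex impedance of Fricke's circuit (Re in parallel with the series of Ri and the membrane capacitance C); the component values are illustrative assumptions, not taken from the paper. Note that in this parallel-branch model the high-frequency resistance limit is the parallel combination Re·Ri/(Re + Ri).

```python
import numpy as np

def fricke_impedance(Re, Ri, C, f):
    """Complex impedance of Fricke's circuit: the extracellular resistance Re
    in parallel with the series of the intracellular resistance Ri and the
    membrane capacitance C, at frequency f (Hz)."""
    Zi = Ri + 1.0 / (2j * np.pi * f * C)  # intracellular branch (Ri in series with C)
    return Re * Zi / (Re + Zi)            # parallel combination with Re

# Illustrative (assumed) values: Re = 600 ohm, Ri = 300 ohm, C = 50 nF
Z = fricke_impedance(600.0, 300.0, 50e-9, 50e3)
R, Xc = Z.real, -Z.imag  # resistance and (positive) reactance seen by a BIA device

# Limits of the Cole-Cole arc:
#   f -> 0:   Z -> Re                 (membranes block the current)
#   f -> inf: Z -> Re*Ri/(Re + Ri)    (membrane reactance vanishes)
```

Sweeping f between these two limits traces the arc of Figure 2, with the measured (R, Xc) point at 50 kHz lying on it.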
Therefore, the resulting phasor, ranging between Ri and Re, describes an arc segment, as shown in Figure 2, and all the measured values would lie below it. This plot can be standardized with respect to height, gender, and ethnicity to form a calibration model divided into adjacent areas contained in tolerance ellipses at 50 %, 75 %, and 95 % belonging to a certain population group (as seen in Figure 3, which shows an example of a calibration model standardized by the height, h) [13]. The plot is used as a calibration map by companies for converting a measurement performed by means of a device into body status information [14]. If the BIA device displays a low measurement accuracy, the measurement results could be misleading.

1.3. Measurement uncertainty

The biasing factors on bioimpedance estimation can be attributed to the subject (the measurand is not constant), to the measurement protocols, and to the instrumentation [15]. In this study we investigate the possible sources of error of the BIA instrumentation, consisting of a control unit, cables, and electrodes. The control unit is composed of electronic circuitry placed in a case with one or more ports for connecting the cables. Even if protected, the circuitry is subject to thermal, electrical, and magnetic disturbances [16], [17]. The identification of these disturbances is essential for the performance of the devices and for competitiveness in the market. For this reason, the control unit and the accessories should be metrologically characterized through a specific test for each possible source of error [18], [19].

Figure 1. Fricke's circuit model for body composition, consisting of two branches related to the intracellular and extracellular behaviors.
Figure 2. Example of Cole-Cole plot of the Fricke's circuit.
Figure 3. Example of a calibration model standardized by the height (h).
Moreover, if the disturbances are properly identified, a corrective calibration strategy can be applied [20], [21].

2. Materials and methods

2.1. Instrumental equipment

The instrumentation selected for this study consists of a BIA device (MetaDieta BIA), three sets of cables, three sets of electrodes from different manufacturers, a series of resistors and capacitors, and a breadboard. MetaDieta BIA (Figure 4) is an electromedical device for the evaluation of body composition manufactured by the company Meteda S.r.l. (Rome, Italy). The bioimpedance is measured by placing four electrodes on the hands and feet, with a single cable connected to the main unit. The impedance value is computed from the response to a sinusoidal current of 350 μA with a frequency of 50 kHz (a de facto standard for most single-frequency BIA devices). The device has a size of 43 × 43 × 12 mm3 and a mass of 50 g; a lithium battery can power the device for up to 14 hours in working conditions. It does not have a screen on the control unit, but it can be managed by an application running on phones, tablets, and computers over a Bluetooth connection. The device is designed to be used in clinics by physicians, nutritional biologists and qualified sanitary personnel, but also by consumers in a home environment. The application provides the user with information about the preparation and the execution of the test measurement, then it sends and stores the data on the cloud for later analyses. Data measurements are processed by the cloud application, and the results can be either quantitative for clinical personnel or qualitative, with information displayed in graphs along with the tendencies, for individual users.
The additional equipment for the tests comprises three cables of the same model, connecting the main unit to the four electrode clamps, and a series of electrodes from three producers: Biatrodes® by Akern Srl (Firenze, Italy), BIA electrodes by RJL Systems Inc. (Clinton Twp, MI, USA), and Regal™ Resting ECG by Vermed® Inc. (Bells Falls, VT, USA).

2.2. Proposed method

The first operation to perform with a measurement device is the metrological characterization in terms of repeatability and reproducibility, after the identification of the possible sources of error [22]. Generally, this kind of device makes use of empirical equations whose parameters are established by means of a calibration performed in the laboratory [23]. Since the calibration curves can assume a large set of values, the process can be simplified by studying a group of key values. This research proposes a data selection based on six values of resistance between 200 Ω and 900 Ω with a step of 140 Ω, combined with six values of reactance between 15 Ω and 115 Ω with a step of 20 Ω. These values are represented in the grid in Figure 5. To assemble a physical circuit starting from the reactance values, suitable capacitors can be identified by converting Xc into a capacitance C with the formula:

C = 1 / (2 π f Xc) , (1)

where f is the frequency of the AC generated by the MetaDieta BIA device, i.e. 50 kHz. The capacitance values obtained after the conversion are therefore: 212 nF, 91 nF, 58 nF, 42 nF, 34 nF, and 28 nF. By combining the values of resistance and capacitance, we defined a grid of 36 combinations, and we evaluated the measurement repeatability and reproducibility in each condition. The procedure also allows identifying compensation functions that reduce the systematic errors affecting the reading [24].

2.3.
Experimental design

The MetaDieta BIA is turned on when the cables are inserted in the mini-USB port, and the connection is initiated by the application on a master device. Measurements are typically performed by placing four electrodes on the hands and feet. The electrodes are silver-plated for a low resistance and attached to the skin using an adhesive gel. However, for consistency, all the experiments were performed on laboratory instrumentation with electric circuits representing the body composition through Fricke's model, so the electrodes were included only in specific tests. The tests were performed in the Metrospace lab of Politecnico di Milano and can be divided into:
1. preliminary tests for the metrological characterization of the MetaDieta BIA device, cables, electrodes, and adhesive gel;
2. tests for systematic error compensation based on the calibration grid in Figure 5.
A high-precision LCR meter, model LCR-819 GW Instek (Good Will Instrument Co., Ltd, Taiwan), was used as a reference system for measuring the impedance of the test components, while a multimeter, model Agilent 34401A, was used for resistance-only measurements of the electrical components.

2.4. Preliminary tests

First, the measurement repeatability of the control unit was tested by performing 30 measurements of the resistance R and reactance Xc on each of five different electric circuits, connecting the cable clamps directly to the circuit with no other modifications between each test and the next. The three different cables of the same model were tested with 30 measurements each with the LCR meter, on the same electric circuit, directly connecting the clamps of the cables.

Figure 4. Picture of the MetaDieta BIA control unit.
Figure 5. Calibration grid with the 36 combinations of the selected key values.
Keeping the same configuration, the effect of the electrodes was studied by applying these components, without the adhesive material, between the clamps and the electric circuit with passive elements. A total of 30 different sets of electrodes from the three manufacturers were tested, with 4 electrodes per set. At the same time, the resistance R of the cables and the electrodes was measured 30 times for each component with the multimeter. The variability of the electrical resistance of the electrodes was estimated by placing the multimeter terminals in two positions: on the tab and on an opposite area far from it (circled in Figure 6).

The effect of the adhesive gel, which determines the interaction between the BIA device and a biological tissue, was simulated by means of a jelly phantom (Figure 7) with nominal resistance Rph = (571.2 ± 1.2) Ω (C.I. = 68 %) and nominal reactance Xc,ph = (75.1 ± 1.9) Ω (C.I. = 68 %) [25]. For this test, 30 measurements for each manufacturer's electrode were performed to calculate the mean values of the resistance R̄ and reactance X̄c and the corresponding standard deviations. The four electrodes were positioned at the edges of the container, one couple on the left side and the other couple on the right side, at a distance of about 30 cm. The distance between the two electrodes of each couple was about 10 cm, as recommended by the manufacturer. This configuration, with the dominant distance (30 cm > 10 cm) between the two couples of electrodes, aimed to replicate the measurement behaviour on a human body, avoiding uncontrolled dispersion of the electric charge.

2.5. Tests for systematic error compensation

A set of 36 circuits with passive elements was built by combining selected components, whose resistances and capacitances are collected in Table 1, to match the key values of the calibration grid in Figure 5. Table 1 also includes the reactance values obtained by inverting Eq. (1).
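The reactance-to-capacitance conversion of Eq. (1) is easy to verify numerically; the short script below reproduces the capacitance values listed in Section 2.2 from the reactance grid (all values are taken from the text):

```python
import math

F = 50e3  # excitation frequency of the BIA device, Hz

def reactance_to_capacitance(xc, f=F):
    # Eq. (1): C = 1 / (2 * pi * f * Xc)
    return 1.0 / (2.0 * math.pi * f * xc)

xc_grid = [15, 35, 55, 75, 95, 115]  # reactance key values, ohm
caps_nF = [round(reactance_to_capacitance(xc) * 1e9) for xc in xc_grid]
print(caps_nF)  # [212, 91, 58, 42, 34, 28], as reported in the text
```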
The resistors have a manufacturing tolerance of 0.1 %, whereas the capacitors have a tolerance of 1 %. The circuits were mounted on a breadboard, and the values read by the MetaDieta BIA device were compared to the values read by the LCR meter as references [26]. The differences between the measured and the reference values allowed the RMSE to be calculated and checked for the presence of defined patterns related to systematic disturbances. Part of these disturbances was removed by adding two correction terms, Ra and Xc,a, obtained by a least-squares minimization of a multivariate linear model, to the generic measurements R and Xc, in the form:

Radj = R + Ra (2)

and

Xc,adj = Xc + Xc,a , (3)

where Radj and Xc,adj are the compensated results.

Figure 6. Area of the electrode for measuring the resistance.
Figure 7. Preliminary test of the electrodes on a jelly phantom.

Table 1. Resistances and capacitances of the selected components and the reactance values after conversion for the calibration map experiments.
Component | 1 | 2 | 3 | 4 | 5 | 6
R in Ω | 200 | 330 | 470 | 615 | 780 | 910
C in nF | 225 | 92 | 51 | 36 | 32 | 27
Xc in Ω | 14 | 35 | 56 | 89 | 99 | 120

3. Results

3.1. Preliminary tests

The results of the repeatability test of the control unit on the 5 electric circuits, with 30 measurements performed on each circuit, are shown in Table 2: R and Xc are the key values chosen for the experiments, Rref and Xc,ref are the reference values read by the LCR meter, and R̄ and X̄c are the mean values read by the MetaDieta BIA device, with σR and σXc the corresponding standard deviations. The three tested cables showed a standard deviation of the resistance of σR = 1.8 Ω, while the standard deviation of the reactance was σXc = 0.1 Ω. From these values it was possible to evaluate the uncertainties uR = σR/√30 = 0.33 Ω and uXc = σXc/√30 = 0.018 Ω (C.I. = 68 %).
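Throughout the results, the standard uncertainty of a mean over 30 repeated readings is obtained as u = σ/√30. As a quick check, the cable-test figures can be reproduced as follows:

```python
import math

def standard_uncertainty(sigma, n):
    # Standard uncertainty of the mean of n repeated readings (Type A evaluation)
    return sigma / math.sqrt(n)

# Cable-test values from the text: sigma_R = 1.8 ohm, sigma_Xc = 0.1 ohm, n = 30
u_R = standard_uncertainty(1.8, 30)    # ~0.33 ohm
u_Xc = standard_uncertainty(0.1, 30)   # ~0.018 ohm
```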
The electrodes without the adhesive gel were tested with the MetaDieta BIA device on a circuit with nominal resistance R = (617.812 ± 0.011) Ω (C.I. = 68 %) and equivalent reactance Xc = (90.137 ± 0.019) Ω (C.I. = 68 %). The means and the standard deviations of the resistance and the reactance are reported in Table 3. The maximum standard deviations were reported by the RJL Systems electrodes, equal to σR = 0.5 Ω and σXc = 0.1 Ω, with the corresponding uncertainties equal to uR = σR/√30 = 0.091 Ω and uXc = σXc/√30 = 0.018 Ω (C.I. = 68 %). The resistance-only measurements of the same electrodes performed with the multimeter are reported in Table 4. In this case both RJL Systems and Vermed® electrodes reported a maximum standard deviation of σR = 0.4 Ω and an uncertainty of uR = σR/√30 = 0.073 Ω (C.I. = 68 %). The results of the last preliminary experiment, on the jelly phantom, are reported in Table 5. All three electrode samples showed a standard deviation of σR = 0.1 Ω with an uncertainty of uR = σR/√30 = 0.018 Ω (C.I. = 68 %), whereas the Akern and Vermed® electrodes reported a standard deviation different from zero and equal to σXc = 0.1 Ω, corresponding to an uncertainty of uXc = σXc/√30 = 0.018 Ω (C.I. = 68 %).

3.2. Systematic error compensation

The measurements on the 36 electric combinations with the MetaDieta BIA device and the reference values are depicted in Figure 8. From these data, the RMSE over the 36 configurations resulted in RRMSE = 4.17 Ω and Xc,RMSE = 7.28 Ω. The least-squares minimization of the multivariate linear regression returned the following correction terms:

Ra = −1.592 + 0.994 · R + 0.002 · Xc + 2.45 · 10−5 · R · Xc (4)

and

Xc,a = −3.412 + 0.010 · R + 1.079 · Xc − 2.19 · 10−5 · R · Xc , (5)

with R and Xc the actual values read by the BIA device. Furthermore, the multivariate linear regression reported the

Table 2. Results of the repeatability test of the control unit on 5 electric circuits.
R in Ω | Xc in Ω | Rref in Ω | Xc,ref in Ω | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
200 | 15 | 200.1 | 17.9 | 202.7 | 0.0 | 18.9 | 0.1
200 | 75 | 191.4 | 92.3 | 193.5 | 0.1 | 88.7 | 0.1
340 | 115 | 330.5 | 124.4 | 333.4 | 0.1 | 116.1 | 0.0
620 | 75 | 617.8 | 90.1 | 622.1 | 0.0 | 80.2 | 0.0
900 | 95 | 910.9 | 101.1 | 916.9 | 0.0 | 98.3 | 0.0

Table 3. Results of the repeatability test of the three producers' electrodes without the adhesive gel, measured with the MetaDieta BIA device.
Manufacturer | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
Akern | 619.2 | 0.2 | 91.4 | 0.0
RJL Systems | 619.5 | 0.5 | 91.4 | 0.1
Vermed® | 619.4 | 0.1 | 91.5 | 0.0

Table 4. Results of the repeatability test of the three producers' electrodes without the adhesive gel, measured with the multimeter Agilent 34401A.
Manufacturer | R̄ in Ω | σR in Ω
Akern | 1.6 | 0.3
RJL Systems | 2.1 | 0.4
Vermed® | 2.1 | 0.4

Table 5. Results of the repeatability test of the three producers' electrodes with the adhesive gel on the jelly phantom, measured with the MetaDieta BIA device.
Manufacturer | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
Akern | 573.2 | 0.1 | 79.1 | 0.1
RJL Systems | 573.3 | 0.1 | 78.5 | 0.0
Vermed® | 573.2 | 0.1 | 78.9 | 0.1

adjusted R² values: 0.947 for the resistance and 0.696 for the reactance. Compensating the values in Figure 8 with the terms Ra and Xc,a, the RMSE values decrease to RRMSE = 1.16 Ω and Xc,RMSE = 1.28 Ω.

4. Discussion

The tests on the MetaDieta BIA device revealed that the cables, the silver-plated electrodes, and the gel have a negligible influence on the overall measurement chain: the cables showed uncertainties of uR = 3.3 · 10−1 Ω (C.I. = 68 %) and uXc = 1.8 · 10−2 Ω (C.I. = 68 %), while the maximum uncertainties introduced by the electrodes were uR = 8.6 · 10−2 Ω (C.I. = 68 %) and uXc = 1.7 · 10−2 Ω (C.I. = 68 %). The comparison between the three electrode models also showed that these elements have equivalent electric characteristics, so the device performance does not change, as proved by Sanchez et al. [27].
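The least-squares construction behind the correction terms of Eqs. (4) and (5) can be sketched as follows. The actual readings behind Figure 8 are not reproduced in the paper, so the script uses the key-value grid of Section 2.2 and a synthetic, assumed distortion in place of the BIA readings; the regressors (1, R, Xc, R·Xc) match the form of Eqs. (4)-(5).

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration grid analogous to the paper's 36 combinations:
# six resistances (200-900 ohm, step 140) x six reactances (15-115 ohm, step 20)
R_ref = np.repeat(np.arange(200.0, 901.0, 140.0), 6)
Xc_ref = np.tile(np.arange(15.0, 116.0, 20.0), 6)

# Hypothetical systematic distortion standing in for the device readings
R_meas = 1.006 * R_ref + 0.002 * Xc_ref + 1.6 + rng.normal(0, 0.3, R_ref.size)
Xc_meas = 0.93 * Xc_ref + 0.010 * R_ref + 3.4 + rng.normal(0, 0.3, Xc_ref.size)

def design(R, Xc):
    # Regressors of the multivariate linear model: 1, R, Xc, R*Xc
    return np.column_stack([np.ones_like(R), R, Xc, R * Xc])

# Least-squares coefficients mapping readings to compensated values
A = design(R_meas, Xc_meas)
coef_R, *_ = np.linalg.lstsq(A, R_ref, rcond=None)
coef_Xc, *_ = np.linalg.lstsq(A, Xc_ref, rcond=None)

R_adj = A @ coef_R
Xc_adj = A @ coef_Xc

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(rmse(R_meas, R_ref), rmse(R_adj, R_ref))  # RMSE drops after compensation
```

With real data, the fitted coefficients would play the role of the numerical constants in Eqs. (4) and (5), and the adjusted R² values would quantify how much of the deviation the linear model explains.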
Also, the tests for the gel on the jelly phantom did not report any significant influence, since the maximum uncertainties were uR = 1.7 · 10−2 Ω (C.I. = 68 %) and uXc = 1.7 · 10−2 Ω (C.I. = 68 %). This means that the adhesive gel is essential for keeping the contact between the electrodes and the skin, but it does not add any relevant disturbance to the measurement process [28].

The comparison between the reference values and the measurements with the BIA device in Figure 8 showed that the deviations of the reactance and resistance tend to increase for the combinations with higher values. Nonetheless, the trend was corrected effectively by the multivariate linear regression. In fact, the two terms Ra and Xc,a decrease the RMSE values to RRMSE = 1.16 Ω and Xc,RMSE = 1.28 Ω. Moreover, by observing the expression of Ra, it is evident that the contribution of the read reactance is negligible. Conversely, the read resistance value has a relevant influence on the compensation procedure.

5. Conclusions

BIA is an effective and valid tool to estimate body composition from a fast and safe single measurement. Nonetheless, the estimation can fail when the measurement conditions change or if there is a poor calibration of the BIA device. In this paper, we evaluated the causes of variability of bioimpedance measurements. First, the equipment was metrologically characterized, showing that it does not influence the measurements significantly, with uncertainties lower than 0.35 Ω (C.I. = 68 %) for both resistance and reactance. As concerns the validation of BIA equations, it must be carried out against gold standards, even though these exhibit limitations due to hydration conditions, age, and ethnicity. This study proposed a calibration grid made of 36 configurations of key values. The grid allowed multivariate linear models to be calculated by least-squares minimization, which can be used to calibrate the MetaDieta BIA device.
In the case study presented in this work, the bias error compensation reduced the RMSE from 4.2 Ω to 1.2 Ω for the resistance and from 7.3 Ω to 1.3 Ω for the reactance, with adjusted R² values of 0.947 and 0.696, respectively. Prospectively, the calibration maps can be extended to higher values, and the grid of key points can be further populated for more robust results.

References
[1] U. G. Kyle, L. Genton, D. O. Slosman, C. Pichard, Fat-free and fat mass percentiles in 5225 healthy subjects aged 15 to 98 years, Nutrition 17 (2001), pp. 534-541. DOI: 10.1016/s0899-9007(01)00555-x
[2] H. C. Lukaski, Methods for the assessment of human body composition: traditional and new, Am. J. Clin. Nutr. 46 (1987), pp. 537-556. DOI: 10.1093/ajcn/46.4.537
[3] A. Talluri, R. Liedtke, E. I. Mohamed, C. Maiolo, R. Martinoli, A. De Lorenzo, The application of body cell mass index for studying muscle mass changes in health and disease conditions, Acta Diabetol. 40 (2003). DOI: 10.1007/s00592-003-0088-9
[4] A. Choi, J. Y. Kim, S. Jo, J. H. Jee, S. B. Heymsfield, Y. A. Bhagat, I. Kim, J. Cho, Smartphone-based bioelectrical impedance analysis devices for daily obesity management, Sensors (Switzerland) 15 (2015), pp. 22151-22166. DOI: 10.3390/s150922151
[5] P. Pisani, A. Greco, F. Conversano, M. D. Renna, E. Casciaro, L. Quarta, D. Costanza, M. Muratore, S. Casciaro, A quantitative ultrasound approach to estimate bone fragility: a first comparison with dual X-ray absorptiometry, Meas. J. Int. Meas. Confed. 101 (2017), pp. 243-249. DOI: 10.1016/j.measurement.2016.07.033
[6] M. Dehghan, A. T. Merchant, Is bioelectrical impedance accurate for use in large epidemiological studies?, Nutr. J. 7 (2008), pp. 1-7. DOI: 10.1186/1475-2891-7-26
[7] U. G. Kyle, I. Bosaeus, A. D. De Lorenzo, P. Deurenberg, M. Elia, J. M. Gómez, B. L. Heitmann, L. Kent-Smith, J. C. Melchior, M. Pirlich, H. Scharfetter, A. M. W. J. Schols, C.
Pichard, Bioelectrical impedance analysis part II: utilization in clinical practice, Clin. Nutr. 23 (2004), pp. 1430-1453. DOI: 10.1016/j.clnu.2004.09.012
[8] J. Hlubik, P. Hlubik, L. Lhotska, Bioimpedance in medicine: measuring hydration influence, J. Phys. Conf. Ser. 224 (2010), 012135. DOI: 10.1088/1742-6596/224/1/012135
[9] F. Villa, A. Magnani, M. A. Maggioni, A. Stahn, S. Rampichini, G. Merati, P. Castiglioni, Wearable multi-frequency and multi-segment bioelectrical impedance spectroscopy for unobtrusively tracking body fluid shifts during physical activity in real-field applications: a preliminary study, Sensors (Switzerland) 16 (2016), pp. 1-15. DOI: 10.3390/s16050673
[10] I. V. Krivtsun, I. V. Pentegov, V. N. Sydorets, S. V. Rymar, A technique for experimental data processing at modeling the dispersion of the biological tissue impedance using the Fricke equivalent circuit, Electr. Eng. Electromechanics 0 (2017), pp. 27-37. DOI: 10.20998/2074-272x.2017.5.04

Figure 8. Comparison between the reference values provided by the LCR meter (blue dots) and the measurements performed with the MetaDieta BIA device (orange dots).

[11] S. Cigarrán Guldrís, Future uses of vectorial bioimpedance (BIVA) in nephrology, Nefrologia 31 (2011), pp. 635-643. DOI: 10.3265/nefrologia.pre2011.oct.11108
[12] F. Savino, F. Cresi, G. Grasso, R. Oggero, L. Silvestro, The BIAgram vector: a graphical relation between reactance and phase angle measured by bioelectrical analysis in infants, Ann. Nutr. Metab.
48 (2004), pp. 84-89. DOI: 10.1159/000077042
[13] R. González-Landaeta, O. Casas, R. Pallàs-Areny, Heart rate detection from plantar bioimpedance measurements, IEEE Trans. Biomed. Eng. 55 (2008), pp. 1163-1167. DOI: 10.1109/tbme.2007.906516
[14] F. Ibrahim, M. N. Taib, W. A. B. Wan Abas, C. C. Guan, S. Sulaiman, A novel approach to classify risk in dengue hemorrhagic fever (DHF) using bioelectrical impedance analysis (BIA), IEEE Trans. Instrum. Meas. 54 (2005), pp. 237-244. DOI: 10.1109/tim.2004.840237
[15] S. F. Khalil, M. S. Mohktar, F. Ibrahim, The theory and fundamentals of bioimpedance analysis in clinical status monitoring and diagnosis of diseases, Sensors (Switzerland) 14 (2014), pp. 10895-10928. DOI: 10.3390/s140610895
[16] A. Ferrero, Measuring electric power quality: problems and perspectives, Meas. J. Int. Meas. Confed. 41 (2006), pp. 121-129. DOI: 10.1016/j.measurement.2006.03.004
[17] G. M. D'Aucelli, N. Giaquinto, C. Guarnieri Calò Carducci, M. Spadavecchia, A. Trotta, Uncertainty evaluation of the unified method for thermo-electric module characterization, Meas. J. Int. Meas. Confed. 131 (2018), pp. 751-763. DOI: 10.1016/j.measurement.2018.08.070
[18] M. Yang, Z. Guan, J. Liu, W. Li, X. Liu, X. Ma, J. Zhang, Research of the instrument and scheme on measuring the interaction among electric energy metrology of multi-user electric energy meters, Meas. Sensors 18 (2021), 100067. DOI: 10.1016/j.measen.2021.100067
[19] E. Pittella, E. Piuzzi, E. Rizzuto, S. Pisa, Z. Del Prete, Metrological characterization of a combined bio-impedance plethysmograph and spectrometer, Meas. J. Int. Meas. Confed. 120 (2018), pp. 221-229. DOI: 10.1016/j.measurement.2018.02.032
[20] A. Ferrero, C. Muscas, On the selection of the "best" test waveform for calibrating electrical instruments under nonsinusoidal conditions, IEEE Trans. Instrum. Meas. 49 (2000), pp. 382-387. DOI: 10.1109/19.843082
[21] B. Qi, X. Zhao, C.
Li, Methods to reduce errors for DC electric field measurement in oil-pressboard insulation based on Kerr effect, IEEE Trans. Dielectr. Electr. Insul. 23 (2016), pp. 1675-1682. DOI: 10.1109/tdei.2016.005507
[22] S. Corbellini, A. Vallan, Arduino-based portable system for bioelectrical impedance measurement, IEEE MeMeA 2014 Int. Symp. Med. Meas. Appl. Proc. (2014), pp. 4-8. DOI: 10.1109/memea.2014.6860044
[23] T. Kowalski, G. P. Gibiino, J. Szewiński, P. Barmuta, P. Bartoszek, P. A. Traverso, Design, characterisation, and digital linearisation of an ADC analogue front-end for gamma spectroscopy measurements, Acta IMEKO 10 (2021) 2, pp. 70-79. DOI: 10.21014/acta_imeko.v10i2.1042
[24] A. Ferrero, M. Lazzaroni, S. Salicone, A calibration procedure for a digital instrument for electric power quality measurement, IEEE Trans. Instrum. Meas. 51 (2002), pp. 716-722. DOI: 10.1109/tim.2002.803293
[25] M. Peixoto, M. V. Moreno, N. Khider, Conception of a phantom in agar-agar gel with the same bio-impedance properties as human quadriceps, Sensors 21 (2021). DOI: 10.3390/s21155195
[26] L. Cristaldi, A. Ferrero, S. Salicone, A distributed system for electric power quality measurement, IEEE Trans. Instrum. Meas. 51 (2002), pp. 776-781. DOI: 10.1109/tim.2002.803300
[27] B. Sanchez, A. L. P. Aroul, E. Bartolome, K. Soundarapandian, R. Bragós, Propagation of measurement errors through body composition equations for body impedance analysis, IEEE Trans. Instrum. Meas. 63 (2014), pp. 1535-1544. DOI: 10.1109/tim.2013.2292272
[28] T. Ouypornkochagorn, Influence of electrode placement error and contact impedance error to scalp voltage in electrical impedance tomography application, iEECON 2019, 7th Int. Electr. Eng. Congr. Proc. (2019).
is our understanding of measurement evolving? acta imeko issn: 2221-870x december 2021, volume 10, number 4, 209 - 213 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 209

is our understanding of measurement evolving?

luca mari1
1 università cattaneo liuc, c.so matteotti, 22, 21053 castellanza (va), italy

section: research paper
keywords: foundations of measurement; measurement and quantification; measurement as empirical process; measurement as representation
abstract: traditionally understood as a quantitative empirical process, in the last decades measurement has been reconsidered in its aims, scope, and structure, so that the basic questions are again important: what kind of knowledge do we obtain from a measurement? what is the source of the acknowledged special efficacy of measurement? a preliminary analysis is proposed here from an evolutionary perspective.
citation: luca mari, is our understanding of measurement evolving?, acta imeko, vol. 10, no. 4, article 32, december 2021, identifier: imeko-acta-10 (2021)-04-32
section editor: francesco lamonaca, university of calabria, italy
received october 1, 2021; in final form november 20, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: luca mari, e-mail: lmari@liuc.it

1.
introduction the terminology about and around measurement is often not so specific, and sometimes even a bit sloppy. for sure, a long tradition allows us to assume a reasonably common understanding of a phrase like “to measure the length (or the mass, or…) of a given physical body”. but the claim that for example thermal comfort (as in [1]) is a measurable property is not as obviously meant in the same way by all relevant stakeholders. do different experts refer to the same sort of situations when they talk about the measurement of thermal comfort? what do they mean when they use instead phrases like “determination of thermal comfort”, “assessment of thermal comfort”, “quantification of thermal comfort”, “assignment of a value to thermal comfort”? and what are the conditions that make the determination, or the assessment, or… of thermal comfort a measurement? advancements of science and technology are not driven by terminological works, and therefore these kinds of questions could be dismissed as immaterial if our goals are scientific or technological. admittedly, indeed, clearer ideas about the meaning of a term – like “measurement”, or “measurand”, or “measurement uncertainty”, and so on – do not improve our ability to design measuring instruments and perform measurements. nevertheless, metrology (in the broad sense given by the international vocabulary of metrology (vim: [2]): “science of measurement and its application”, thus in principle not limiting it to physical quantities) is a very special body of knowledge, and this peculiarity suggests that terminology might be more important in metrology than in other experimental fields like physics and chemistry. it is a fact that some key contents of metrology derive at least in part from social understanding and agreement, and not only from the outcomes of observation and experimentation.
an obvious example is the identification of an individual quantity as the unit for a given kind, like the kilogram for mass: there can be empirical criteria to be taken into account, but the selection is ultimately conventional, and as such not falsifiable, in popper’s sense [3]. in fact, any discipline is grounded on some presuppositions and conventions that are chosen because they are mutually consistent, simple, elegant, …, and not because they are true. the peculiarity of metrology is that this applies to a substantial part of its body of knowledge: even though measurement is an experimental process (the idea that a measurement can be a gedankenexperiment, a thought experiment, sounds strange), it is as if the foundational role that metrology plays for all empirical sciences prevents it from obtaining its own foundations somewhere else. metrology is a foundation without a foundation [4]. while this seems to be a structurally obvious situation – if xi is founded on xi–1, which is founded on xi–2, which…, then the sequence must stop at some x0 that has no foundations –, acknowledging that metrology sometimes has the delicate role of x0 could generate an embarrassing doubt: isn’t metrology a “real” science then? and yet, how can we forget that what is possibly the “most foundational” component of the metrology of the last 150 years is the metre convention, i.e., first of all a political treaty? or as another example, closer to us in time, consider the interesting discussion about base quantities, triggered by the 2019 revision of the definitions of some units given in terms of numerical values assigned to some quantities modeled as constants (the speed of light in vacuum etc). does this revision imply that the very idea of base quantity is now unjustified? or that base quantities should be those of the defining constants (hence speed, action, etc)? or finally that nothing should be changed on this matter (and therefore that length, mass, etc should remain the base quantities)? the question has in fact some importance, but no experiments can be designed to provide an answer: again, it is a matter of shared understanding of the pros and the cons, and then of agreement. this prompts us to reconsider the role of terminology in metrology: where social agreement plays a key role, and when disagreements cannot be settled by seeking the truth, well-defined concepts and well-chosen terms – this is what terminological works aim at – may be useful, if not indispensable. moreover, the fact that measurement is a fundamental enabler not only of top science, but also of technology, commerce, health care, environmental protection, etc, adds a further reason to the special importance of terminology in metrology: again, this role requires shared concepts and terms, on which shared knowledge – like the one that guarantees the metrological traceability of measurement results – can be grounded. metrology is a social body of knowledge. basically everything that has been considered so far applies also and particularly to the very starting point of metrology: what is measurement? how should ‘measurement’ be defined?
(someone might object that the starting point of metrology is the definition of ‘quantity’, not the one of ‘measurement’, perhaps by referring to the fact that the first entry of the first chapter of the vim is about ‘quantity’; i respectfully disagree: ‘quantity’, like ‘property’, is a pre-metrological concept). in fact, there is nothing new in discussions about the scope of measurement and the terminological endeavor of providing an appropriate definition of ‘measurement’: the sometimes harsh clashes about the measurability of psychological properties (i.e., the issue of whether psychometrics is actually about measurement) highlight that on the table there is way more than a dictionary issue. the distinction between, say, “my opinion on the competence of the candidate is…” and “the result of the measurement of the competence of the candidate is…” is not only about the occurrence of the term “measurement” in the second sentence, and in fact the position that psychological properties can be measured has been under scrutiny for decades [5]. rather, a fundamental question is at stake here: what kind of knowledge do we obtain from a measurement? and also, given that “measurement is often considered a hallmark of the scientific enterprise and a privileged source of knowledge” [6]: “what [is] the source of [the] special efficacy” of measurement? [7]. this is the subject to which the present paper is devoted. 2. the received position: measurement as a coin with a euclidean side and a galilean side it is remarkable that the answer to these questions is also historically ambiguous, with two main lines that have been developed [8].
on the one hand, the euclidean tradition emphasizes the quantitative nature of measures, where in this sense “measurable” is basically synonym of “divisible by ratio”, as clearly explained for example by de morgan: “the term ‘measure’ is used [by euclid] conversely to ‘multiple’; hence [if] a and b have a common measure [they] are said to be commensurable” [9]. hence, this concept of measure applies first of all to numbers: “a measure of a number is any number that divides it, without leaving a reminder. so, 2 is a measure of 4, of 8, etc” [10], as in fact stated by euclid himself: “a number is part of a(nother) number, the lesser of the greater, when it measures the greater” (euclid). there is nothing necessarily empirical in this concept of measure, and in fact “in the geometrical constructions employed in the elements [...] empirical proofs by means of measurement are strictly forbidden” ([11]; in the introductory notes to the translation of euclid’s elements). on the other hand, the galilean tradition emphasizes the empirical nature of measurement, where before galileo “no one ever sought to get beyond the practical uses of number, weight, measure in the imprecision of everyday life” [12]. the tight connection between instrumentation and measurement witnesses the acknowledged role of measurement as a key enabler of the experimental method: a measurement is the process performed by making a physical device, i.e., a measuring instrument, interact with an empirical object according to the instructions provided by a measurement procedure. being quantitative and being empirical are orthogonal conditions: there are quantitative empirical processes, but a process may be quantitative and not empirical, or empirical and not quantitative. 
this means that in principle a process that is not a (galilean) measurement might produce a (euclidean) measure, and a process that is a (galilean) measurement might not produce a (euclidean) measure (such a lexical peculiarity was already highlighted by bunge [13], who discussed the difference between ‘measure’ and ‘measurement’; this becomes further clear by comparing the scopes of measure theory and a measurement theory). however, historically galileo came later, and drew from euclid and his interpretations, so that the principled independence became a factual convergence: (galilean) measurement was assumed to be a quantitative process, that produces (euclidean) measures. the lexicon maintains some traces of the coexistence of these two standpoints and of the interest of endorsing both, as witnessed in particular by the expression “weights and measures”, as if weighing were not a way to measure. indeed, euclid was concerned with geometric quantities, so that in the euclidean tradition measurable (or: “mensurable”, as it was said) were considered geometric quantities: according to hutton, “mensuration” is “the act, or art, of measuring figured extensions and bodies, or of finding the dimensions and contents of bodies, both superficial and solid” [10], so that for example in the case of temperature he used the term “observation” (this shows that the scope of measurement already broadened in the past!). plausibly, the synthesis of the euclidean and the galilean standpoints led to the idea that only extensive (taken from euclid) physical (taken from galileo) quantities are measurable, at least in the fundamental sense specified by campbell [14]. with some simplification, we may then summarize that the received view about 100 years ago was about measurement as characterized by two complementary components: measurement as a coin with a euclidean side and a galilean side. 
such a position is so strict – it is the outcome of the intersection of two independent standpoints, and thus inherits two sets of constraints – that not surprisingly trying to overcome it has been a target for several decades. in this perspective, the well-known report of the ferguson committee to the british association for the advancement of science, published in 1940 [15], stating that “the main point against the measurability of the intensity of a sensation was the impossibility of satisfactorily defining an addition operation for it” [16], can be read as a move to defend the orthodoxy of the synthesis of the euclidean and the galilean sides of the coin. 3. rethinking the received position in the last century the assumptions that measurement is quantification (the euclidean side) and is about (geometric) physical properties only (the galilean side) have been reanalysed, apparently by asking if really both such requirements need to be fulfilled, and to what extent. in particular, starting from stevens’ theory of types of scales [17], representationalism [18] explored how to broaden the euclidean side, sometimes by simply dropping any reference to the galilean side. in this sense we can read statements claiming that “the theory of measurement is difficult enough without bringing in the theory of making measurements” [19], or that a representation theorem “makes the theory of finite weak orderings a theory of measurement, because of its numerical representation” [20]. in this complex context, the definition given by the vim – “process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity” [2] – is still quite conservative, with both the euclidean side (“quantity”) and the galilean side (“experimental”) explicitly maintained.
is it sufficient for our society, that requires criteria of trust about determining / assessing / attributing values to … properties, like thermal comfort, that are not necessarily quantitative and are not entirely empirical? and is it sufficient for our society, in which the widespread digitalization is producing larger and larger amounts of data (the so-called “big data” phenomenon)? indeed, in an age of fake news and post-truth, providing criteria that make explicit and possibly operational the vim condition of “reasonable attribution”, and therefore such that not any data deserve to be called “measurement results”, seems to be a valuable achievement. in other words, our complex society would definitely benefit from an effective answer to kuhn’s question about the source of the special efficacy of measurement, while at the same time pushing toward reconsidering the actual necessity of euclidean and galilean conditions. and in fact conservative positions like vim’s are challenged today, so that measurement seems to have become a moving target. a significant and authoritative example is the standpoint of the us national institute of standards and technology’s simple guide for evaluating and expressing the uncertainty of nist measurement results, that defines ‘measurement’ as “an experimental or computational process that, by comparison with a standard, produces an estimate of the true value of a property of a material or virtual object or collection of objects, or of a process, event, or series of events, together with an evaluation of the uncertainty associated with that estimate, and intended for use in support of decision-making” [21]. 
with a mix of tradition (the reference to true values) and innovation (the evaluation of uncertainty as a necessary condition), here both euclidean and galilean conditions have been dropped: also non-quantitative properties are in principle measurable, and measurement can be also a non-empirical process about non-empirical (“virtual”) objects. is our understanding of what measurement is still evolving then, and in which direction(s)? 4. some possible evolutionary perspectives listing some necessary conditions that characterize measurement, and that plausibly are generally accepted, is not a hard task: measurement is (i) a process (ii) designed on purpose, (iii) whose input is a property of an object, and (iv) that produces information in the form of values of that property. indeed, (i) removes the ambiguity of using the term “measurements” also for the results of the process; (ii) highlights that measurement is not a natural process “that happens”; (iii) establishes that phrases like “to measure an object” are not correct, because measured are properties of objects, and not objects as such; (iv) characterizes measurement as an information process. however, not any such process is a measurement, thus acknowledging that not any data acquisition is a measurement. we may call “property evaluation” a process fulfilling (i)-(iv). what sufficient conditions characterize measurement as a specific kind of property evaluation? the answer does not seem as easy. the term “measurement” does not have a single inherent meaning and is not trademarked, so that nobody can be prevented from using it as she/he likes. nevertheless, without a common foundation, a foundational body of knowledge, as metrology is, is at risk of emptying, or at least of becoming unable to provide a convincing, socially agreeable, and useful answer to kuhn’s question: indeed, not any data acquisition process is claimed to have a “special efficacy”. 
as discussed above, the traditional, i.e., euclidean & galilean, answer to this question relies on coupling quantification and instrumentation: the assumption that only quantitative properties are measurable guarantees that measurement results are embedded in the nomological network generated by physical laws, from which specific, and then falsifiable, predictions and inferences can be drawn and the hypothesis of the correctness of the measurement results can be corroborated in turn; and the requirement that measuring instruments are empirical devices guarantees that the degree of objectivity of measurement results can be assessed by analysing how such instruments behave. this is the safe starting point. but if both such sides are erased, what remains of the coin? while sticking to the tradition is safe, it might be too strict with respect to what our society needs, as the definition of ‘measurement’ given by the nist seems to suggest. this is a key challenge for metrology, whose solution is then a matter of social responsibility, not truth seeking. in this context, in which respect of the tradition and new societal needs must be balanced, the possible evolutionary perspectives of measurement can be considered along four main complementary, though somewhat mutually connected, dimensions: – measurable entities as quantitative or non-quantitative properties (i.e., the reference to the euclidean tradition); – measurable entities as physical or non-physical properties; – measuring instruments as technological devices or human beings; – measurement as an empirical or an informational process, and therefore the relation between measurement and computation (i.e., the reference to the galilean tradition). again with the explicit admission that at stake here is adequacy, not truth, let us shortly discuss each of these issues. 4.1. 
measurable entities as quantitative or non-quantitative properties as stevens’ theory of scale types shows, the distinction between quantitative and non-quantitative properties is not binary. the strongest type is absolute: an absolute evaluation is additive and has both a natural unit and a natural zero, as is the case of counting. the weakest type is nominal: a nominal evaluation only classifies objects by property. several intermediate types exist between absolute and nominal (e.g., ratio, interval, and ordinal, in the initial version of stevens’ theory), and there is not a single objective criterion to decide where a property stops being quantitative and becomes non-quantitative. for example, according to the vim a total order is sufficient for a property to be a quantity, whereas the axiomatic approaches developed from hölder’s [22] consider “continuity as a feature of the scientific concept of quantity”. the connection between being quantitative and being measurable inherits this ambiguity [23]. for sure, evaluations performed in richer scales convey more structural information, but this does not seem a sufficient criterion to rule out any given type from the scope of measurement: only by convention can the decision be made whether nominal (or ordinal, or...) evaluations can be measurements (and what conditions are required to make quantitative a property, for what it is worth). 4.2. measurable entities as physical or non-physical properties independently of the scale type, the condition of being measurable could be connected to the nature of the considered properties, and in particular to their being physical. a plausible, good justification is the requirement to assess, and possibly to control, the degree of objectivity of the behaviour of the measuring instrument.
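the graded character of scale types discussed in section 4.1 can be made concrete with a small sketch (an illustration added here, not taken from the paper): an affine rescaling between two interval scales of temperature preserves order comparisons but not ratios, which is why “40 °c is twice 20 °c” is not a meaningful statement on an interval scale.

```python
def c_to_f(c):
    """Affine transformation between two interval scales of temperature."""
    return 9.0 / 5.0 * c + 32.0

c1, c2 = 20.0, 40.0
f1, f2 = c_to_f(c1), c_to_f(c2)

# order is invariant under the affine rescaling...
order_preserved = (c1 < c2) == (f1 < f2)          # True

# ...but ratios are not: 40/20 = 2, while 104/68 is about 1.53
ratio_preserved = abs(c2 / c1 - f2 / f1) < 1e-9   # False

print(order_preserved, ratio_preserved)
```

a ratio scale (e.g. kelvin) would admit only similarity transformations x → ax, under which ratios are preserved as well.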
indeed, this is guaranteed by the operation of a physical sensor at the core of the instrument, where only a physical (or chemical, or biological) property may affect the transduction performed by the sensor. however, a non-physical property, like thermal comfort, may be evaluated as a function of one or more physical properties, such as air temperature and humidity, in a process that is structurally the same as those traditionally considered to be indirect measurements, and where values of such physical properties can be then obtained by means of sensors. the key difference between the evaluation of, say, thermal comfort and density – the latter being a case of indirect measurement through the measurement of mass and volume – is that non-physical properties miss the rich nomological network provided by physics, so that their combination function is not as substantially validated. whether this is sufficient to rule out non-physical properties from the scope of measurement seems to be again a matter of convention. 4.3. measuring instruments as technological devices or human beings complementary to the option that also non-physical properties are measurable, some evaluations directly performed by human beings could be accepted as measurements. the relatively long history of what has been considered psychophysical measurement shows that there is nothing really new in this. there are in fact three strategies to develop humanbased instruments that attribute values to (physical or, more usually, non-physical) properties. first, the behaviour of a “typical” individual is idealized in a model, like in the case of the luminosity function, that describes the sensitivity of a “standard” human eye to different wavelengths. 
second, a statistic of the behaviour of a set of human beings (e.g., their average response) is considered, under the assumption that individual peculiarities are compensated in a sufficiently large sample, like when the quality of a product or a service is evaluated by synthesizing the responses given by several customers. third, an individual or a small group of individuals operates, under the condition that they are domain experts and therefore their evaluation can be considered to be calibrated against some agreed standards, like in the case of gymnastics judges and wine sommeliers. while at least some cases of the first strategy are widely accepted as measurements, as the inclusion of the candela as the si unit of luminous intensity witnesses, whether and under what conditions human beings can be measuring instruments, possibly operating with the support of guidelines, checklists, etc, is again a controversial issue. 4.4. measurement as an empirical or an informational process measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component. rather, the issue here is whether there can be measurements that are entirely informational processes, with no empirical components at all (note that this is not the case of gymnastics judges and wine sommeliers mentioned above: they are expected to operate by (empirically) observing gym competitions and tasting wines). there are at least two cases at stake. one is about the evaluation of properties that are in turn informational, for example the number of lines of code in the source of a software program. as quoted in section 3, the “computational process” about a “virtual object” to which the nist definition refers could be such, and actually shares several structural features with the processes that are commonly accepted to be measurements. 
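the “number of lines of code” example just mentioned can be sketched as follows (a toy illustration with an invented function name): the whole evaluation is computational and acts on a virtual object, a source string, with no empirical interaction at all, which is exactly what makes its status as measurement debatable.

```python
def count_source_lines(source: str) -> int:
    """Count the non-blank lines in a source-code string."""
    return sum(1 for line in source.splitlines() if line.strip())

program = """def f(x):
    return x + 1

print(f(41))
"""

print(count_source_lines(program))  # 3 non-blank lines
```

the process fulfils conditions (i)-(iv) of section 4: it is a purposefully designed process whose input is a property of an (informational) object and whose output is a value of that property.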
more controversial is instead the hypothesis to consider to be a measurement any computation performed on values of properties, like when one is asked to compute the acceleration that a given force would produce if applied to a body of a given mass. of course, the information on the force and the mass (the “input quantities”) could also include an uncertainty, that in this case should be somehow propagated to the acceleration, and someone could decide that this is sufficient to make such a computation a measurement, by then accepting that what has been propagated is a “measurement” uncertainty, through a “measurement” model. in an evolutionary situation, this is also a possible standpoint. 5. conclusions were we in charge of updating a definition of ‘measurement’, for example for a new edition of the vim, what would we propose, then? how tightly would we stick to the traditional conception of measurement as a quantitative empirical process? or what criterion would we adopt toward a different, and plausibly broader, characterization? conscious that there is not one right, or true, answer, and by taking for granted the necessary conditions listed at the beginning of section 4, i dare to suggest – as a working hypothesis – that the most fundamental and most characterizing task of measurement is to produce information that is explicitly and socially justifiable (it is also the conclusion reached in [8]). this is not related to the quality of the produced information nor to the scope of the process (as the vim states, measurement should be characterized “irrespective of the level of measurement uncertainty and irrespective of the field of application” [2]), but to the condition that a measurement is a “white box” process, so that the quality of its results – be it good or bad – can always in principle be justified. 
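the force-mass computation discussed in section 4.4 can be sketched as a first-order, gum-style propagation of standard uncertainties through a = f / m (the numerical values are invented for illustration; whether the propagated quantity deserves to be called a “measurement” uncertainty is precisely the point under discussion).

```python
import math

def acceleration_with_uncertainty(F, u_F, m, u_m):
    """Return a = F/m and its standard uncertainty by first-order propagation.

    Sensitivity coefficients: da/dF = 1/m, da/dm = -F/m**2,
    assuming F and m are uncorrelated.
    """
    a = F / m
    u_a = math.sqrt((u_F / m) ** 2 + (F * u_m / m**2) ** 2)
    return a, u_a

a, u_a = acceleration_with_uncertainty(F=10.0, u_F=0.1, m=2.0, u_m=0.02)
print(round(a, 3), round(u_a, 4))  # 5.0 and ~0.0707
```

nothing empirical happens in this computation; only the input values of force and mass may carry an empirical pedigree.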
accordingly, the source of the special efficacy of measurement, as investigated by kuhn, is the possibility of reaching a common understanding on how trustworthy its results are, along the two key dimensions of the objectivity and the intersubjectivity of the provided information [24], [25]. this explains the strategic importance of some components of the metrology of physical quantities, like the widely agreed definition of measurement units and the condition of metrological traceability of measurement results to such units through the appropriate calibration of measuring instruments. whether and how a sufficient objectivity and intersubjectivity of the information produced by processes that aim at being acknowledged as measurements can be obtained: in the perspective we have proposed here, this is the key challenge for an evolutionary understanding of measurement. references [1] iso 7730:2005, ergonomics of the thermal environment – analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria, international organization for standardization, geneva, 2005. [2] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms, 3rd ed., paris: joint committee for guides in metrology, 2012. online [accessed 15 december 2021] https://www.bipm.org/en/committees/jc/jcgm/publications. [3] k. popper, the logic of scientific discovery, routledge, abingdon-on-thames, 1959. [4] l. mari, the problem of foundations of measurement, measurement, 38(4) (2005) pp. 259-266. doi: 10.1016/j.measurement.2005.09.006 [5] j. michell, measurement in psychology: a critical history of a methodological concept, cambridge university press, cambridge, 1999. [6] e. tal, measurement in science, in: the stanford encyclopedia of philosophy, e.n. zalta (ed.), 2020.
online [accessed 15 december 2021] https://plato.stanford.edu/archives/fall2020/entries/measurem ent-science [7] t.s. kuhn, the function of measurement in modern physical science, isis, 52(2) (1961) pp. 161-193. [8] l. mari, m. wilson, a. maul, measurement across the sciences – developing a shared concept system for measurement, springer nature, 2021. online [accessed 15 december 2021] https://link.springer.com/book/10.1007%2f978-3-030-65558-7 [9] a. de morgan, the connection of number and magnitude: an attempt to explain the fifth book of euclid, taylor and walton, london, 1836. online [accessed 15 december 2021] https://archive.org/details/connexionofnumbe00demorich [10] c. hutton, a mathematical and philosophical dictionary, johnson, london (freely available on google books), 1795. [11] euclid’s elements of geometry, the greek text of j. l. heiberg (1883-1885) edited, and provided with a modern english translation, by richard fitzpatrick, 2008. online [accessed 15 december 2021] http://farside.ph.utexas.edu/books/euclid/euclid.html [12] a. koyré, du monde de l’à peu près à l’univers de la précision, in: a. koyré (ed.), etudes d’histoire de la pensée philosophique (pp. 341-362), gallimard, paris, 1948. [13] m. bunge, on confusing ‘measure’ with ‘measurement’ in the methodology of behavioral science, in: m. bunge (ed.), the methodological unity of science, d. reidel, dordrecht-holland, 1973. [14] n. r. campbell, physics: the elements, cambridge university press, cambridge, 1920. [15] a. ferguson, c. s. myers, r. j. bartlett, h. banister, f. c. bartlett, w. brown, w. s. tucker, final report of the committee appointed to consider and report upon the possibility of quantitative estimates of sensory events. report of the british association for the advancement of science, 2 (1940) pp. 331-349. [16] g. b. rossi, measurability, measurement, 40 (2007) pp. 545-562. doi: 10.1016/j.measurement.2007.02.003 [17] s. s. 
stevens, on the theory of scales of measurement, science, 103(2684) (1946) pp. 677-680. [18] d. h. krantz, r. d. luce, p. suppes, a. tversky, foundations of measurement, vol. 1: additive and polynomial representations, academic press, new york, 1971. [19] h. e. kyburg, theory and measurement, cambridge university press, cambridge, 1984. [20] p. suppes, representation and invariance of scientific structures, csli publications, stanford, 2002. [21] a. possolo, simple guide for evaluating and expressing the uncertainty of nist measurement results, technical note, national institute of standards and technology, gaithersburg, md, 2015. doi: 10.6028/nist.tn.1900 [22] o. hölder, die axiome der quantität und die lehre vom mass. berichte über die verhandlungen der koeniglich sächsischen gesellschaft der wissenschaften zu leipzig, mathematisch-physikalische klasse, 53 (1901) pp. 1-46. part 1 translated in j. michell, c. ernst, the axioms of quantity and the theory of measurement, j. mathematical psychology, 40(3) (1996) pp. 235-252. doi: 10.1006/jmps.1997.1178 [23] l. mari, a. maul, d. torres irribarra, m. wilson, quantities, quantification and the necessary and sufficient conditions for measurement, measurement, 100 (2016) pp. 115-121. doi: 10.1016/j.measurement.2016.12.050 [24] a. maul, l. mari, d. torres irribarra, m. wilson, the quality of measurement results in terms of the structural features of the measurement process, measurement, 116 (2018) pp. 611-620. doi: 10.1016/j.measurement.2017.08.046 [25] a. maul, l. mari, m. wilson, intersubjectivity of measurement across the sciences, measurement, 131 (2019) pp. 764-770.
First Considerations on Post Processing Kinematic GNSS Data during a Geophysical Oceanographic Cruise

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 10-16

Valerio Baiocchi 1, Alessandro Bosman 2, Gino Dardanelli 3, Francesca Giannone 4
1 Dipartimento di Ingegneria Civile, Edile ed Ambientale (DICEA), Sapienza Università di Roma, Via Eudossiana 18, I-00184, Italy
2 Istituto di Geologia Ambientale e Geoingegneria, Consiglio Nazionale delle Ricerche (CNR-IGAG), Rome, Italy
3 Department of Civil, Environmental, Aerospace, Materials Engineering (DICAM), University of Palermo, Italy
4 Department of Engineering, Niccolò Cusano University, Via Don Carlo Gnocchi 3, Rome, I-00166, Italy

Section: Research Paper
Keywords: GNSS; bathymetry survey; RTK-LIB; geoid; Tyrrhenian Sea
Citation: Valerio Baiocchi, Alessandro Bosman, Gino Dardanelli, Francesca Giannone, First considerations on post processing kinematic GNSS data during a geophysical oceanographic cruise, ACTA IMEKO, vol. 10, no. 4, article 6, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-06
Section Editor: Silvio Del Pizzo, University of Naples 'Parthenope', Italy
Received June 1, 2021; in final form December 6, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Italian National Research Council (CNR).
Corresponding author: Valerio Baiocchi, e-mail: valerio.baiocchi@uniroma1.it

1. Introduction

The problem of calculating the connection between on-shore and off-shore heights is still open and much debated. Although both are heights, they refer to reference systems with non-equivalent definitions, and are therefore conceptually and numerically very different; this often makes the connection between the two types of heights complex [1]. The terrestrial altimetric reference system is generally based on the definition of a specific equipotential surface of the gravitational field, identified by the mean level of a reference tide gauge. This method has been used in Italy both historically [2] and currently [3], and various national terrestrial altimetric systems have been defined; for this reason the possibility of unifying them at a regional [4] and global [5] level is currently a subject of research. GNSS systems have made it possible for the first time to obtain highly accurate plano-altimetric measurements (and therefore also elevation measurements) both on land and at sea; however, these measurements do not refer to a physical reality but to a mathematical surface, the ellipsoid of rotation. On land, ellipsoidal heights are transformed into orthometric heights using local geoid models [6], which are generally more accurate, or global models, which are generally less accurate [7].
Abstract: Differential GNSS positioning on vessels is of considerable interest in various fields of application, such as navigation aids and precision positioning for geophysical surveys or sampling purposes, especially when high-resolution bathymetric surveys are conducted. However, ship positioning must be treated as a kinematic survey, with all the associated problems. The possibility of using high-precision differential GNSS receivers in navigation is of increasing interest, also owing to the very recent availability of low-cost differential receivers that may soon replace classic navigation receivers based on the less accurate point positioning technique. The availability of greater plano-altimetric accuracy, however, requires an increasingly better understanding of planimetric and altimetric reference systems. In particular, the results allow preliminary considerations on the congruence between terrestrial reference systems (to which a GNSS survey can easily be referred) and marine reference systems (connected to the national tide-gauge network). In spite of the fluctuations due to the physiological, continuous variation of the ship's attitude, the GNSS plot faithfully followed the trend of the tidal variations and highlighted the shifts between the GNSS plot and the tide gauges due to the different materialisation of the relative reference systems.

The problem of elevations at sea is even more complex because the sea surface is not an equipotential surface, which is "... characterized by uniform temperature and density and free of perturbations related to currents, winds and tides" [8]. Unfortunately, these are ideal conditions that are never found in nature, and for this reason the materialisation of altimetric reference systems at sea is profoundly different from that of terrestrial altimetric reference systems [9]. It is important to underline that the interest here is mainly in seafloor depth, in order to construct high-resolution digital elevation models (DEMs) of the seafloor, habitat mapping and bathymetric cartographies, which are mainly used for navigational, safety and scientific bathymetric surveys [10]-[12]. Elevations at sea are often referred to a conventional local zero identified with a local tide gauge; conventionally, the tide gauge's "zero" has no connection with the "zero" of the national altimetric system. Bathymetries are referred to the tide-gauge reference system for prudential reasons related to navigation and nautical charts [9]. Therefore, there is no congruence between the zero of the tide gauges and the national altimetric system, nor even between the various tide gauges at a given time. It must also be taken into account that the connection between the national altimetric system and the elevations on the islands cannot be made by precision geometric levelling; it is often made by trigonometric levelling or by GNSS levelling corrected later with the same geoid models [13]. The interest in the correct relationship between terrestrial and tidal altimetry systems is constantly increasing, both because of the growing interest in the automatic extraction of coastlines [14]-[16] and because of the very recent availability of low-cost GNSS receivers that can acquire in differential mode, allowing centimetric, and potentially also millimetric [17], planimetric accuracy even at sea. In this paper, the first results of a post-processing kinematic (PPK) survey performed during the "THYGRAF – Tyrrhenian Gravity Flow" oceanographic campaign in the southern Tyrrhenian Sea, conducted on board the Urania R/V (research vessel), are reported. The aim was to make an initial comparison between the altitudes acquired by the GNSS device and corrected with a geoid model and those recorded simultaneously by the tide gauges present in the area.
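The comparison just described rests on the pointwise reduction H = h − N: the geoid-ellipsoid separation N, read from a geoid model such as ITALGEO05, is subtracted from the GNSS ellipsoidal height h to obtain an orthometric height H comparable with tide-gauge levels. A minimal sketch; the numeric values are purely illustrative, not taken from the survey:

```python
def orthometric_height(h_ell: float, undulation_n: float) -> float:
    """Classic approximation H = h - N: orthometric height H from the
    ellipsoidal height h and the geoid-ellipsoid separation N, both in
    metres and both referred to the same point."""
    return h_ell - undulation_n

# Illustrative values: h from a kinematic GNSS solution, N interpolated
# from a geoid-model grid at the same position (both fabricated here).
h_gnss = 64.351   # ellipsoidal height (m)
n_geoid = 46.120  # geoid undulation (m)
H = orthometric_height(h_gnss, n_geoid)
```

The same reduction is applied sample by sample along the whole kinematic track before any comparison with the hydrometric levels.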
In the section "Materials and methods" the instruments used and the measurements carried out will be illustrated; in the section "GNSS data processing" the processing carried out and the different strategies used will be reported; finally, in the section "Results and discussion" the results obtained will be compared with the tidal data and the conclusions will be illustrated.

2. Materials and methods

During the THYGRAF oceanographic survey, conducted on board the Urania R/V from 12th to 19th February 2013, the geodetic team of the scientific crew was engaged in experimenting with and validating some innovative techniques of satellite surveying in navigation. For this purpose, the geodetic-class GPS-GNSS receiver Topcon Legacy-E was used, and the antenna (Topcon PGA) was installed on the top of the ship itself, thanks to the collaboration of the ship's personnel (in Figure 1, the square antenna to the right of the main mast). Before departure, a survey was carried out with a total station to measure the antenna height with respect to the waterline; the result was 18.132 m with respect to the bottom of the antenna mount. The installation height was necessary to avoid cycle slips and electromagnetic disturbances from the ship's machinery, but it certainly amplified the variability of the recorded three-dimensional coordinates due to the continuous, physiological variation of the ship's attitude. The antenna was thus installed on one of the highest points of the ship, where the only obstruction was the highest part of the mast. The antenna was in its basic configuration, without multipath-limiting devices, because these could have increased the effect of the wind; for this first experiment it was therefore decided not to use them. No extensive analysis of antenna effects was carried out in this first experiment.
The surveyed data (acquired in double frequency with a sampling interval of one second) were subsequently post-processed with respect to the permanent stations of the new national dynamic network; these reference measurements were made available by various agencies (University of Palermo, Calabria Region, etc.). The survey and the subsequent post-processing made it possible to evaluate the three-dimensional position of the antenna, statically fixed to the body of the ship throughout the oceanographic survey, with centimetre accuracy. It is important to underline that the antenna was obviously affected by all the displacements and attitude variations (heave, roll, pitch and yaw) that affected the ship during its navigation; this series of measurements allowed us to evaluate those variations with great accuracy. The GNSS survey allowed the altimetric comparison between the data provided by the tidal networks (referred to a conventional local zero with respect to a local tide gauge) and those observed in navigation (ellipsoidal heights that must be corrected with the geoid model ITALGEO05). This analysis highlights a fundamental aspect linked to the compatibility between different altimetric reference systems, needed to allow a connection between on-shore and off-shore heights.

3. GNSS data processing

The GNSS receiver positioned on the Urania R/V (Figure 1) acquired dual-frequency data with a 1 s sampling interval from 14th to 18th February 2013; during the research survey the marine weather conditions were optimal. The permanent station of Tropea (Table 1), belonging to the permanent network of the Calabria Region [18], was selected for data post-processing; hourly RINEX files with a sampling interval of 1 s were downloaded.

Figure 1. GNSS antenna and its installation on the Urania ship.

3.1. RTKLIB

The first step of the data processing was performed with RTKLIB ver. 2.4.2, an open-source package for GNSS positioning developed by Tomoji Takasu in 2007 [20], [21]. RTKLIB can process data from different satellite systems (GPS, GLONASS, Galileo, QZSS and BeiDou), supports both real-time and post-processing approaches, and offers various positioning modes: single, DGPS/DGNSS, kinematic, static, moving-baseline, fixed, PPP-kinematic, PPP-static and PPP-fixed. Unfortunately, RTKLIB provides a graphical interface where it is possible to upload only two RINEX files, one for the rover and a second for the base station; furthermore, the software does not manage the raw format of the Topcon receiver (Figure 2). Therefore, to facilitate the processing operations, the data were managed with the open-source package TEQC by UNAVCO [22]. TEQC is a toolkit with three main functions, from which it gets its name: translation, editing, and quality checking. For our purposes, the translation and editing functions were exploited: the first converts (translates) GNSS raw receiver files into RINEX format (observation and navigation files); the second cuts or splices RINEX files. The data in TPS format acquired by the receiver positioned on the Urania R/V were then converted to RINEX and reorganised into daily files with the TEQC software. In addition, the data from the TROP permanent station, already in RINEX format, were edited with TEQC to create a single daily file. The output files from TEQC were imported into RTKLIB, together with the precise ephemerides released by the Crustal Dynamics Data Information System (CDDIS) [4], and processed in kinematic mode. For DOY046 (acquisition period from 15/02/13 00:00:00 to 15/02/13 23:59:59) and DOY047 (acquisition period from 16/02/13 00:00:00 to 16/02/13 23:59:59), the number of fixed/float solutions (Table 2) and the estimated standard deviations (Table 3) are reported.
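Counts of this kind can be derived directly from the RTKLIB solution file: in the default .pos output each record carries a quality flag Q (1 = fixed, 2 = float, 5 = single). A minimal, illustrative parser; the column position of Q assumes the default time and coordinate formatting, and the sample records are fabricated:

```python
from collections import Counter

Q_LABELS = {1: "fixed", 2: "float", 5: "single"}

def solution_stats(pos_lines):
    """Count RTKLIB solution types and their percentages.

    Assumes the default .pos layout: header/comment lines start with
    '%', and in each data record the 6th whitespace-separated field is
    the quality flag Q (1 = fixed, 2 = float, 5 = single).
    """
    counts = Counter()
    for line in pos_lines:
        if not line.strip() or line.startswith("%"):
            continue  # skip header and empty lines
        q = int(line.split()[5])
        counts[Q_LABELS.get(q, "other")] += 1
    total = sum(counts.values())
    return {label: (n, 100.0 * n / total) for label, n in counts.items()}

# Fabricated records, for illustration only (not from the survey):
sample = [
    "% GPST latitude(deg) longitude(deg) height(m) Q ns sdn(m) sde(m) sdu(m)",
    "2013/02/15 00:00:01.000 38.686 15.897 45.12 1 7 0.003 0.003 0.008",
    "2013/02/15 00:00:02.000 38.686 15.897 45.13 2 7 0.005 0.005 0.012",
]
```

Running `solution_stats` over a full daily .pos file yields the per-type totals and percentages in the form reported in Table 2.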
Unfortunately, the TROP permanent station acquisitions for the period 19:00:00-19:59:59 were not available, so the RTKLIB software could only process the solution in single position mode (Table 2). As a consequence, the coordinates related to these 2412 observations were not considered in the results (Table 3), nor in any of the figures related to DOY047, because of the very low and unreliable solution precision. The positions measured in kinematic mode, obtained by processing the GNSS data acquired during the two days of navigation, show a similar level of precision, with mean values of 0.005 m for the planimetric coordinates and 0.012 m for the heights. The planimetric navigation paths of the ship are represented in GIS software (QGIS): on the first day (DOY046) the path was mainly a round trip between the islands of Stromboli and Lipari, and therefore away from the mainland. On the second day (DOY047) navigation started with a long "transfer" section from the islands to the mainland in a north-west to south-east direction, followed by navigation along the coast and some grids in specific areas. In Figure 3 the navigation paths of DOY046 (red track) and DOY047 (yellow track) and the location of the permanent GNSS station TROP (red triangle) are represented. The daily trend of the heights, despite the high variability due to the vessel attitude, shows a periodic variation almost certainly due to the tidal effect (Figure 4a). During the second day (Figure 4b), a discontinuity is observed in the central hours of the day; this trend is probably due to the well-known effects of local tidal disturbance observed in the Messina Strait.

Table 1. Coordinates and characteristics of the TROP station as reported on the operator's website (reference system WGS84-ETRS89; EPSG: 4937 [19]).
  Name station:  TROP
  Latitude:      38°40'45.6525" N
  Longitude:     15°53'48.2067" E
  h in m:        100.086
  Antenna type:  LEIAT504GG LEIS
  Receiver:      LEICA GRX1200GGPRO

Figure 2. RTKLIB options for the PPK data processing.

Table 2. Positions estimated and their solutions.
  Position estimated   DOY046         DOY047
  Total                85563          85233
  Fixed solution       47573 (56 %)   39609 (46 %)
  Float solution       37990 (44 %)   43212 (51 %)
  Single solution      0              2412 (3 %)

Table 3. Estimated standard deviations of the positions.
         DOY046                       DOY047
         sdN (m)  sdE (m)  sdU (m)    sdN (m)  sdE (m)  sdU (m)
  Avg    0.006    0.005    0.012      0.005    0.005    0.012
  Max    0.543    0.407    0.988      0.637    0.486    1.065
  Min    0.003    0.003    0.008      0.003    0.002    0.008

Figure 3. Navigation paths on DOY046 (red track) and DOY047 (yellow track) and the location of the permanent GNSS station TROP (red triangle).

3.2. Topcon Tools

The processing of the heights measured in kinematic mode from the GNSS data in RTKLIB was repeated for verification with a commercial software package, Topcon Tools. This verification was carried out because RTKLIB sometimes has small "bugs" [21] and because, according to some authors, Topcon Tools shows very accurate results, in some cases even comparable with those of scientific software; this is probably due to the complete configurability of the processing [23]. The Topcon Tools package ver. 8.2.3 by Topcon Corporation was used for the kinematic measurements. The software allows the processing of data from different devices, such as total stations, digital levels and GNSS receivers, and it has been used in several technical-scientific applications [24], [25]. Topcon Tools uses the modified Hopfield model for the tropospheric corrections [26]. The employed positioning mode was code-based differential ("code diff"); the time range and the cut-off angle were set to 15 seconds and 10 degrees, respectively. Recently, Dardanelli et al. [27] showed that the hypothesis of a normal distribution is confirmed for most of the pairs; specifically, the static vs. NRTK pair seems to achieve the best congruence, while, where the PPP approach is involved, the pairs obtained with the CSRS software achieve better congruence than those involving the RTKLIB software. Although the lowest congruences seem to characterise the pairs involving RTKLIB, this result should not be considered a criticism of the performance of this well-known open-access program, which is undoubtedly one of the most useful GNSS processing software packages available, given its very straightforward applicability, and considering also that our analysis is limited to a few hours of data. The results obtained from the Topcon Tools processing are different from those obtained with RTKLIB, but the general trend, which seems mainly to reproduce the tidal effects, is very similar (Figures 5a and 5b), as are the numerical results (Table 4).

4. Results and discussion

The ellipsoidal heights processed with RTKLIB and Topcon Tools were then compared with the hydrometric levels of some stations belonging to the RMN – National Tide-Gauge Network [28].

Figure 4. Ellipsoidal height variations during DOY046 (a) and DOY047 (b).

Table 4. Positions estimated with both packages and their estimated standard deviations.
  Topcon Tools:
         DOY046                       DOY047
         sdN (m)  sdE (m)  sdU (m)    sdN (m)  sdE (m)  sdU (m)
  Avg    0.101    0.062    0.118      0.078    0.078    0.051
  Max    0.140    0.106    0.150      0.283    0.283    0.112
  Min    0.070    0.032    0.100      0.042    0.042    0.013
  RTKLIB:
         DOY046                       DOY047
         sdN (m)  sdE (m)  sdU (m)    sdN (m)  sdE (m)  sdU (m)
  Avg    0.006    0.005    0.012      0.005    0.005    0.012
  Max    0.543    0.407    0.988      0.637    0.486    1.065
  Min    0.003    0.003    0.008      0.003    0.002    0.008

Figure 5. Ellipsoidal height variations during DOY046 (a) and DOY047 (b): comparison between RTKLIB (blue dots) and Topcon Tools (orange dots) results. In red, the 10-minute moving average from Topcon Tools; in grey, that from RTKLIB.
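The 10-minute moving average used to damp the attitude-induced oscillations of the raw height series can be sketched as follows; the trailing-window formulation and the edge handling (shorter windows at the start) are illustrative choices, and the sample series is fabricated:

```python
def moving_average(values, window):
    """Trailing moving average of a regularly sampled series.

    For a 1 Hz height series, window = 600 gives a 10-minute
    smoothing of the ship's attitude-induced oscillations.
    Near the start, the average is taken over the samples seen so far.
    """
    out = []
    running = 0.0
    for i, v in enumerate(values):
        running += v
        if i >= window:
            running -= values[i - window]  # drop the sample leaving the window
        out.append(running / min(i + 1, window))
    return out

# Tiny fabricated series (metres); window shortened for readability.
smoothed = moving_average([18.0, 18.2, 18.4, 18.2, 18.0], window=3)
```

In the survey the same filter, with a 600-sample window at 1 Hz, produces the smoothed curves plotted against the tide-gauge records.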
RMN stations close to the study area are "Ginostra", "Strombolicchio", "Reggio Calabria" and "Messina" (Figure 6). Unfortunately, the data from the Strombolicchio station are not available for the period under examination. We decided to compare the hydrometric level with the ellipsoidal height only for the tide-gauge stations of Ginostra and Palinuro, while Reggio Calabria and Messina were not selected because they showed a very different trend, probably due to the well-known local effects near the Messina Strait (Figure 7). At first, the variations measured by the tide gauges were compared only in terms of trend with the heights measured by the GNSS receiver on board the ship. In order to filter the heights and limit the effect of the oscillations caused by navigation, a moving average over ten-minute periods was adopted (the grey trend in the figure). As already observed, there is an apparently similar trend between the tide gauge and the averaged GNSS altitudes, but also a constant shift between them. For DOY046 the trend agreement continues very well for the whole day, while for DOY047 there seems to be a noticeable discontinuity at a certain point. The causes may be various, but the aforementioned tidal effects near the strait are probably the main one (Figure 8). For the reasons outlined above, it was decided to continue the analysis only on DOY046, which seems more significant. It is important to remember that the orthometric correction operated with a geoid model brings the elevations to an equipotential surface of the gravitational field at a given point of the national territory (for Italy, Genoa), which in general does not coincide with the sea level at another position of the national territory at a given time [29], [30]. Moreover, there is a problem of connection of the various tide gauges to the national altimetric system, and conventionally the "zero" of a tide gauge has no connection with the "zero" of the national altimetric system. The ellipsoidal elevations measured with the GNSS receiver were converted into orthometric elevations (H in Figure 9) with respect to the national altimetric system using the geoid model ITALGEO2005, which is accredited with 2.5 cm of accuracy [5]. However, it should be noted that the reported accuracy is estimated on land, where geoid-ellipsoid separation values measured on altimetric benchmarks are used to improve the estimate of the geoid-ellipsoid separation itself. In our case some of the survey points are close to the coast, and therefore the estimate of the geoid-ellipsoid separation should be reliable, while for offshore points the estimate is certainly less reliable. Moreover, the connection between the national and island elevation systems cannot be made by precision geometric levelling; it is often made by trigonometric or GNSS levelling corrected later with the same geoid models. The elevations were then converted from ellipsoidal to orthometric using the software "GeoTrasformer" [31], which applies the resampling algorithms to the gridded geoid-ellipsoid separation values provided by the Italian Military Geographic Institute (IGMI), the official national geodetic agency that released the grids of the ITALGEO2005 model.

Figure 6. Tide-gauge stations available in the study area.

Figure 7. Tide gauges and path of the navigation during DOY046 and DOY047.

Figure 8. Tide-gauge trends and mean heights from the GNSS receiver for DOY046 (a) and DOY047 (b); note the different origin of the heights of the two series.

Figure 9. Orthometric heights for DOY046, converted using the ITALGEO2005 geoid model.
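The grid resampling mentioned above, estimating the geoid-ellipsoid separation N at an arbitrary position from regularly gridded values, is commonly done by bilinear interpolation; a minimal sketch of that technique, where the grid values, origin and spacing are fabricated for illustration and not taken from the ITALGEO2005 grids:

```python
def bilinear_n(grid, lat0, lon0, step, lat, lon):
    """Bilinearly interpolate a gridded geoid undulation N (metres).

    grid[i][j] holds N at latitude lat0 + i*step and longitude
    lon0 + j*step, i.e. a regular grid as typically released for
    geoid models. (lat, lon) must fall inside the grid.
    """
    x = (lon - lon0) / step          # fractional column index
    y = (lat - lat0) / step          # fractional row index
    j, i = int(x), int(y)            # lower-left cell corner
    dx, dy = x - j, y - i            # position within the cell, in [0, 1)
    return ((1 - dx) * (1 - dy) * grid[i][j]
            + dx * (1 - dy) * grid[i][j + 1]
            + (1 - dx) * dy * grid[i + 1][j]
            + dx * dy * grid[i + 1][j + 1])

# Fabricated 2x2 cell of N values (metres) around a hypothetical position.
grid = [[46.10, 46.18],
        [46.22, 46.30]]
n = bilinear_n(grid, 38.5, 15.5, 0.1, 38.55, 15.55)
```

At the cell centre the result is simply the mean of the four corner values; production tools refine this scheme (e.g. with biquadratic or spline resampling), but the principle is the same.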
To compare the surveyed data with the tide-gauge information, it is necessary to subtract the estimated antenna elevation (Section 2) from the orthometric values (GNSS measurements corrected with ITALGEO2005). It must be considered, however, that during navigation the ship can progressively but significantly change its height due to the discharge of waste water and to fuel consumption; such variations can reach several centimetres during a campaign and may not be constant, so an average value over the day can still give reliable information. It was decided to take an average of the orthometric heights for the entire day, to reduce the effect of the tide, obtaining an average height of 18.276 m (including the height of the PGA2 antenna). Considering the approximations mentioned above, a comparison was made between the heights obtained from the GNSS measurements corrected with ITALGEO2005 and the antenna elevation (H in Figure 10) and the heights reported at the same time by the two tide gauges considered significant, "Ginostra" and "Palinuro" (hydrostatic level in Figure 10). The main trend of all three tracks (Figure 10) follows the same tidal repetitions. The gap in height between the ship and tide-gauge data is due to the different definitions of the reference altimetric systems, but the tidal effect is still predominant. There is also a systematic shift between the heights of the two tide gauges, which may be due to local reasons, such as a different average sea level, or, more likely, to a different materialisation during the installation of the tide gauge itself, but also to an imperfect connection between the tide gauges due to the difficulty of connecting the heights of an island (Ginostra) to the national altimetric system on the mainland.

5. Conclusions and further developments

Considering the fluctuations due to the physiological, continuous variation of the ship's attitude, the GNSS plot faithfully followed the trend of the tidal variations and highlighted the shifts between the GNSS data and the tide gauges due to the different materialisation of the relative reference systems. In fact, even if the installation height amplified the variability due to the continuous variation of the ship's attitude, the average trend of the GNSS plot was very similar to that of the neighbouring tide gauges considered significant. After the orthometric correction of the heights and the estimation of the antenna height, it was possible to compare the data also "in absolute" terms (without forgetting the different altimetric references), and this comparison showed a remarkable agreement between the heights measured by GNSS and the tide gauges, highlighting, at the same time, the effects of the different altimetric references. This experimentation highlighted the need to rethink, or update, marine altimetric datums, especially in view of the possible imminent diffusion of low-cost differential GNSS receivers in navigation. To study the possibility of highlighting any anomalies in operation, usually due to momentary dysfunctions of the satellite segment, an innovative "multi-constellation" approach, designed by the same research group, will be tested; this approach can facilitate the detection of such anomalies while highlighting the corrections to be made. If this is verified, it would be possible to make the positioning measurements made during navigation more "robust" (i.e. less prone to errors), significantly improving their reliability. This could also be verified by a comparison with the information provided by the ship's DGPS-INS navigation system.
Acknowledgement

This research was funded by the National Research Council (CNR) and carried out in the framework of the flagship project RITMARE (Ricerca ITaliana per il MARE). The authors would like to give special thanks to the technical and scientific crews of the THYGRAF mission for their support in the installation of the receiver and their continuous support in carrying out the measurement operations and recordings.

Figure 10. Orthometric heights for DOY046: comparison with the heights reported at the same time by the two tide gauges considered significant, "Ginostra" and "Palinuro".

References

[1] E. Alcaras, C. Parente, A. Vallario, The importance of the coordinate transformation process in using heterogeneous data in coastal and marine geographic information system, Journal of Marine Science and Engineering 8(9) (2020) 708. DOI: 10.3390/jmse8090708
[2] A. Mori, La cartografia ufficiale in Italia e l'Istituto Geografico Militare, nel cinquantenario dell'Istituto Geografico Militare (1872-1922); Istituto Geografico Militare, Stabilimento Poligrafico per l'Amministrazione della Guerra: Roma, Italy, 1922, 425 pp.
[3] L. Surace, I sistemi di riferimento geotopocartografici in Italia, Bollettino di Geodesia e Scienze Affini 57 (1996) pp. 181-234 (in Italian).
[4] R. Barzaghi, D. Carrion, M. Reguzzoni, G. A. Venuti, A feasibility study on the unification of the Italian height systems using GNSS-leveling data and global satellite gravity models, International Association of Geodesy Symposia (2016) pp. 281-288.
[5] R. Barzaghi, C. I. De Gaetani, B. Betti, The worldwide physical height datum project, Rendiconti Lincei 31 (2020) pp. 27-34.
[6] G. Fastellini, F. Radicioni, A. Stoppini, R. Barzaghi, D. Carrion, New active and passive networks for a support to geodetic activities in Umbria, Bollettino di Geodesia e Scienze Affini 67(3) (2008) pp. 203-227.
[7] L. E. Sjöberg, M. Bagherbandi, Quasigeoid-to-geoid determination by EGM08, Earth Science Informatics 5(2) (2012) pp. 87-91.
[8] G. Inghilleri, Topografia generale, Ed. UTET, Torino, Italy (1974) 1019 pp. (in Italian).
[9] Intergovernmental Oceanographic Commission (IOC), Manual on sea-level measurements and interpretation, Volume IV: An update to 2006, Paris, Intergovernmental Oceanographic Commission of UNESCO (2006) 78 pp. (IOC Manuals and Guides No. 14, Vol. IV; JCOMM Technical Report No. 31; WMO/TD No. 1339).
[10] D. Casalbore, A. Bosman, D. Casas, E. Martorelli, D. Ridente, Morphological variability of submarine mass movements in the tectonically-controlled Calabro-Tyrrhenian continental margin (southern Italy), Geosciences (Switzerland) 9(1) (2019) 43. DOI: 10.3390/geosciences9010043
[11] E. Petritoli, F. Leccese, High accuracy attitude and navigation system for an autonomous underwater vehicle (AUV), ACTA IMEKO 7(2) (2018) pp. 3-9. DOI: 10.21014/acta_imeko.v7i2.535
[12] E. Martorelli, F. Italiano, M. Ingrassia, L. Macelloni, A. Bosman, A. M. Conte, S. E. Beaubien, S. Graziani, A. Sposato, F. L. Chiocci, Evidence of a shallow water submarine hydrothermal field off Zannone Island from morphological and geochemical characterization: implications for Tyrrhenian Sea Quaternary volcanism, Journal of Geophysical Research: Solid Earth 121(12) (2016) pp. 8396-8414. DOI: 10.1002/2016jb013103
[13] Istituto Mareografico Nazionale. Online [accessed 19 May 2021] https://www.mareografico.it/?session=0s26747870448387k70758188d&syslng=ita&sysmen=-1&sysind=-1&syssub=-1&sysfnt=0&code=home
[14] S. Zollini, M. Alicandro, M. Cuevas-González, V. Baiocchi, D. Dominici, P. M. Buscema, Shoreline extraction based on an active connection matrix (ACM) image enhancement strategy, Journal of Marine Science and Engineering 8(1) (2020) 9. DOI: 10.3390/jmse8010009
[15] E. Alcaras, C. Parente, A. Vallario, Comparison of different interpolation methods for DEM production, International Journal of Advanced Trends in Computer Science and Engineering 8(4) (2019) pp. 1654-1659. DOI: 10.30534/ijatcse/2019/91842019
[16] D. Costantino, M. Pepe, G. Dardanelli, V. Baiocchi, Using optical satellite and aerial imagery for automatic coastline mapping, Geographia Technica 15(2) (2020) pp. 171-190. DOI: 10.21163/gt_2020.152.17
[17] U. Robustelli, V. Baiocchi, L. Marconi, F. Radicioni, G. Pugliano, Precise point positioning with single and dual-frequency multi-GNSS Android smartphones, CEUR Workshop Proceedings 2626 (2020).
[18] Regione Calabria. Online [accessed 19 February 2021] www.regione.calabria.it
[19] IGMI, Nota per il corretto utilizzo dei sistemi geodetici di riferimento all'interno dei software GIS, aggiornata a febbraio 2019 (in Italian). Online [accessed 22 November 2021] https://www.sitr.regione.sicilia.it/wp-content/uploads/nuova_nota_epsg.pdf
[20] T. Takasu, RTKLIB: open source program package for RTK-GPS, FOSS4G 2009, Tokyo, Japan, November 2, 2009.
[21] P. Dabove, M. Piras, K. N. Jonah, Statistical comparison of PPP solution obtained by online post-processing services, IEEE/ION Position, Location and Navigation Symposium (PLANS) (2016) pp. 137-143. DOI: 10.1109/plans.2016.7479693
[22] UNAVCO. Online [accessed 19 May 2021] https://www.unavco.org/software/data-processing/teqc/teqc.html
[23] Topcon, Topcon Tools 7.3 manual. Online [accessed 19 May 2021] https://www.topptopo.dk/files/manual/7010_0612_revl_topcontools7_3_rm.pdf
[24] K. Dawidowicz, G. Krzan, K. Świątek, Relative GPS/GLONASS coordinates determination in urban areas - accuracy analysis, in: Proceedings of the 15th International Multidisciplinary Scientific GeoConference SGEM 2015, Albena, Bulgaria, 18-24 June 2015, Volume 2 (2015) pp. 423-430. DOI: 10.5593/sgem2015/b22/s9.053
[25] M. Uradziński, M. Bakuła, Assessment of static positioning accuracy using low-cost smartphone GPS devices for geodetic survey points' determination and monitoring, Appl. Sci. 10 (2020) pp. 1-22. DOI: 10.3390/app10155308
[26] C. Goad, L. Goodman, A modified Hopfield tropospheric refraction correction model, in: Proceedings of the Fall Annual Meeting of the American Geophysical Union, San Francisco, CA, USA, 12-17 December 1974.
[27] G. Dardanelli, A. Maltese, C. Pipitone, A. Pisciotta, M. Lo Brutto, NRTK, PPP or static, that is the question. Testing different positioning solutions for GNSS survey, Remote Sens. 13 (2021) 1406. DOI: 10.3390/rs13071406
[28] Istituto Mareografico Nazionale, Rete Mareografica Nazionale. Online [accessed 19 February 2021] https://www.mareografico.it/?session=0s1758247488uc8488wub85&syslng=ita&sysmen=-1&sysind=-1&syssub=-1&sysfnt=0&code=live
[29] M. Pierozzi, Il sistema altimetrico italiano, la livellazione, lo zero idrografico ed i riflessi in ambito portuale (in Italian). Online [accessed 19 May 2021] http://www.assoporti.it/sites/www.assoporti.it/files/eventiesterni/pierozzi.pdf
[30] R. Barzaghi, A. Borghi, D. Carrion, G. Sona, Refining the estimate of the Italian quasi-geoid, Bollettino di Geodesia e Scienze Affini 3 (2007) pp. 145-160.
[31] V. Baiocchi, P. Camuccio, M. Zagari, A. Ceglia, S. Del Gobbo, F. Purri, L. Cipollini, F. Vatore, Development of a geographic database of a district area in open source environment, Geoingegneria Ambientale e Mineraria 151 (2017) pp. 97-101.
Biomechanics in crutch-assisted walking

ACTA IMEKO | www.imeko.org
ISSN: 2221-870X
December 2022, Volume 11, Number 4, pp. 1-5

Francesco Crenna1, Matteo
Lancini2, Marco Ghidelli3, Giovanni B. Rossi1, Marta Berardengo1

1 Measurement and Biomechanics Lab - DIME - Università degli Studi di Genova, Via Opera Pia 15 A, 16145 Genova, Italy
2 DSMC, Università degli Studi di Brescia, V.le Europa 11, 25121 Brescia, Italy
3 DII, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy

Section: Research paper

Keywords: biomechanical measurements; crutches; articular loads; force measurements

Citation: Francesco Crenna, Matteo Lancini, Marco Ghidelli, Giovanni B. Rossi, Marta Berardengo, Biomechanics in crutch-assisted walking, Acta IMEKO, vol. 11, no. 4, article 6, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-06

Section Editor: Eric Benoit, Université Savoie Mont Blanc, France

Received July 9, 2022; in final form December 11, 2022; published December 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This research was partially supported by the EU H2020 programme, project EUROBENCH, grant n. 779963, sub-projects BULLET, SLEDGE and FAET.

Corresponding author: Francesco Crenna, e-mail: francesco.crenna@unige.it

Abstract
Crutch-assisted walking is very common among patients with a temporary or permanent impairment affecting lower-limb biomechanics. Correct handling of the crutches is the way to avoid undesired side effects on lower-limb recovery or, in chronic users, diseases of the upper-limb joints. Active exoskeletons for spinal-cord-injured patients are commonly crutch assisted. In such cases, in which the upper limbs must be preserved, specific training in crutch use is mandatory. As a first step, a walking test setup was prepared to monitor healthy volunteers during crutch use. Measurements were performed using both a motion capture system and instrumented crutches measuring load distribution. In this paper, we present preliminary test results for subjects with a variety of anthropometric characteristics walking with parallel or alternate crutches, the so-called three-point and two-point strategies. The results show inter- and intra-subject variability and, as a first goal, the factors affecting crutch loads have been identified. In the future we aim to address errors in crutch use that could lead to delayed recovery or upper-limb problems in patients, giving valuable information to physicians and therapists to improve user training.

1. Introduction

Different diseases may require patients to use crutches in their daily life. While upper-limb fatigue is limited when a temporary impairment is considered, it may become an important issue for permanent impairments such as those due to stroke or multiple sclerosis. These situations are rather common and important in today's society. Stroke is the leading cause of movement disability in the USA and Europe [1]. People who suffer a stroke experience changes in strength, muscle tone, and neuromuscular coordination [2], the consequences of which are mobility, balance, and walking disabilities [3]. Similar symptoms are present in multiple sclerosis, along with fatigue and cerebellar involvement. Up to 10 % of adults suffer from reduced mobility because of conditions such as a central nervous system lesion that affects balance and gait.

On the other hand, walking is a fundamental human activity [4] and, if it is impaired, people prioritise it as a goal of treatment [5]. In Europe, walking aids such as crutches are the most prescribed tools in the case of central nervous system lesions [6] and, in a gait-rehabilitation framework, physical therapists guide patients in using crutches to better support weight, by reducing the magnitude of the load on the legs, and to improve balance, by increasing the body's base of support [7].

Moreover, crutch use is fundamental for people walking with the assistance of exoskeletons, for example after a spinal cord injury (SCI). Exoskeletons help in closing the gap toward a normal life for SCI people. Since, generally speaking, exoskeletons require the simultaneous use of a pair of crutches, their continuous daily usage requires attention to possible consequences such as shoulder pain [8]. On this basis, a pair of instrumented crutches was developed to measure both crutch load and orientation [8], [9], and they were integrated with an optoelectronic motion capture system, an anthropometric volume scanner, and a biomechanical model in the BULLET project [10], [11]. Figure 1 depicts the main concept of BULLET.

The BULLET biomechanical model is fed with kinematic data describing trajectories and accelerations [12], and with crutch force data describing movement dynamics. Eventually, the ground reaction forces under the feet can be included to operate the model in its complete, full-body version. To obtain upper-limb loads, the model also requires the subject's anthropometry. To obtain such detailed segment information, we generally refer to anthropometric tables, where values relative to the overall subject mass and height are reported [13], [14]. To compensate for the considerable subject-to-subject differences that are common among exoskeleton users, BULLET includes a volume scanner that measures segment volumes using a Kinect Azure camera [15], [16]. The scanner uses a set of subject images recorded in both RGB and ToF frames, and a segmentation software based on the biomechanical model definition, to obtain the segment volumes. Segment masses and inertias are then determined using the segment density values reported in tables and in the literature [14].
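The last step above (segment mass and inertia from the scanned volume and a tabulated density) can be sketched as follows. This is a minimal illustration, not the BULLET implementation: the density value and the uniform-cylinder segment approximation are assumptions made here for the example.

```python
def segment_mass(volume_m3: float, density_kg_m3: float) -> float:
    """Segment mass from scanned volume and tabulated tissue density."""
    return density_kg_m3 * volume_m3

def cylinder_transverse_inertia(mass: float, radius: float, length: float) -> float:
    """Moment of inertia of a uniform cylinder about a transverse axis
    through its centre of mass: I = m (3 r^2 + L^2) / 12."""
    return mass * (3.0 * radius**2 + length**2) / 12.0

# Illustrative forearm-like segment: 3.0 dm^3 scanned volume and an
# assumed density of 1100 kg/m^3 (order of magnitude of soft tissue).
m = segment_mass(0.003, 1100.0)
i = cylinder_transverse_inertia(m, 0.05, 0.30)
print(f"mass = {m:.2f} kg, inertia = {i:.4f} kg m^2")
```

In practice the inertia would come from segment-specific radii of gyration tabulated in the literature rather than from a geometric primitive; the cylinder is used here only to keep the sketch self-contained.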
The BULLET biomechanical model processes the input data to obtain limb loads, with special attention to the torques at the shoulders [17]. The focus, in this case, is on exoskeleton-assisted walking in the EUROBENCH project framework [11], but the approach is general [18] and, in this paper, some preliminary results are presented for healthy subjects walking with crutches and without exoskeletons. The main goal is to investigate differences in crutch-assisted gait between the three-point strategy (parallel crutch use) and the two-point strategy (alternate crutch movement). Therefore, some results regarding crutch kinematics and loads for the two strategies are presented here.

2. Experimental setup and protocol

The experimental setup includes an optoelectronic motion capture system, to reconstruct the full-body and crutch kinematics during gait, two force plates measuring the ground reaction forces under the left and right foot (both from BTS Bioengineering, Milan, Italy), and a pair of instrumented crutches measuring the force load and orientation on each side, as described in [9]. Calibration procedures were applied to both the optoelectronic and the force measurement systems [18] before each subject's test session. In the following, special attention will be given to the crutch kinematics obtained from the optoelectronic system and to the related crutch forces.

The experimental protocol requires placing a set of 39 markers on specific landmarks on the subject, plus three on each crutch. Only healthy subjects were involved in this experimentation. Since they had to use crutches simulating a spinal-cord-injury walking impairment, they trained for a short time before the test. To this purpose, subjects were asked to act as if they had problems moving and loading both legs, so a pre-training session was required to establish a proper crutch load and movement, according to the experimenter's instructions.
Moreover, during the training the subjects found the proper path and foot sequence along the corridor, so as to place the feet properly on the force plates without crutch interference. During the training and the following tests there was no real-time control of the crutch load, so it ultimately depended on the subject's voluntary behaviour. Then, after performing the calibration procedures for all the instruments, at least three repetitions of each walking condition were carried out: three-point gait with parallel crutches and two-point gait with alternate crutches, as shown in Figure 2.

A set of 14 subjects, 2 of them female, mean age 25.7 years, standard deviation 5.6 years, underwent the experimental protocol after signing an informed consent agreement. Finally, 124 valid tests were obtained, 65 of which in the two-point (alternate) gait condition.

3. Experimental results

The biomechanical model can operate in complete (whole-body) or partial (upper limbs only) mode. In the following we consider the whole-body version, which includes 18 segments to describe the subject and crutch movement, as shown in Figure 3 for an alternate (two-point) gait.

Figure 1. BULLET conceptual scheme.
Figure 2. Alternate and parallel gait strategies (x anteroposterior walking direction, y vertical, z mediolateral, left/right).
Figure 3. Alternate and parallel gait strategies.

Figure 4 presents an example of the three crutch force components, according to the biomechanical reference systems indicated in Figure 2 (x anteroposterior, y vertical, z mediolateral), for a three-point gait and for the left and right sides.
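Since the instrumented crutches measure load and orientation, the axial load can be resolved into the anatomical reference system of Figure 2. The sketch below does this for the sagittal plane only; the angle convention and the planar simplification are assumptions made for illustration, not the convention used by the authors.

```python
import math

def resolve_axial_force(f_axial: float, theta_sagittal_rad: float):
    """Resolve an axial crutch load into anteroposterior (x) and
    vertical (y) components for a crutch tilted by theta in the
    sagittal plane (theta = 0 means a vertical crutch)."""
    fx = f_axial * math.sin(theta_sagittal_rad)
    fy = f_axial * math.cos(theta_sagittal_rad)
    return fx, fy

# A 100 N axial load on a crutch tilted 30 degrees forward:
fx, fy = resolve_axial_force(100.0, math.radians(30.0))
print(f"Fx = {fx:.1f} N, Fy = {fy:.1f} N")
```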
Using the force measurement it is possible to define the crutch contact on the ground (Figure 3, lower graph) and consequently to identify the initial and final contact angles on the two most important planes describing the movement: the sagittal (x, y) and the frontal (z, y) planes. Besides that, it is possible to compute the angular crutch range of motion (ROM) and the maximum and RMS load force during crutch-ground contact.

Since data recording was limited to the few gait cycles near and on the force plates, we excluded from these computations the runs in which some data was missing: for example, when no initial or final crutch-ground contact was recorded, and it was consequently not possible to evaluate the parameters over a complete contact phase. In the following, all the trained subjects are considered, independently of their ability to simulate the impairment.

The results from the 124 valid tests are summed up in Table 1 in terms of mean values and relative standard deviations of the maximum crutch forces normalised by the subject's weight. Note that the variability includes both inter-subject and intra-subject contributions. The large relative standard deviation suggests that, besides subject behaviour, other differences are present. For example, the gait condition might affect crutch loading. Moreover, even if the results are normalised by the subject's weight, other anthropometric differences might play an important role in crutch load.

On this basis, the biomechanical model can evaluate the shoulder torques. Figure 5 and Figure 6 present the L/R shoulder forces and torques in relation to the crutch contact for a parallel gait. It is worth noting that the model we are considering is purely mechanical, so the muscle action is summed up in the torque at the shoulder joints, while the forces do not include the reactions due to muscle actions applied at the tendon insertion points.
Even if this is certainly an approximation, the proposed approach is free from any assumption regarding tendon anthropometry and muscle force behaviour, which in our specific case could be critical, since we are considering injured subjects walking with crutches.

Figure 4. L/R crutch relative forces in the three directions: parallel gait.

Table 1. Maximum crutch vertical load: results summary.

            Right                              Left
            Mean               Relative        Mean               Relative
            (% of sbj weight)  std. dev.       (% of sbj weight)  std. dev.
Parallel    30                 32 %            34                 30 %
Alternate   21                 43 %            24                 40 %

Figure 5. L/R shoulder forces for parallel gait.
Figure 6. L/R shoulder torques for parallel gait.
Figure 7. R crutch vertical maximum relative forces for alternate/parallel movement. Boxes represent 25 %-75 % intervals, red lines median values.

4. Discussion

As mentioned in the introduction, the focus here is on the two crutch use modalities and on the percentage of the subject's weight that is moved from the lower limbs toward the crutches. As shown in Figure 4, the main load behaviour is very similar for the left and right crutches, while the mediolateral component (z, orange in Figure 4) is opposed, due to the opposed crutch contact angle in the mediolateral plane.

The overall data set can be divided according to the two gait conditions (parallel/alternate) and, since we have several repetitions (about 3) for a rather consistent set of subjects (14), we can perform a multiple-way analysis of variance. The analysed variable can be selected among the gait parameters we have measured. We can follow a crutch-centred approach, considering both crutch kinematics and dynamics, or a subject-centred approach, considering the shoulder internal loads in relation to gait behaviour. As an example of the first approach, we consider the maximum and RMS vertical load relative to the subject's overall weight.
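These crutch-centred quantities can be sketched as below, assuming a simple force threshold to delimit the ground-contact phase; the threshold value and the sample data are illustrative, not taken from the paper.

```python
import math

def contact_load_stats(f_vertical_n, body_weight_n, threshold_n=20.0):
    """Maximum and RMS vertical crutch load during ground contact,
    expressed as a percentage of the subject's body weight.
    The contact phase is delimited by a simple force threshold."""
    contact = [f for f in f_vertical_n if f > threshold_n]
    if not contact:
        return 0.0, 0.0          # incomplete contact: no statistics
    max_pct = 100.0 * max(contact) / body_weight_n
    rms = math.sqrt(sum(f * f for f in contact) / len(contact))
    rms_pct = 100.0 * rms / body_weight_n
    return max_pct, rms_pct

# Illustrative vertical force trace (N) for one contact, subject weight 700 N:
max_pct, rms_pct = contact_load_stats([0, 0, 50, 100, 80, 0], 700.0)
print(f"max = {max_pct:.1f} %, rms = {rms_pct:.1f} %")
```

The early-return branch mirrors the exclusion rule described above: runs without a complete contact phase yield no parameters.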
As factors for the analysis, besides the mentioned gait condition, we consider kinematic parameters, such as the angular ROM of the crutch around the mediolateral axis, and subject anthropometric characteristics such as weight, height, or the body mass index (BMI), defined as the ratio between mass and squared height. In Table 2 we present an example of the ANOVA results, including F values.

Considering the maximum vertical load on the right and left crutches, the F values confirm that the gait condition is significant with a probability < 10^-4. The box plot in Figure 7 shows an evident effect of the gait strategy on the load levels. There is also evidence of a large variability, probably due to the protocol, which requires the subjects to simulate an impairment (see Section 2), and to the absence of an online verification of this imposed behaviour. Moreover, there is evidence that, even when working with a load normalised by the subject's weight, the subject's BMI is significant (p < 10^-4), indicating that the way crutches are loaded is not simply related to the subject's mass. However, the box plot in Figure 8 shows a less evident load dependency on the three BMI categories, defined as follows: BMI < 21.5 kg/m², 21.5 kg/m² ≤ BMI < 25 kg/m², and BMI ≥ 25 kg/m². This aspect deserves some attention in future data analysis, to investigate the most significant subject anthropometric characteristic.

These results affect the shoulder loads obtained from the biomechanical model. As shown in Figure 6, the torque time behaviour is similar for the right and left limbs and presents its maximum value, as expected, for the torque around the mediolateral axis (z). On this basis, a good example is the analysis of variance of the maximum shoulder z torque. The alternate/parallel walking condition and the BMI still have the main effect on this shoulder torque (p < 10^-4), as shown in the box plot in Figure 9 for the walking condition.
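The BMI grouping and the gait-condition test can be sketched as follows. For compactness this is a plain one-way ANOVA F statistic for a single factor, a simplification of the multiple-way analysis reported in Table 2, and all numeric data are illustrative.

```python
def bmi_category(mass_kg: float, height_m: float) -> str:
    """Assign one of the three BMI classes used in the analysis."""
    bmi = mass_kg / height_m**2
    if bmi < 21.5:
        return "BMI < 21.5"
    if bmi < 25.0:
        return "21.5 <= BMI < 25"
    return "BMI >= 25"

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    n_tot = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_tot
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = len(groups) - 1, n_tot - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

print(bmi_category(70.0, 1.75))               # 21.5 <= BMI < 25
print(anova_f([[30, 28, 32], [21, 19, 23]]))  # 30.375 (illustrative loads)
```

A large F for the gait-condition factor, as in Table 2, means the between-condition variance dominates the residual variance.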
While the subject's BMI is significant, as it was for the crutch loads, the other anthropometric characteristics are no longer significant at the 5 % probability level of the Fisher test, as confirmed by the box plot in Figure 10 for the subject's height.

Figure 8. R crutch vertical maximum relative forces for the three BMI categories. Boxes represent 25 %-75 % intervals, red lines median values.

Table 2. ANOVA results for L/R crutch RMS vertical load.

                    Mean squares   F value   Probability > F
Moving condition    3.7            25.3      0
Subj. BMI           5.3            18.1      0
Subj. mass          1.9            6.5       2·10^-3
Subj. height        0.2            0.7       0.49
Crutch ROM          2.1            2.9       1.5·10^-2
Residual error      31

Figure 9. L and R shoulders maximum torque around the mediolateral axis z for the two walking conditions. Boxes represent 25 %-75 % intervals, red lines median values.

Figure 10. L and R shoulders maximum torque around the mediolateral axis z as a function of subject height categories. Boxes represent 25 %-75 % intervals, red lines median values.

5. Conclusions

The paper has presented the BULLET project approach to the evaluation of shoulder loads when walking with crutches. A set of experimental data obtained on healthy subjects has been considered to demonstrate the potential of the proposed approach. A preliminary synthesis of these results is obtained by applying the analysis of variance. The ANOVA has shown that the parallel or alternate walking condition is very important as regards both the crutch forces, which can be measured directly, and the shoulder loads, which are determined through a biomechanical model. The subjects' anthropometric characteristics affect the results even when these are normalised by subject mass or weight; moreover, the subjects' BMI is not the only significant parameter, since mass still gives a significant contribution.
Of course, these considerations have to be limited to this specific experimentation, since it was conducted on healthy subjects only, with a request to simulate an SCI walking impairment, and the subjects' behaviour was not controlled. Nevertheless, the results demonstrate the potential of the presented approach. In particular, when applied to injured subjects, it provides a set of information that will be useful to therapists and subjects to improve their training while preserving the health of the articulations.

Acknowledgement

We thank S. Lanino, M. Pitto and F. Risoldi for the fruitful collaboration.

References

[1] D. Lloyd-Jones, R. Adams, M. Carnethon (+ 32 more authors), Heart disease and stroke statistics 2009 update, Circulation, 27 January 2009, 119(3):e21-181. doi: 10.1161/circulationaha.108.191261
[2] N. D. Neckel, N. Blonien, D. Nichols, J. Hidler, Abnormal joint torque patterns exhibited by chronic stroke subjects while walking with a prescribed physiological gait pattern, J. NeuroEngineering Rehabil. 5, 19 (2008). doi: 10.1186/1743-0003-5-19
[3] F. Tamburella, J. C. Moreno, D. S. Herrera Valenzuela, I. Pisotta, M. Iosa, F. Cincotti, D. Mattia, J. L. Pons, M. Molinari, Influences of the biofeedback content on robotic post-stroke gait rehabilitation, J. NeuroEngineering Rehabil. 16, 95 (2019). doi: 10.1186/s12984-019-0558-0
[4] P. R. Culmer, P. C. Brooks, D. N. Strauss, D. H. Ross, M. C. Levesley, R. J. O'Connor, B. B. Bhakta, An instrumented walking aid to assess and retrain gait, IEEE/ASME Transactions on Mechatronics, vol. 19, no. 1, Feb. 2014, pp. 141-148. doi: 10.1109/tmech.2012.2223227
[5] R. C. Holliday, C. Ballinger, E. D. Playford, Goal setting in neurological rehabilitation: patients' perspectives, Disability and Rehabilitation, 29:5, 2007, pp. 389-394. doi: 10.1080/09638280600841117
[6] F. Rasouli, K. B. Reed, Walking assistance using crutches: a state of the art review, J. Biomech., vol. 98, 2 January 2020, 109489.
doi: 10.1016/j.jbiomech.2019.109489
[7] H. Bateni, B. E. Maki, Assistive devices for balance and mobility: benefits, demands, and adverse consequences, Arch. Phys. Med. Rehabil., vol. 86, issue 1, January 2005, pp. 134-145. doi: 10.1016/j.apmr.2004.04.023
[8] E. Sardini, M. Serpelloni, M. Lancini, Wireless instrumented crutches for force and movement measurements for gait monitoring, IEEE Trans. Instrum. Meas., vol. 64, no. 12, Dec. 2015, pp. 3369-3379. doi: 10.1109/tim.2015.2465751
[9] M. Lancini, M. Serpelloni, S. Pasinetti, E. Guanziroli, Healthcare sensor system exploiting instrumented crutches for force measurement during assisted gait of exoskeleton users, IEEE Sensors Journal, vol. 16, no. 23, 1 Dec. 2016, pp. 8228-8237. doi: 10.1109/jsen.2016.2579738
[10] Benchmarking Upper Limbs Loads on Exoskeleton Testbeds (BULLET), EU H2020 programme, project EUROBENCH (grant n. 779963, sub-project BULLET). Online [Accessed 17 December 2022] https://eurobench2020.eu/developing-the-framework/benchmarking-upper-limbs-loads-on-exoskeleton-testbeds-bullet/
[11] EUROBENCH, European robotic framework for bipedal locomotion benchmarking. Online [Accessed 17 December 2022] https://eurobench2020.eu
[12] F. Crenna, G. B. Rossi, M. Berardengo, Filtering biomechanical signals in movement analysis, Sensors, 21, 13, 2021, 4580, 17 pp. doi: 10.3390/s21134580
[13] R. Dumas, L. Cheze, J.-P. Verriest, Adjustments to McConville et al. and Young et al. body segment inertial parameters, Journal of Biomechanics, vol. 40, issue 3, 2007, pp. 543-553. doi: 10.1016/j.jbiomech.2006.02.013
[14] D. A. Winter, Biomechanics and Motor Control of Human Movement, John Wiley & Sons: New York, NY, USA, 2009, ISBN: 978-0-470-39818-0
[15] G. Kurillo, E. Hemingway, Mu-Lin Cheng, Louis Cheng, Evaluating the accuracy of the Azure Kinect and Kinect v2, Sensors 22 (7), 2022, article n. 2469, 22 pp. doi: 10.3390/s22072469
[16] N. Covre, A. Luchetti, M. Lancini, S. Pasinetti, E. Bertolazzi, M.
De Cecco, Monte Carlo-based 3D surface point cloud volume estimation by exploding local cubes faces, Acta IMEKO 11 (2022) 2, pp. 1-9. doi: 10.21014/acta_imeko.v11i2.1206
[17] F. Crenna, G. B. Rossi, M. Berardengo, A global approach to assessing uncertainty in biomechanical inverse dynamic analysis: mathematical model and experimental validation, IEEE Transactions on Instrumentation and Measurement, vol. 70, 2021, art. no. 1006809, pp. 1-9. doi: 10.1109/tim.2021.3072113
[18] F. Crenna, G. B. Rossi, A. Palazzo, Measurement of human movement under metrological controlled conditions, Acta IMEKO 4 (2015) 4, pp. 48-56. doi: 10.21014/acta_imeko.v4i4.281

Skin potential response for stress recognition in simulated urban driving

ACTA IMEKO | www.imeko.org
ISSN: 2221-870X
December 2021, Volume 10, Number 4, pp. 117-123

Pamela Zontone1, Antonio Affanni1, Alessandro Piras1,
Roberto Rinaldo1

1 Polytechnic Department of Engineering and Architecture, University of Udine, Via delle Scienze 206, 33100 Udine, Italy

Section: Research paper

Keywords: stress recognition; electrodermal activity; skin potential response; machine learning; 3D driving simulator

Citation: Pamela Zontone, Antonio Affanni, Alessandro Piras, Roberto Rinaldo, Skin potential response for stress recognition in simulated urban driving, Acta IMEKO, vol. 10, no. 4, article 20, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-20

Section Editors: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy

Received July 23, 2021; in final form December 7, 2021; published December 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Pamela Zontone, e-mail: pamela.zontone@uniud.it

Abstract
In this paper, we address the problem of possible stress conditions arising in car drivers and affecting their driving performance. We apply various machine learning (ML) algorithms to analyse the stress of subjects while driving in an urban area in two different situations: one with cars, pedestrians and traffic along the course, and the other characterised by the complete absence of any of these possible stress-inducing factors. To evaluate the presence of a stress condition we use two skin potential response (SPR) signals, recorded from each hand of the test subjects, and process them through a motion artifact (MA) removal algorithm, which reduces the artifacts that might be introduced by the hand movements. We then compute some statistical features starting from the cleaned SPR signal. A binary-classification ML algorithm is then fed with these features, giving as output a label that indicates whether a time interval belongs to a stress condition or not. Tests were carried out in a laboratory at the University of Udine, where a car driving simulator with a motorised motion platform has been arranged. We show that the use of one single SPR signal, along with the application of ML algorithms, enables the detection of possible stress conditions while the subjects are driving, in the traffic and no-traffic situations. As expected, we observe that the test individuals are less stressed in the situation without traffic, confirming the effectiveness of the proposed slightly invasive system for the detection of stress in drivers.

1. Introduction

Paying attention to drivers' mental wellbeing is crucial to improving safety in road traffic. If not properly treated, stress can lead drivers to engage in risky behaviours [1] and therefore to car accidents [2]. A dangerous situation occurs whenever stress is caused by the driving activity itself, as happens to professional [3] and regular drivers [4], or by personal issues, as highlighted in [5] for economic reasons, or reasons of any other kind, as described in [6]. A hidden Markov model (HMM) system to assess the probability of assuming certain behaviours given the current emotion is developed in [7]. In [8], a survey describing the methods used to recognise emotions in drivers is also provided.

The development of stress detection systems follows two main paths [9]. One is based on physiological signals, including electrodermal activity (EDA), electroencephalogram (EEG), blood volume pulse (BVP), electromyography (EMG), skin temperature (SKT) and respiration (RESP) [10], [11]. The second one relies on physical manifestations of stress: data describing human behaviour could, for example, be collected through the Global Positioning System [12] and facial expressions [13]. A common approach is to identify a stress condition with the aid of machine learning (ML) and deep learning (DL) techniques, as in [14]-[16], where the properties of EEG, ECG and EDA signals, respectively, are exploited for classification purposes. In [17], different kernel configurations for support vector machines (SVMs) are tested and then applied to electromyographic signals. An automated way to find the optimal kernel has been used in [18], where the kernel for a deep multiple kernel support vector machine (D-MKL-SVM) is selected through a multiple-objective genetic algorithm (MOGA). The classifier is then used on ECG data. Different physiological measurements, the EDA and ECG signals, are combined in [19], where features are automatically extracted from short signal sections and classified by a multimodal convolutional neural network (CNN). A physical approach can be found in [20]. The driver's expressions and eye movements are recorded by near-infrared (NIR) camera sensors, and aggressive driving behaviour is then classified by a CNN. The method proposed in [21] combines both physiological (electrodermal activity) and behavioural (facial) measurements, and fuses together different data types in order to build a sensor fusion emotion recognition (SFER) system, improving the classification performance.

In previous works, the authors carried out some experiments with the aid of a driving simulator platform, recreating a highway [22], [23] and inducing stress by adding obstacles along the course. We collected the ECG in addition to the skin potential response (SPR) data, and we employed ML and DL algorithms to detect stress in the test individuals. In [24], the authors compared the different physiological responses in manual and autonomous driving tests. In [25], we also examined the possible changes in the physiological responses when different car settings are considered.
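The per-interval statistical features mentioned in the abstract can be sketched as follows. The particular feature set (mean, standard deviation, maximum, RMS) and the window length are illustrative assumptions, not the exact choices made by the authors.

```python
import math

def spr_features(window):
    """Simple statistical features of one SPR time interval:
    mean, population standard deviation, maximum and RMS value."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    peak = max(window)
    rms = math.sqrt(sum(x * x for x in window) / n)
    return mean, std, peak, rms

# Split a cleaned SPR record into fixed-length windows and extract
# one feature vector per window (here, toy 4-sample windows):
signal = [0.1, 0.2, 0.3, 0.4, 1.0, 1.2, 0.9, 1.1]
vectors = [spr_features(signal[i:i + 4]) for i in range(0, len(signal), 4)]
print(vectors[0])
```

Each feature vector would then be passed to the previously trained binary classifier, which outputs the "stress"/"no stress" label for the corresponding interval.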
one of the main contributions of this paper is the analysis and comparison of the performance of different ml models (extending the work in [26]), which we demonstrated to be valuable in detecting stress episodes in previous experiments, but now considering the stress caused by urban traffic. in this work, moreover, we simplify the system and consider one signal only, i.e., the spr signal taken from the hands of the driver. in this way, we propose a minimally invasive setup, which can be arranged with little discomfort for the driver. in detail, we log spr values from the two hands of individuals while they drive. we apply a motion artifact (ma) removal algorithm to suppress the artifacts, caused by the hand movements on the steering wheel, that can alter the signal. this algorithm outputs a single spr signal, without artifacts, which is fed to an ml classifier previously trained on a larger dataset. the classifier marks time intervals with a "stress" or "not stress" flag. the individuals were told to drive normally, in an urban setting simulated by the city car driving 3d software simulator. the experiment was set up so as to present two different situations. one situation recreates an urban area with no traffic and empty streets, while the second recreates an urban area complete with traffic, with cars and pedestrians. the findings of this study validate the success of the supervised learning algorithm in its stress detection task. we also demonstrate that spr signals, recorded with minimally invasive and simple sensors, along with ml classifiers, can detect stress in a reliable way. in the end, we observe that, as expected, stress is generally higher in the urban environment filled with traffic. the paper is structured as follows. in the next section we present the fundamental blocks of our proposed system. section 3 introduces the experimental setup.
section 4 discusses the results of our comparative study, where different ml algorithms are used for driver stress recognition. finally, some conclusions are drawn in section 5.

2. proposed system

the proposed measurement system for stress detection in car drivers is shown in figure 1. each subject under test wears the spr sensors on the wrists and is seated at the driving simulator available in the biosens lab at the university of udine. the simulator is composed of a moving platform with two axes (dof reality professional p2), a steering wheel with pedals and gearbox (logitech g29), and a curved screen. for each subject, two simulations are performed on the same city route under two different conditions: "no traffic" and "traffic". "no traffic" means that there are no other cars or pedestrians on the road; "traffic" means that other cars and pedestrians are inserted in the simulation, together with some aggressive events (e.g., lane invasions by other cars or unexpected pedestrian road crossings), as also described in section 3. during the entire route planned in the simulations, we acquired the spr signals by positioning the sensors shown in figure 2 on the subjects' wrists, like a smartwatch. the differential voltages between the palm and the back of each hand (vp-vb in figure 2) are properly conditioned and acquired by a 12-bit a/d converter on board a dsp, at a sample rate of 200 sa/s. data are then sent using a low-power wifi module operating at a baud rate of 115.2 kbps. the detailed description of the sensors and their characterization is provided in [27], [28].

figure 1. block scheme of our proposed stress detection measurement system.
figure 2. electrodes arrangement and spr sensor block diagram.

summarizing the architecture and specifications, the sensor analog front end is a band-pass differential amplifier (with an input impedance of 100 mω), with a maximum input range of ±10 mv
and bandwidth in the [0.08, 8] hz range. the accuracy of the spr acquisition, after characterization, resulted in 0.15 % of full scale (corresponding to 30 μv), and the resolution is 4.9 μv. the sensors are battery operated with a single lipo cell with a capacity of 850 mah, ensuring ten hours of transmission, since the current consumption is 85 ma. the sensors form a body network where one spr sensor acts as slave (henceforth sensor 1) and the other acts as master (henceforth sensor 2). the slave sensor sends packets to the master, and the latter aligns the received data packets with the data acquired by its own a/d converter. for consumption reasons, the slave can send packets every 40 ms at minimum. hence, the slave dsp builds a packet composed of eight samples acquired every 5 ms and sends them to the master. the module is configured as a station (sta) with a static ip and operates as a udp client. the gateway address is configured to be the master address. figure 3 shows how the packets are built by the dsp on the slave before transmission. the a/d module provides a 12-bit datum every 5 ms. each byte sent via uart to the master must be identified with a unique code, since the master must recognize whether the incoming datum is the upper or lower byte of the slave sample. therefore, the dsp of the slave builds the lower (upper) byte of information using the six least (most) significant bits of the a/d sample, adding one bit to mark it as the lower (upper) byte (the l or h bit in figure 3, respectively). the data packets received by the master are dismantled and realigned as in figure 4. the master adds a unique header and builds a packet composed of 18 bytes containing the information on spr1 and spr2. the packet is then sent to a laptop every 40 ms.
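the l/h byte framing described above can be sketched as follows. this is a minimal illustration, not the authors' firmware: the text only states that one marker bit is added to the six data bits, so the exact bit position of the l/h flag (here bit 6 of each uart byte) is our assumption.

```python
# sketch of the slave-side byte framing of figure 3. assumed layout:
# bit 6 of each uart byte is the l/h marker (0 = lower byte,
# 1 = upper byte) and bits 0-5 carry the six least/most significant
# bits of the 12-bit a/d sample.

def pack_sample(sample: int) -> tuple[int, int]:
    """split a 12-bit a/d sample into the two uart bytes (l, h)."""
    assert 0 <= sample < 4096
    l_byte = sample & 0x3F                   # six lsbs, l marker = 0
    h_byte = 0x40 | ((sample >> 6) & 0x3F)   # six msbs, h marker = 1
    return l_byte, h_byte

def unpack_sample(l_byte: int, h_byte: int) -> int:
    """master-side realignment: rebuild the 12-bit sample."""
    assert (l_byte & 0x40) == 0 and (h_byte & 0x40) == 0x40
    return ((h_byte & 0x3F) << 6) | (l_byte & 0x3F)
```

the slave would emit eight such byte pairs (40 ms of data at 5 ms per sample) in each packet; the marker bit lets the master re-synchronise if a byte is lost.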
the data transmitted from the master are acquired by a dedicated graphical user interface developed in the .net environment, and are then processed by a motion artifact (ma) removal algorithm described in [22], [29]. the two spr signals acquired from the left and right hands of the subjects are processed by the ma algorithm in order to output a single signal that better represents the activity of the autonomic nervous system (ans). the spr signals are typically affected by motion artifacts due to pressure on the sensors during hand movements. ideally, the two spr signals should have approximately the same pattern, since they represent the response to the same stimulus, initiated by the sympathetic response of the ans. the ma removal algorithm is based on two assumptions: first, that a motion artifact enhances the local energy of the signal; second, that motion artifacts rarely appear simultaneously in the spr signals of both hands. the output of the ma removal block is thus obtained by computing a weighted combination of the two input spr signals, evaluating their local energy and giving more weight to the less perturbed signal, i.e., the one, between the two input signals, with the lower local energy value. in our experiments (see [22]), we found that motion artifacts rarely appear simultaneously in both hands: they mostly appear during the steering action, which is predominantly performed by one hand (as also discussed in [24]). after being processed through the ma removal block, the cleaned spr signal is sent to various ml classification algorithms, which had previously been trained on a larger dataset. this dataset, including 3195 intervals for each of the stress and non-stress classes, is the result of a previous experiment carried out at the vi-grade company (vi-grade.com), using their professional dynamic simulator.
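a minimal numerical sketch of this weighting idea (not the authors' exact algorithm from [22], [29]): the local energy of each hand's signal is estimated over a sliding window, and each signal's combination weight is proportional to the other signal's local energy, so the less perturbed signal dominates. the window length is an assumption (1 s at the 200 sa/s sensor rate).

```python
import numpy as np

def local_energy(x: np.ndarray, win: int = 200) -> np.ndarray:
    """sliding-window energy (win = 200 samples = 1 s at 200 sa/s, assumed)."""
    kernel = np.ones(win) / win
    return np.convolve(x ** 2, kernel, mode="same")

def ma_removal(spr_left: np.ndarray, spr_right: np.ndarray) -> np.ndarray:
    """weighted combination: more weight to the signal with lower local energy."""
    e_l = local_energy(spr_left)
    e_r = local_energy(spr_right)
    eps = 1e-12                       # avoids division by zero
    w_l = e_r / (e_l + e_r + eps)     # left weight grows when RIGHT is noisy
    w_r = e_l / (e_l + e_r + eps)
    return w_l * spr_left + w_r * spr_right
```

with this rule, a pressure artifact on one hand raises that hand's local energy and pushes its weight towards zero, so the output stays close to the clean hand's signal.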
more specifically, in that case, 18 subjects manually drove for 67 km along a highway, trying to cross 12 obstacles positioned at prearranged points along the track. these obstacles were: double lane change (right to left or left to right), tire labyrinth, sponsor block (from left or from right), slalom (from left or from right), lateral wind (from left or from right), jersey lr, tire trap, stop. we divided the cleaned spr signal into 15 s time interval blocks, after normalization to equalize the signal amplitudes among subjects, and for each block we computed five statistical features: the interval variance, the energy, the mean absolute value, the mean absolute derivative, and the maximum absolute derivative. each interval overlapped the previous one by 10 s, so we could derive a new feature vector every 5 s. in particular, we could define exactly the location and span of each obstacle, during which the individuals were supposed to be in a stress state. in this way, we could assign a flag equal to "1" to all of the intervals falling within or intersecting the stress episodes, and a flag equal to "0" to all the others, i.e., the ones falling outside these stress episodes. finally, after classification of the test set, we applied a re-label step to address the issue of isolated and anomalous "1" flags [22]. we were able to compare the results of an svm, a random forest (rf) classifier, a decision tree (dt), and a k-nearest neighbours (k-nn) classifier, which provided a similar accuracy of about 73 %, with only the k-nn presenting a slightly lower value (68 %). all of the ml classifiers were implemented using matlab (r2017a), and a 10-fold cross-validation phase was considered for all of these algorithms. bayesian optimization was also used during the training procedure for all of the classifiers (for hyperparameter tuning). a radial basis function (rbf) kernel was employed for the svm model (see also [23]). 3.
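the windowing and the five statistical features can be sketched as follows. this is a minimal python illustration (the original implementation was in matlab); the 200 sa/s rate comes from the sensor description in section 2, and the commented classifier call is a scikit-learn placeholder, not the bayesian-optimised setup of the paper.

```python
import numpy as np

FS = 200             # sensor sample rate, sa/s
WIN = 15 * FS        # 15 s analysis interval
HOP = 5 * FS         # new interval every 5 s (10 s overlap)

def spr_features(spr: np.ndarray) -> np.ndarray:
    """one row of the five statistical features per 15 s interval."""
    rows = []
    for start in range(0, len(spr) - WIN + 1, HOP):
        w = spr[start:start + WIN]
        d = np.diff(w)
        rows.append([
            np.var(w),            # interval variance
            np.sum(w ** 2),       # energy
            np.mean(np.abs(w)),   # mean absolute value
            np.mean(np.abs(d)),   # mean absolute derivative
            np.max(np.abs(d)),    # maximum absolute derivative
        ])
    return np.asarray(rows)

# training would then follow the text, e.g. with scikit-learn
# (placeholder hyperparameters, not the bayesian-optimised ones):
#   clf = sklearn.svm.SVC(kernel="rbf")
#   scores = sklearn.model_selection.cross_val_score(clf, X, y, cv=10)
```

a 60 s recording yields ten feature vectors, one every 5 s, matching the labelling granularity described above.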
experimental setup

as already stated, the test was carried out using a driving simulator, consisting of a 3d driving simulator software and a motorized platform, located in a lab at the university of udine. the experiment employed 10 test subjects, students of the university of udine. they were asked to drive along a predefined track in an urban area simulated by the city car driving software. the software enables the creation of an urban area with a nearby motorway, complete with car traffic, with the option of adding different stress-inducing factors, such as pedestrians crossing and vehicles unexpectedly changing lanes (also coming from the opposite direction) or braking suddenly.

figure 3. construction of the packets on the slave for transmission.
figure 4. realignment of the packets on the master.

these stressors do not occur exactly at the same location and time. however, the type and multiplicity of these stress-inducing events is similar between different simulations. the complete track is displayed in figure 5. the green solid line represents the motorway, and the orange solid line represents the city route. the subjects were asked to drive in two different situations: in the first there is a complete lack of traffic, with no cars and no people, whereas the second situation has car and pedestrian traffic. in this second situation, the traffic volume is rather low, but the behaviour of the traffic was set to "very aggressive", so that cars and pedestrians act in a more unpredictable and temperamental way, with cars intruding into the subject's path or pedestrians crossing the road at forbidden points. one half of the individuals started with the no traffic situation and then proceeded with the traffic situation (i.e., subjects 1, 2, 3, 4, and 10), while the remaining half followed the opposite order (i.e., subjects 5, 6, 7, 8, and 9).
completing the track in figure 5 takes on average 10 minutes, with a similar time required to complete the motorway and urban sections.

4. experimental results

all of the spr data collected from the 10 test subjects, after driving along the course in the urban area recreated by city car driving, are first cleaned through the ma removal block. these output signals are then scaled to make a meaningful comparison possible, i.e., for each subject and for each driving condition, we standardize the corresponding signal using the mean and standard deviation resulting from the concatenation of the signals coming from the two driving conditions, with traffic and no traffic, for that subject. these standardized signals are finally fed to an ml algorithm. more specifically, the same five spr features introduced in the previous section are extracted from each 15 s interval. each new interval starts 5 s after the start of the previous one (so each interval overlaps the previous one by 10 s). the various ml classifiers introduced in the previous section are used here only in the test phase. in the end, we can look at all of the labels that each classifier gives as output and count the labels equal to "1" or "0", according to the intervals labelled as "stress" or "non-stress", for each subject and each driving situation (with and without traffic). table 1 displays the percentage of labels equal to "1" (stress) for the svm, rf, dt, and k-nn classifiers, for each subject, over the entire test course in the traffic and no traffic conditions. as an example, for the rf classifier, figure 6 shows the graphical representation of the values reported in table 1. as we can deduce from all of the classifiers' results, the no traffic situation appears to be less stressful than the traffic situation for all of the subjects except subject 4. in addition, there are some

figure 5.
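the per-subject scaling step can be sketched as follows (a minimal illustration of the standardisation described above; the function name is ours):

```python
import numpy as np

def standardize_pair(spr_traffic: np.ndarray,
                     spr_no_traffic: np.ndarray):
    """scale both runs of one subject with the mean and standard
    deviation of their concatenation, so the amplitudes of the two
    driving conditions remain directly comparable."""
    both = np.concatenate([spr_traffic, spr_no_traffic])
    mu, sigma = both.mean(), both.std()
    return (spr_traffic - mu) / sigma, (spr_no_traffic - mu) / sigma
```

standardising each run separately would hide amplitude differences between the two conditions; using the joint statistics preserves them.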
graphical representation of the course: it comprises a motorway and an urban route.

table 1. percentage of intervals marked as "stress", for each classifier and for each subject in the two driving conditions (traffic and no traffic). the difference between the two conditions (traffic - no traffic) is also shown.

svm / subject      1      2      3      4      5      6      7      8      9     10   mean
traffic        51.82  68.00  58.86  57.47  41.94  60.00  80.33  94.49  87.60  96.43  69.69
no traffic     22.55  60.31  55.56  65.22   5.04   1.63  75.00  54.62  38.98  91.87  47.08
difference     29.26   7.69   3.31  -7.75  36.89  58.37   5.33  39.87  48.62   4.56  22.62

rf / subject       1      2      3      4      5      6      7      8      9     10   mean
traffic        56.93  72.00  59.49  62.07  44.35  64.44  77.05  93.70  89.26  92.14  71.14
no traffic     26.47  65.65  54.70  66.09   6.72   0.81  72.58  57.98  37.29  89.43  47.77
difference     30.46   6.35   4.79  -4.02  37.63  63.63   4.47  35.72  51.97   2.71  23.37

dt / subject       1      2      3      4      5      6      7      8      9     10   mean
traffic        56.20  70.40  60.76  63.22  42.74  64.44  84.43  96.06  90.91  96.43  72.56
no traffic     27.45  65.65  57.26  74.78   5.88   3.25  76.61  57.14  41.53  92.68  50.22
difference     28.75   4.75   3.49 -11.56  36.86  61.19   7.81  38.92  49.38   3.75  22.33

k-nn / subject     1      2      3      4      5      6      7      8      9     10   mean
traffic        56.93  70.40  58.86  63.79  42.74  63.70  75.41  92.91  87.60  90.71  70.31
no traffic     27.45  67.94  54.70  76.52   5.04   2.44  75.00  55.46  40.68  79.67  48.49
difference     29.48   2.46   4.16 -12.73  37.70  61.26   0.41  37.45  46.93  11.04  21.82

figure 6. total number of intervals labelled as "stress" by the rf classifier.

individuals for whom the difference between the positive labels in the two situations is higher (e.g., subjects 5, 6, and 9), while for others this difference is lower (e.g., subjects 3 and 7). we can try to explain this in different ways.
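as a quick numerical check, the svm row of table 1 is internally consistent: the mean over the ten per-subject percentages reproduces the reported "mean" column, and the difference of the two means matches the reported 22.62 (up to rounding).

```python
# per-subject "stress" percentages for the svm classifier (table 1)
svm_traffic = [51.82, 68.00, 58.86, 57.47, 41.94,
               60.00, 80.33, 94.49, 87.60, 96.43]
svm_no_traffic = [22.55, 60.31, 55.56, 65.22, 5.04,
                  1.63, 75.00, 54.62, 38.98, 91.87]

mean_t = sum(svm_traffic) / len(svm_traffic)        # expected ~69.69
mean_nt = sum(svm_no_traffic) / len(svm_no_traffic)  # expected ~47.08
mean_diff = mean_t - mean_nt                         # expected ~22.62
```

the same check can be repeated for the rf, dt and k-nn rows.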
maybe the pressure of taking a test changed the expected stress reaction, or the outcome could be influenced by which of the two simulated situations the subjects experienced first (traffic then no traffic, or the other way around). still, for 90 % of the subjects the resulting "stress" interval count is higher in the traffic situation. in figure 7 we show the output of the rf classifier for subject 9, where the difference between the positive labels in the traffic and no traffic situations is among the largest positive ones observed across all of the classifiers. for the sake of simplicity, we only plot the positive labels, using a grey stem located at the end of the corresponding 15 s classified spr interval. the labels corresponding to the non-stress case are not included in the figure. the cleaned and normalized spr signals of the subject in the two different situations (with traffic and no traffic) are also shown as a blue continuous line. the output of the dt classifier for the same subject is displayed in figure 8 (here the difference is slightly lower than the one obtained with the rf). in figure 9 the output of the k-nn classifier for subject 4 is reported instead. this is the only subject for whom the difference between the positive labels in the traffic and no traffic scenarios is negative for all of the classifiers; this negative difference is largest in the k-nn case. in figure 10 we display a last example, considering the svm classifier's output for subject 3. as can be noticed, the classifiers identify well the increased stress level throughout the entire simulations.

figure 7. output of the rf classifier for subject 9 in the two situations (without traffic and with traffic).
figure 8. output of the dt classifier for subject 9 in the two situations (without traffic and with traffic).

5.
conclusions

in this paper, we described a stress detection system that allows us to identify stress in car drivers. our system classifies overlapping 15 s signal blocks, and can therefore provide a classification output every 5 s, with a small delay in a real-time application and a good localization in time. the test subjects drove in a simulated urban environment, utilizing a car driving simulator located in our biosens lab at the university of udine. we logged two spr signals, one from each hand, and we processed these signals through an ma removal block. we computed some features from the resulting single signal and sent them as input to different ml algorithms, comparing the final results. we showed that, regardless of the ml algorithm used, all of the subjects except one appeared more stressed when driving in an urban area prearranged with traffic. therefore, through the use of a low-complexity spr data acquisition sensor and the application of ml algorithms, we could effectively recognize stress states arising in drivers.

figure 9. output of the k-nn classifier for subject 4 in the two situations (without traffic and with traffic).
figure 10. output of the svm classifier for subject 3 in the two situations (without traffic and with traffic).

references

[1] l. bowen, s. l. budden, a. p. smith, factors underpinning unsafe driving: a systematic literature review of car drivers, transportation research part f: traffic psychology and behaviour 72 (2020) pp. 184-210. doi: 10.1016/j.trf.2020.04.008
[2] g. miyama, m. fukumoto, r. kamegaya, m. hitosugi, risk factors for collisions and near-miss incidents caused by drowsy bus drivers, international journal of environmental research and public health 17(12) (2020). doi: 10.3390/ijerph17124370
[3] l. r. hartley, j.
el hassani, stress, violations and accidents, applied ergonomics 25(4) (1994) pp. 221-230. doi: 10.1016/0003-6870(94)90003-5 [4] y. amichai-hamburger (edited by), technology and psychological well-being, cambridge university press, 2009, online isbn 9780511635373. doi: 10.1017/cbo9780511635373 [5] d. l. kitara, o. karlsson, the effects of economic stress and urbanization on driving behaviours of boda-boda drivers and accidents in gulu, northern uganda: a qualitative view of drivers, the pan african medical journal 36(47) (2020). doi: 10.11604/pamj.2020.36.47.21382 [6] e. bosch, k. ihme, u. drewitz, m. jipp, m. oehl, why drivers are frustrated: results from a diary study and focus groups, european transport research review 12(52) (2020) pp. 1-13. doi: 10.1186/s12544-020-00441-7 [7] y. liu, x. wang, differences in driving intention transitions caused by driver’s emotion evolutions, international journal of environmental research and public health 17(19) (2020). doi: 10.3390/ijerph17196962 [8] s. zepf, j. hernandez, a. schmitt, w. minker, r. w. picard, driver emotion recognition for intelligent vehicles: a survey, acm computing surveys (csur) 53(3) (2020) pp. 1-30. doi: 10.1145/3388790 [9] s. greene, h. thapliyal, a. caban-holt, a survey of affective computing for stress detection: evaluating technologies in stress detection for better health, ieee consumer electronics magazine 5(4) (2016) pp. 44-56. doi: 10.1109/mce.2016.2590178 [10] m. moghimi, r. stone, p. rotshtein, affective recognition in dynamic and interactive virtual environments, ieee transactions on affective computing 11(1) (2020), pp. 45-62. doi: 10.1109/taffc.2017.2764896 [11] c. maaoui, a. pruski, f. abdat, emotion recognition for humanmachine communication, proc. of the 2008 ieee/rsj international conference on intelligent robots and systems (iros), nice, france, 22-26 september 2008, pp. 1210-1215. doi: 10.1109/iros.2008.4650870 [12] j. li, j. lv, b. oh, z. lin, y. j. 
yu, identification of stress state for drivers under different gps navigation modes, ieee access 8 (2020) pp. 102773-102783. doi: 10.1109/access.2020.2998156 [13] su-jing wang, wen-jing yan, xiaobai li, guoying zhao, chunguang zhou, xiaolan fu, minghao yang, jianhua tao, microexpression recognition using color spaces, ieee transactions on image processing 24 (12) (2015), pp. 6034-6047. doi: 10.1109/tip.2015.2496314 [14] hanna becker, julien fleureau, philippe guillotel, fabrice wendling, isabelle merlet, laurent albera, emotion recognition based on high-resolution eeg recordings and reconstructed brain sources, ieee transactions on affective computing 11(2) (2017), pp. 244-257. doi: 10.1109/taffc.2017.2768030 [15] bosun hwang, jiwoo you, thomas vaessen, inez myin-germeys, cheolsoo park, byoung-tak zhang, deep ecgnet: an optimal deep learning framework for monitoring mental stress using ultra short-term ecg signals, telemedicine and e-health 24(10) (2018), pp. 753-772. doi: 10.1089/tmj.2017.0250 [16] f. al machot, a. elmachot, m. ali, e. al machot, k. kyamakya, a deep-learning model for subject-independent human emotion recognition using electrodermal activity sensors, sensors 19(7) (2019), art. no. 1659. doi: 10.3390/s19071659 [17] o. vargas-lopez, c. a. perez-ramirez, m. valtierra-rodriguez, j. j. yanez-borjas, j. p. amezquita-sanchez, an explainable machine learning approach based on statistical indexes and svm for stress detection in automobile drivers using electromyographic signals, sensors 21(9) (2021), art. no. 3155. doi: 10.3390/s21093155 [18] k. t. chui, m. d. lytras, r. w. liu, a generic design of driver drowsiness and stress recognition using moga optimized deep mkl-svm, sensors 20(5) (2020), art. no. 1474. doi: 10.3390/s20051474 [19] j. lee, h. lee, m. shin, driving stress detection using multimodal convolutional neural networks with nonlinear representation of short-term physiological signals, sensors 21(7) (2021), art. no. 2381. 
doi: 10.3390/s21072381 [20] rizwan ali naqvi, muhammad arsalan, abdul rehman, ateeq ur rehman, woong-kee loh, anand paul, deep learning-based drivers emotion classification system in time series data for remote applications, remote sensing 12(3) (2020), art. no. 587. doi: 10.3390/rs12030587 [21] geesung oh, junghwan ryu, euiseok jeong, ji hyun yang, sungwook hwang, sangho lee, sejoon lim, drer: deep learning–based driver’s real emotion recognizer, sensors 21(6) (2021), art. no. 2166. doi: 10.3390/s21062166 [22] pamela zontone, antonio affanni, riccardo bernardini, alessandro piras, roberto rinaldo, fabio formaggia, diego minen, michela minen, carlo savorgnan, car driver's sympathetic reaction detection through electrodermal activity and electrocardiogram measurements, ieee transactions on biomedical engineering 67(12) (2020) pp. 3413-3424. doi: 10.1109/tbme.2020.2987168 [23] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, supervised learning techniques for stress detection in car drivers, advances in science, technology and engineering systems journal 5(6) (2020), pp. 2229. doi: 10.25046/aj050603 [24] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, stress evaluation in simulated autonomous and manual driving through the analysis of skin potential response and electrocardiogram signals, sensors 20(9) (2020), art. no. 2494. doi: 10.3390/s20092494 [25] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, emotional response analysis using electrodermal activity, electrocardiogram and eye tracking signals in drivers with various car setups, proc. of the 2020 28th european signal processing conference (eusipco), amsterdam, nl, 18-21 january 2021, pp. 1160-1164. doi: 10.23919/eusipco47968.2020.9287446 [26] p. zontone, a. affanni, a. piras, r. 
rinaldo, stress recognition in a simulated city environment using skin potential response (spr) signals, proc. of the 2021 ieee international workshop on metrology for automotive (metroautomotive), bologna, italy, 12 july 2021, pp. 135-140. doi: 10.1109/metroautomotive50197.2021.9502867 [27] a. affanni, dual-channel electrodermal activity and an ecg wearable sensor for measuring mental stress from the hands, acta imeko 8(1) (2019), pp. 56-63. doi: 10.21014/acta_imeko.v8i1.562 [28] a. affanni, wireless sensors system for stress detection by means of ecg and eda acquisition, sensors 20(7) (2020), art. no. 2026. doi: 10.3390/s20072026 [29] a. affanni, a. piras, r. rinaldo, p. zontone, dual channel electrodermal activity sensor for motion artifact removal in car drivers' stress detection, proc. of the 2019 ieee sensors applications symposium (sas), sophia antipolis, france, 11-13 march 2019, pp. 1-6. doi: 10.1109/sas.2019.8706023
extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-7
acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1

bachina surendra babu1, satish kumar ramaraj2, karuganti phani rama krishna3, pinjerla swetha4
1 department of ece, bapatla engineering college, bapatla, guntur district, andhra pradesh, 522102, india
2 department of medical electronics, sengunthar college of engineering, tiruchengode-637205, tamilnadu, india
3 department of ece, pvp siddhartha institute of technology, vijayawada-520007, andhra pradesh, india
4 department of ece, malla reddy college of engineering and technology, hyderabad-500100, telangana, india

section: research paper
keywords: routing; buffer zone; measurement; rerouting time; virtual zone
citation: bachina surendra babu, satish kumar ramaraj, karuganti phani rama krishna, pinjerla swetha, extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing, acta imeko, vol. 11, no. 1, article 26, march 2022, identifier: imeko-acta-11 (2022)-01-26
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form march 4, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: b. surendra babu, e-mail: surendrabachina@gmail.com

1. introduction

a mobile adhoc network (manet) is a dynamic network made up of several nodes. an ad hoc network is a self-contained system that operates without the assistance of a centralized authority.
routing is a difficult operation owing to the mobility of the nodes. the changing ad hoc architecture causes frequent route breakups, and route failure has an impact on network connectivity. furthermore, the nodes rely on limited battery power, and a lack of power in any node can lead to network partitioning [1]. routing is the main function that guides communication across extensive networks: the basic duty of every routing protocol is to find and preserve routes to the required network destinations. ad hoc network routing protocols are divided into two types: proactive and reactive. with a proactive protocol, routes are maintained in advance, so a node may send data to a certain destination as soon as the need arises. a reactive routing protocol, on the other hand, determines a route as and when it is requested by a network node [2]. this article is about mobile adhoc networks that employ a proactive routing system. because of their dynamic topology, link breaks are a typical feature of mobile adhoc networks. in such circumstances, the routing protocol must seek alternate routes [3]. the rerouting interval is the period before new paths are identified, and the rerouting time is the duration of this interval [4]. stale routes exist across the broken connection during the rerouting interval. rerouting can take place only when the routing protocol detects that the connection is broken; in fact, detecting the connection break accounts for a considerable portion of the rerouting time [5]. in summary, the rerouting time due to connection breakdowns is determined by the time required to complete the following processes:
• link break detection;
• network-wide link-state notification of new paths;
• draining of all stale packets from the output queue.
the basic duty of every routing protocol is to find and preserve routes to the required network destinations [6], [7].
abstract
mobile adhoc network (manet) routing methods must deal with connection breakdowns caused by frequent node movement and a dynamic network topology. in these cases, the protocol must discover alternate routes, and rerouting time refers to the lag that occurs during this retransmission. researchers have proposed many ways to reduce the rerouting time. one such technique is buffer zone routing (bzr), which divides a node's transmission region into a safe zone adjacent to the node and a hazardous zone towards the end of the broadcast range. this technique, however, has some gaps and restrictions, such as the optimal dimensions of the buffer zone, a rise in hop length, network stress, and so on. this study offers a method to improve and extend buffer zone communication by grouping nodes inside the buffer zone into virtual zones based on their energy level. when the routing decisions are made quickly, the energy consumption of the nodes is minimized. in the safe area of extended bzr, transfer time is reduced and routing efficiency is increased. the proposed method solves issues in the present algorithm and fills holes in it, decreasing the time required for rerouting in manets.

ad hoc network routing protocols are separated into two classes: proactive and reactive [8]. this article is about manets that employ a proactive routing system. because of their dynamic topology, link breaks are a typical feature of manets, and in such circumstances the routing protocol must seek alternate routes. the rerouting interval is the period before new paths are identified, and the rerouting time is the duration of this interval [9]. stale routes exist across the broken connection during the rerouting interval. rerouting can take place only when the routing protocol detects that the connection is broken.
In fact, detecting the connection break accounts for a considerable portion of the rerouting time [10]. In summary, the rerouting time due to connection breakdowns is determined by the time needed to complete the processes listed in the introduction. In this research work, nodes running BZR interact and update their routing tables once they are live. In addition to the neighbour information, the virtual zone information of the node in EBZR is also updated in the routing table. This data is then used when making routing decisions. Finally, routing can be refined by continuously measuring the available zones, interference levels, etc. The remainder of the paper is structured as follows. Section 2 reviews related work. Section 3 describes the background of the optimised link state routing (OLSR) protocol, the zone routing algorithm (ZRA), and link breakage and rerouting time. The proposed method is presented in Section 4, along with a comprehensive discussion of its benefits, and Section 5 adds a broader discussion. Simulation results are provided in Section 6, with performance comparison charts for standard OLSR, the buffer zone algorithm and the virtual zone algorithm. Lastly, Section 7 presents the conclusion and outlines areas of future work.

2. Related work

This article is a continuation of [11], which examined mobile ad hoc network routing protocols and metrics; there, rerouting time was identified as a significant performance measure. Other authors [12] offer an adaptive retry limit technique to decrease the rerouting time; they also identify queueing as one of the key variables having a significant influence on rerouting time, and the recommended remedy was implemented and tested. G. Jisha and S. Dhanya [13] offer a review of different zone-based routing enhancements in mobile ad hoc networks. Beyond the fundamental protocol implementation, their article surveys ten further enhancements proposed for the zone routing protocol (ZRP).
Their article also analyses these enhancements and recommends the applications best suited to them [14]; the present study implements a virtual zone-based routing enhancement technique. The authors of [15] present a novel zone-based routing algorithm for MANETs; to discover numerous stable routes between the source and destination nodes, the proposed technique integrates the concept of a buffer zone with the OLSR protocol. A related line of work offers a trust-based computation method for improving the security of zone-based routing, testing the performance of a vulnerable ZRP when many network nodes misbehave and drop data packets. OLSR is extended in [16] to preserve node energy; in the QualNet simulator, the performance of the proposed system is evaluated in terms of control overheads, energy consumption, end-to-end latency and packet delivery ratio (PDR). The authors of [17] investigate the queueing problem and propose methods to decrease queue stagnation. The authors of [18] investigate the adaptive retry limit approach to eliminating the queueing problem in mobile ad hoc networks; they also propose a solution based on asynchronous invocation, which addresses the gaps in the adaptive retry limit approach. As a result, queueing is eliminated and the rerouting time in mobile ad hoc network routing is reduced. Simulation findings show that when buffer zone communication is implemented in OLSR, the rerouting time decreases; compared to regular OLSR, the addition of a transmission buffer zone improves throughput, and the use of a buffer zone is advantageous in both low- and high-traffic situations [19].

3. Background

3.1. OLSR

OLSR is proactive in nature, and routes to all network destinations are available immediately at each node. It is an optimisation of the pure link-state routing protocol. The method relies on the multipoint relay (MPR) concept: a multipoint relay minimises the size of the control messages [20].
The usage of MPRs also reduces the flooding of control traffic: only multipoint relays forward broadcast control messages, which has the benefit of lowering the number of transmitted broadcast control messages. OLSR has two main functions, neighbour detection and topology distribution, and with these two each node constructs routes to all known destinations. The hello and topology control (TC) messages are the two most significant messages in OLSR. 1) Hello messages: each node transmits hello messages on a regular basis to allow link sensing, neighbour discovery and MPR selection signalling. The suggested hello message emission interval is 2 seconds and the neighbour hold time is 6 seconds: a neighbour is deemed lost 6 seconds after its last hello message was received. 2) TC messages: the link-state (TC) messages are generated and transmitted through the network by every MPR, based on the information collected via hello messages. For TC messages, the suggested emission interval is 5 seconds and the hold time is 15 seconds.

3.2. Zone routing algorithm

This method relies on classifying neighbour nodes as safe or unsafe, using them as relay nodes if they are safe and avoiding them as relay nodes if they are unsafe. In addition, where possible, traffic to unsafe nodes within the transmission region of the transmitting node should be routed through safe nodes. The signal power of the hello packets may be used as a criterion to identify which nodes, and which zones at different mobility speeds, are regarded as safe or unsafe. To support surrounding nodes in the routing of their unsafe neighbours, the zone status needs to be included in each link entry of the hello packets and communicated to the other neighbours. A packet must not be routed to a relay node that has an unsafe neighbour as its destination. The routing table of every node is first built on the basis of the safe-zone nodes.
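A minimal sketch of the two ideas above, under stated assumptions: the RSSI threshold, the graph model and all names are illustrative, not taken from the paper. Neighbours are classified as safe or unsafe from hello signal power, and routes are built through safe relays first, falling back to unsafe relays only for destinations that would otherwise be unreachable.

```python
from collections import deque

SAFE_RSSI_DBM = -70.0  # hypothetical signal-power threshold for the safe zone

def classify(hello_rssi: dict) -> set:
    """Neighbours whose hello packets arrive above the threshold are safe."""
    return {n for n, rssi in hello_rssi.items() if rssi >= SAFE_RSSI_DBM}

def build_routes(links: dict, safe: set, source: str) -> dict:
    """Return destination -> next hop, preferring safe-zone relays.

    links: node -> set of neighbours (symmetric). First-found routes are
    kept, mirroring the rule that already specified routes are not modified.
    """
    def bfs(relay_ok: set) -> dict:
        next_hop, seen, frontier = {}, {source}, deque([source])
        while frontier:
            u = frontier.popleft()
            for v in links.get(u, ()):
                if v in seen:
                    continue
                seen.add(v)
                next_hop[v] = v if u == source else next_hop[u]
                if v in relay_ok:  # only allowed nodes are expanded as relays
                    frontier.append(v)
        return next_hop

    routes = bfs(safe)                        # pass 1: safe relays only
    for dest, hop in bfs(set(links)).items():
        routes.setdefault(dest, hop)          # pass 2: unsafe relays if needed
    return routes

safe = classify({"B": -60.0, "C": -65.0, "D": -80.0})  # D is in the unsafe zone
links = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B"},
         "D": {"A", "E"}, "E": {"D"}}
routes = build_routes(links, safe, source="A")
# E is reached through the unsafe relay D only because no safe path exists
```

The two-pass structure mirrors the rule that dangerous-zone nodes are used for forwarding only when full connectivity is otherwise impossible.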
If this leads to partitioning, routes that cross nodes in the dangerous zone are included in the routing table. The buffer zone routing theory applies primarily to the nodes in the safe zone, as illustrated in Figure 1; the nodes in the dangerous zone of BZR must only be used for forwarding if full connectivity is impossible without them. The neighbour set defines the two-hop neighbour and topology sets, and no route modifications are allowed on already specified routes: if a node is already present as a destination in the routing database, a newly identified route to the same destination is ignored, even if it has fewer hops than the first route. Figure 2 depicts the phases of the buffer zone routing method.

3.3. Rerouting time

One typical feature of ad hoc systems is that connections may break because of variations in radio conditions, node mobility and other network dynamics. In these cases, the routing protocol is meant to locate other paths. The period before new paths are discovered is termed the rerouting interval, and its duration is called the rerouting time. Take the example represented in Figure 3, i.e. the period from the disruption of the link between A and C (1) to the re-establishment of the connection via the intermediate node B (2).

3.4. Queueing and rerouting time

Different conditions affect the rerouting time in a MANET, and queueing is one such scenario. When the offered load exceeds the transmission rate, packets are queued in each layer; the packets are then processed sequentially, which increases the rerouting time. This problem has a layered solution, offering an adaptive retry limit at the MAC layer: the retry limit is reduced by 1 for every packet with the same MAC destination that is lost by reaching the retry limit, until every packet has been transmitted once.
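The retry-limit adaptation just described (decrement on each drop towards the same MAC destination, restore after a success) can be sketched as follows; the class name and the floor of one retry are illustrative assumptions:

```python
DEFAULT_RETRY_LIMIT = 7  # standard maximum MAC retries (value from table 1)

class AdaptiveRetry:
    """Per-destination MAC retry limit that shrinks while a link is failing."""

    def __init__(self):
        self.limit = {}  # MAC destination -> current retry limit

    def current_limit(self, dest: str) -> int:
        return self.limit.get(dest, DEFAULT_RETRY_LIMIT)

    def on_drop(self, dest: str) -> None:
        # A packet to `dest` was lost after exhausting its retries:
        # lower the limit so the remaining stale packets drain faster.
        self.limit[dest] = max(1, self.current_limit(dest) - 1)

    def on_success(self, dest: str) -> None:
        # A successful transmission restores the original limit.
        self.limit.pop(dest, None)

mac = AdaptiveRetry()
for _ in range(3):             # three consecutive drops towards one next hop
    mac.on_drop("B")
print(mac.current_limit("B"))  # 4
mac.on_success("B")
print(mac.current_limit("B"))  # 7
```

Shrinking the limit lets packets queued behind a broken link drain in fewer retransmission attempts, which is exactly where the rerouting time is spent.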
Once a packet is transferred successfully, the retry limit is restored to its original value, equal to the former standard. Two parameters significantly determine the latency associated with queueing, namely the size of the queue and the retry limit. A large transmission queue can hold too many stale packets with outdated routing information, and too many wasted retransmission attempts can result in a high retry count for these stale packets. The combination of these factors can significantly extend the rerouting time. Figure 4 presents an overview of each layer's protocol stack and queues.

Figure 1. Communication area zones of a node: (a) zones, (b) safe node, (c) unsafe node. Figure 2. The zone routing algorithm. Figure 3. Rerouting time. Figure 4. Linux protocol stack.

3.5. Factors affecting rerouting time

Although many elements, such as node energy levels, transmission ranges and network structure, affect rerouting times, node speed and traffic load have the greatest impact on this characteristic of mobile ad hoc networks. Node speed: reducing the node speed minimises the amount of link breakage; the rerouting time grows as the node speed increases. At lower node speeds, the risk of a link break caused by a neighbour moving out of the communication zone is lower. This makes it possible to set the threshold range higher to gain the same advantage while reducing the downside of greater path lengths. Traffic load: if the whole network is overloaded by a huge number of unneeded broadcasts and an amplified packet loss, successful transmissions are left with a reduced part of the overall network capacity.
In the event of link breakdowns, the combination of partitioning and decreased retransmissions increases the throughput at the lower levels, but the segmentation means that packets will probably only travel a few hops, exacerbating the unfairness between short-path and long-path traffic [20]. In short, the rerouting time is directly proportional to the node speed and traffic load: increasing either increases the rerouting time.

3.6. Link breaks and rerouting time

Link breaks are the cause of the queueing scenario. When a node loses the link to its neighbour, the routing protocol searches for the shortest alternative available path. To avoid such failures, these connection disruptions have to be identified considerably sooner; the details of such a preventive mechanism are outside the scope of this paper. The standard technique for a routing protocol to detect connection breakdowns is through missed polling packets [21]. The OLSR hello packets are sent between one-hop neighbours at a set frequency to provide information about neighbourhood links and a method for detecting link breaks. When no hello packet is received from a neighbour within a specific time interval (for example, within 6 seconds, the suggested OLSR value), the neighbour is considered unreachable and the link to it is judged broken and invalid. Another technique to identify connection failures is to delegate detection to a mechanism in the underlying link layer, which must then explicitly notify the routing protocol of a link break. The drawback of this link layer notification (LLN) method is the extra implementation complexity; the benefit is that the link layer usually detects connection breaks earlier. The buffer zone algorithm (BZA) detects link breaks through missed hello packets.
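The hello-based detection just described can be sketched with the intervals quoted above (2 s hello emission, 6 s timeout); the class and method names are illustrative, not taken from any OLSR implementation:

```python
HELLO_INTERVAL_S = 2.0   # suggested OLSR hello emission interval
HOLD_TIME_S = 6.0        # neighbour considered lost after this much silence

class NeighbourTable:
    def __init__(self):
        self.last_hello = {}  # neighbour -> time its last hello was received

    def on_hello(self, neighbour: str, now: float) -> None:
        self.last_hello[neighbour] = now

    def broken_links(self, now: float) -> set:
        """Neighbours whose hello has been missing longer than the hold time."""
        return {n for n, t in self.last_hello.items() if now - t > HOLD_TIME_S}

table = NeighbourTable()
table.on_hello("B", now=0.0)
table.on_hello("C", now=4.0)
# At t = 7 s, B has been silent for 7 s (> 6 s): the link to B is judged broken.
print(table.broken_links(now=7.0))  # {'B'}
```

The sketch makes the cost visible: a break occurring just after B's last hello is only detected up to a full hold time later, which is why detection dominates the rerouting time.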
It is vital to recognise the connection break in a timely manner, because two negative effects occur between the physical connection breakdown and its detection by the routing protocol. First, packets marked with the now unreachable next-hop address remain in the interface queue; these packets never reach their destination and are lost. Second, these packets are sent numerous times to the MAC layer until they are discarded, stealing valuable medium time from packets sent by other nodes to a valid next-hop address.

4. Proposed solution

4.1. Analysis

There is a difference between the buffer zone solution and regular OLSR in the average number of hops between a source and a destination. With the buffer zone solution, the number of hops per path is increased, as it prefers nodes in the safe zone as relay nodes. The increase in hop count is the biggest drawback of the buffer zone solution. First, the longer paths increase the number of transmissions required for the same end-to-end streams and so reduce the overall capacity available per traffic stream. Second, as the paths get longer, the risk increases that the topology information held by the transmitting nodes is incorrect: there is a greater likelihood that the optimal route (both the actual one and the one presented in the tables) changes while the packet travels between the communicating nodes. This increases the danger of a packet loop or a significant detour. An increased average path length, an increased packet detour risk and an increased packet loop risk all add to the likelihood of time-to-live (TTL) exhaustion. Moreover, too large a buffer zone leads to an unnecessarily large mean number of hops between the MANET node pairs and a greater risk of network partitions. The BZA also lacks a way to predict the ideal size of the buffer zone according to criteria such as network load and node mobility.
Finally, the buffer zone technique may be enhanced and extended to classify a neighbour as safe or unsafe using criteria other than distance [22]. These shortcomings and gaps allow the buffer zone algorithm to be enhanced or extended.

4.2. Extending the buffer zone algorithm

This section describes the key characteristics of the extended BZR mechanism. The nodes interact and update their routing tables once they are live; every node then knows its own one-hop, two-hop and multi-hop neighbours. In addition to the neighbour information, the virtual zone information of the node is also updated in the routing table, and this data is then used when making routing decisions. Virtual zones are dynamically constructed based on node similarities; virtual zone generation can be carried out whenever the topology changes, or periodically. The grouping of the nodes within the safe zone into virtual zones is presented in Figure 5. Whenever a node leaves a virtual zone, it broadcasts this via the hello packet to its neighbours, which then update this information in their routing databases. As a result, the nodes are informed of their virtual zone at any given time.

4.3. Exploring the benefits of the virtual zone algorithm

As all nodes hold the virtual zone information, along with the other information in their routing tables, routing is optimised within the virtual zone, thereby reducing the overall number of hops and the rerouting time to a minimum. This solution combines the benefits of the zone routing algorithm with the benefits of virtual zone induction. Apart from the distance criterion employed in the zone routing algorithm alone, the energy level of the nodes is also assessed in this approach.

Figure 5. Nodes in the virtual safe zone.

The energy consumption of the nodes is also reduced, as routing decisions are made quickly.
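As a sketch of how safe-zone neighbours could be grouped into virtual zones by residual energy: the band boundaries and labels below are illustrative assumptions, since the paper does not prescribe specific thresholds.

```python
# Hypothetical energy bands (fraction of full battery), checked high to low.
ENERGY_BANDS = [(0.66, "high"), (0.33, "medium"), (0.0, "low")]

def virtual_zone(residual_energy: float) -> str:
    """Map a node's residual energy (0.0-1.0) to a virtual zone label."""
    for threshold, label in ENERGY_BANDS:
        if residual_energy >= threshold:
            return label
    return "low"

def group_safe_zone(nodes: dict) -> dict:
    """nodes: id -> residual energy of safe-zone neighbours.

    Returns virtual zone label -> list of member nodes."""
    zones = {}
    for node, energy in nodes.items():
        zones.setdefault(virtual_zone(energy), []).append(node)
    return zones

zones = group_safe_zone({"B": 0.9, "C": 0.4, "E": 0.1})
print(zones)  # {'high': ['B'], 'medium': ['C'], 'low': ['E']}
```

A relay preference for the "high" zone would then realise the idea that routing decisions favour energy-rich safe-zone nodes.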
This method increases routing efficiency and minimises transfer time in the safe area. The best scenario is when the source and destination nodes are inside the same virtual zone and both are in the safe zone; if only a neighbour or one of the multi-hop neighbours of the source or destination is in the virtual zone, the scenario is average.

5. Discussion

Having understood the upgrade to the present buffer zone, the virtual zones may also be extended to the unsafe zone, which would ultimately cut the rerouting time further. There are, however, a few caveats. First, although the virtual zone reduces rerouting time, maintaining the routing table and the virtual zone(s) is an overhead. Secondly, introducing virtual zones in the buffer zone increases network traffic: the nodes communicate their virtual zone location using hello packets, so the number of packets delivered increases in order to maintain the virtual zone information, and the packet size also grows because it carries the virtual zone information. Given the node energy levels, the virtual zones are currently limited to the safe zone; extending them into the dangerous zone without hurting performance is left open for future work. Another consideration is that the creation of virtual regions eventually leads towards the ZRP, which seeks to overcome the constraints of proactive and reactive protocols by incorporating the best features of both and can therefore be characterised as a hybrid proactive-reactive routing protocol, anticipating that most communication between nodes in a MANET stays within the zone, where packets are transmitted proactively. The proposed method, in contrast, is simply an extension of the existing zone routing algorithm and is limited to shortest-path, proactive routing; the improvement aims to reduce the rerouting time only. This differs from the ZRP, which employs both reactive and proactive methods.

6.
Simulation results

This section presents the simulation setup and then documents the findings; comparison charts between standard OLSR, the buffer zone algorithm and the virtual zone algorithm are presented. The parameter settings used throughout the simulations are provided in Table 1. The simulations were conducted using version 2.34 of the ns-2 network simulator. The optimised link state routing protocol (OLSR) [9] provides multi-hop routing, and the IEEE 802.11 scheme is used as the MAC layer. The nodes were split into two equally large groups, and each group node relayed packets to the nodes of the other group. The traffic type was UDP at a constant bit rate with 64-byte packets. Figure 11 demonstrates the goodput results at a relatively high node speed of 10 m/s and a low overall traffic load of 50 kbps. Comparing the results of the virtual zone procedure at thresholds below 250 m with those of the standard OLSR buffer zone algorithm (which equals the BZA with a threshold of 250 m), the increase is 84 % − 75 % = 9 % over the buffer zone algorithm at 220 m. Figure 6 shows that the main benefit of the virtual zone algorithm is that the retransmission (RET) loss is significantly decreased compared to the buffer zone RET loss and regular OLSR, without this leading to further packets being lost because of a missing route (NRTE loss), as shown in Figure 7. Most link breakages are then avoided, because packets can still be received by nodes beyond the discard zone, which reply with an acknowledgement before the neighbour moves beyond the transmission radius.

Figure 6. RET loss (loss caused by exceeding the maximum MAC retransmissions). Figure 7. NRTE loss (loss caused by the lack of a route). Figure 8. TTL loss (loss caused by exhausted time to live). Table 1. Parameter settings.

Thus, the RET
loss, also for the virtual zone approach, is decreased, as seen in Figure 6.

Table 1. Parameter settings:
• radio propagation: TwoRayGround
• interface queue: FIFO with DropTail and PriQueue
• interface queue size: 300 packets
• antenna: ideal OmniAntenna
• OLSR hello interval: 2 s
• OLSR hello timeout: 6 s
• OLSR TC interval: 5 s
• OLSR TC timeout: 15 s
• maximum MAC retries: 7
• traffic TTL: 32
• nominal transmission rate: 2 Mbps
• basic rate: 1 Mbps
• simulation period: 500 s

This advantage is bigger than the drawback of a higher probability of partitioning, resulting in a clearly higher goodput than the standard OLSR and buffer zone algorithms, as shown in Figure 11. Since the reduction in RET loss has been identified as the key advantage of the virtual zone method, it is worth looking at the cost of this approach. Figure 7 indicates that, in terms of packets lost owing to the lack of a route, there is no difference between the virtual zone, buffer zone and normal OLSR solutions; thus, the odds of network partitioning are not increased by the virtual zone algorithm compared to normal OLSR and the buffer zone. This is expected because, if necessary, the buffer zone algorithm builds connections with neighbours in the buffer zone. The buffer zone solution and the virtual zone solution differ, however, in the average number of hops between two nodes. As illustrated in Figure 9, the number of hops per path is increased with the buffer zone approach, as the safe-zone nodes are favoured as relay nodes. The longer hop count is the biggest drawback of the buffer zone solution; it was lowered once the virtual zones were introduced into the safe zone. A higher average path length, more packet detour risk and an increased packet loop risk add to the likelihood of time-to-live (TTL) exhaustion.
Indeed, the buffer zone technique loses more packets to exhausted time to live (TTL loss) than standard OLSR, as shown in Figure 8. Figure 12 indicates that the virtual zone method boosts the goodput even under heavy traffic loads, compared with the standard OLSR and buffer zone results; the virtual zone and buffer zone methods, however, are more or less comparable to each other. For such large traffic loads, the total gain of the virtual zone solution is inevitably lower. The reason is that the whole network suffers from a greatly increased packet loss, which leaves the successfully transmitted traffic, whether standard OLSR traffic or traffic using a BZA, with a smaller part of the total network capacity. The cost of the virtual zone solution also includes a higher routing load due to the larger payload of hello messages, because the virtual zone solution depends on the neighbouring nodes publishing their zone status in hello messages. As shown in Figure 10, the increase in routing load for 40 nodes at 10 m/s is about 3 kbps at a threshold of 250 m, which is rather low relative to the whole network capacity.

7. Conclusion

Introducing virtual zones in the buffer zone of OLSR improves the performance over the buffer zone alone, compared with OLSR. When transmission takes place within virtual zones, the rerouting time is also significantly decreased. This paper discusses the rationale for confining the virtual zones within the safe zone, as well as the difference between the virtual zone approach and the zone routing protocol. The proposed approach is simulated using ns-2.34, and experiments are undertaken to demonstrate that the performance with virtual zones is increased. The comparison charts indicate that the rerouting time is reduced when virtual zones are introduced. Extending the virtual zones to the dangerous zone would be an interesting piece of future work.
In addition to the node criterion supplied to construct a virtual zone, further criteria for the creation of virtual zones could be introduced in future, and it could be proven that such additional criteria improve the routing performance of the MANET. By improving both the control traffic performance and the delay of the proposed ZRP, the techniques can be applied to single or multiple channels of MANETs. Although the routing speed with virtual zones is increased, maintaining the routing table is an overhead due to the high mobility of the nodes, which makes the method complex; reducing this overhead in the current technique is also left as future work. The current approach merely limits the construction of the virtual zones to the safe zone of the BZA.

Figure 9. Average number of hops. Figure 10. Routing load. Figure 11. Goodput for 1 and 10 m/s at 50 kbps load. Figure 12. Goodput for 1 and 10 m/s at 500 kbps load.

References

[1] A. Mohammed Munawar, P. Sheik Abdul Khader, A comprehensive analysis on mobile adhoc network routing metrics, protocols and simulators, Proceedings of the National Conference on Advances in Computer Applications, 28 (2012), ISBN: 978-93-80769-16-5.
[2] A. Mohammed Munawar, P. Sheik Abdul Khader, An enhanced approach to reduce rerouting time in mobile adhoc networks, Proceedings of the IEEE International Conference on Emerging Trends in Computing, Communication and Nano Technology (ICE-CCN'13), 25-26 March 2013.
[3] A. Mohammed Munawar, P. Sheik Abdul Khader, Elimination of queue stagnation in mobile adhoc networks, Proceedings of the National Conference on Recent Trends in Web Technologies (RTWT'13), 4-5 October 2013.
[4] E. Larsen, L. Landmark, V. Pham, Ø. Kure, P. E.
Engelstad, Routing with transmission buffer zones in mobile adhoc networks, Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Kos, Greece, 15-18 June 2009, ISBN: 978-1-4244-4440-3. DOI: 10.1109/wowmom.2009.5282463
[5] V. Pham, E. Larsen, K. Øvsthus, P. Engelstad, Ø. Kure, Rerouting time and queueing in proactive ad hoc networks, Proceedings of the Performance, Computing, and Communications Conference (IPCCC), New Orleans, USA, 11-13 April 2007, pp. 160-169. DOI: 10.1109/pccc.2007.358891
[6] Dhanya Sudarsan, G. Jisha, A survey on various improvements of hybrid zone routing protocol in MANET, Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI '12), pp. 1261-1265. DOI: 10.1145/2345396.2345599
[7] Y. Elrefaie, L. Nassef, I. A. Saroit, Enhancing security of zone-based routing protocol using trust, Proceedings of INFOS, Giza, Egypt, 14-16 May 2012.
[8] Mayur Tokekar, Radhika D. Joshi, Enhancement of optimized linked state routing protocol for energy conservation, CCSEA 2011, CS & IT 02, 2011, pp. 305-319. DOI: 10.5121/csit.2011.1228
[9] T. Clausen, P. Jacquet, Optimized link state routing protocol (OLSR), RFC 3626, October 2003.
[10] C. Siva Ram Murthy, B. S. Manoj, Ad hoc wireless networks: architectures and protocols, Pearson Education, 2007.
[11] S. Corson, J. Macker, Mobile ad hoc networking (MANET): routing protocol performance issues and evaluation considerations, RFC 2501, 1999.
[12] D. Maltz, D. Johnson, Ad hoc networking, Addison-Wesley, 2001.
[13] V. O. K. Li, Zhenxin Lu, Ad hoc network routing, IEEE International Conference on Networking, Sensing and Control, vol. 1, 2004, pp. 100-105.
[14] A. Mohammed Munawar, P.
Sheik Abdul Khader, Asynchronous invocation method to eliminate queueing in mobile adhoc networks, Proceedings of the Second International Conference on Design and Applications of Structures, Drives, Communicational and Computing Systems (ICDASDC'13), 29-30 November 2013, ISBN: 978-93-80686-92-9.
[15] Network simulator ns-2. Online [accessed 17 March 2022]: http://www.isi.edu/nsnam/ns/
[16] Yi Huang, Clemens Gühmann, Wireless sensor network for temperatures estimation in an asynchronous machine using a Kalman filter, Acta IMEKO, 7(1) (2018), pp. 5-12. DOI: 10.21014/acta_imeko.v7i1.509
[17] Mariorosario Prist, Andrea Monteriù, Emanuele Pallotta, Paolo Cicconi, Alessandro Freddi, Federico Giuggioloni, Eduard Caizer, Carlo Verdini, Sauro Longhi, Cyber-physical manufacturing systems: an architecture for sensors integration, production line simulation and cloud services, Acta IMEKO, 9(4) (2020), pp. 39-52. DOI: 10.21014/acta_imeko.v9i4.731
[18] Lorenzo Ciani, Alessandro Bartolini, Giulia Guidi, Gabriele Patrizi, A hybrid tree sensor network for a condition monitoring system to optimize maintenance policy, Acta IMEKO, 9(1) (2020), p. 39. DOI: 10.21014/acta_imeko.v9i1.732
[19] Kavita Pandey, Abhishek Swaroop, A comprehensive performance analysis of proactive, reactive and hybrid MANETs routing protocols, International Journal of Computer Science Issues, 8(3) (2011), pp. 432-441.
[20] Livio D'Alvia, Eduardo Palermo, Stefano Rossi, Zaccaria Del Prete, Validation of a low-cost wireless sensors node for museum environmental monitoring, Acta IMEKO, 6(3) (2017), pp. 45-51. DOI: 10.21014/acta_imeko.v6i3.454
[21] Jiayu Luo, Xiangyu Kong, Changhua Hu, Hongzeng Li, Key performance-indicators-related fault subspace extraction for the reconstruction-based fault diagnosis, Measurement, 186 (2021), pp. 1-12.
DOI: 10.1016/j.measurement.2021.110119
[22] Manohar Yadav, A multi-constraint combined method for road extraction from airborne laser scanning data, Measurement, 186 (2021), pp. 1-13. DOI: 10.1016/j.measurement.2021.110077

Introductory notes for the Acta IMEKO third issue 2022

Acta IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, pp. 1-3

Francesco Lamonaca

Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Ponte P. Bucci, 87036, Arcavacata di Rende, Italy

Section: editorial. Citation: Francesco Lamonaca, Introductory notes for the Acta IMEKO third issue 2022, Acta IMEKO, vol. 11, no.
3, article 1, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-01. Received September 28, 2022; in final form September 28, 2022; published September 2022. Copyright: this is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Corresponding author: Francesco Lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

Dear Readers,

The third issue 2022 of Acta IMEKO collects contributions related to two events organised by IMEKO TC17, the IMEKO technical committee on robotic measurement [1]-[9], and, as usual, papers that do not relate to a specific event and are collected in the general track [10]-[18]. Annually, TC17 organises the International Symposium on Measurements and Control in Robotics (ISMCR), a full-fledged event focusing on various aspects of international research, applications, and trends related to robotic innovations for the benefit of humanity, advanced human-robot systems, and applied technologies, e.g. in the allied fields of telerobotics, telexistence, simulation platforms and environments, and mobile work machines, as well as virtual reality (VR), augmented reality (AR) and 3D modelling and simulation. The introduction to the papers related to this event is given in the editorial authored by Prof. Zafar Taqvi, organiser of this special issue. As Editor in Chief, it is my pleasure to give readers an overview of the general track papers, with the aim of encouraging potential authors to consider sharing their research through Acta IMEKO. Additive manufacturing (AM) is becoming a widely employed technique, including in mass production. In this field, compliance with geometry and mechanical performance standards represents a crucial constraint.
since 3d printed products exhibit a mechanical behaviour that is difficult to predict and investigate due to their complex shape and the inaccuracy in reproducing nominal sizes, optical non-contact techniques are appropriate candidates to solve these issues. in the paper "measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques" [10] by antonino quattrocchi et al., 2d digital image correlation and thermoelastic stress analysis are combined to map the stress and strain performance of an airless wheel prototype. the innovative airless wheel samples are 3d-printed by fused deposition modelling and stereolithography in poly-lactic acid and photopolymer resin, respectively. the static mechanical behaviour for different wheel-ground contact configurations is analysed using the aforementioned non-contact techniques. moreover, the wheel-ground contact pressure is mapped, and a parametric finite element model is developed. the results presented in the paper demonstrate that several factors have a great influence on 3d printed airless wheels: a) the material used for manufacturing the specimen, b) the correct transfer of the force line (i.e., the loading system), c) the geometric complexity of the lattice structure of the airless wheel. the work confirms the effectiveness of the proposed non-contact measurement procedures for characterising complex-shaped prototypes manufactured using am. body impedance analysis (bia) is used to evaluate the human body composition by measuring the resistance and reactance of human tissues with a high-frequency, low-intensity electric current. nonetheless, the estimation of the body composition is influenced by many factors: body status, environmental conditions, instrumentation, and measurement procedure.
valerio marcotuli et al., in "metrological characterization of instruments for body impedance analysis" [11], present the results of a study about the effect of the connection cables, conductive electrodes, adhesive gel, and bia device characteristics on the measurement uncertainty. tests were initially performed on electric circuits with passive elements and on a jelly phantom simulating the body characteristics. results showed that the cables mainly contribute to increasing the error in the resistance measurement, while the electrodes and the adhesive introduce a negligible disturbance into the measurement chain. the authors also propose a calibration procedure based on a multivariate linear regression to compensate for the systematic error effect of bia devices. sergio moltò et al., in the paper "uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration" [12], present a study about the deformation of a refractometer used to achieve a quantum realization of the pascal. first, the propagation of the uncertainty in the pressure measurement due to mechanical deformation was assessed. then, deformation simulations were carried out with a cavity designed by the cnam (conservatoire national des arts et métiers). this step aims to corroborate the methodology used in the simulations. the assessment of modal components is a fundamental step in structural dynamics.
while experimental investigations are generally performed through full-contact techniques, using accelerometers or modal hammers, the research proposed in the paper entitled "frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers" [13], by lorenzo capponi et al., presents a non-contact frequency response function identification measurement technique based on the displacement detection of aruco square fiducial markers. a video of the phenomenon to be analyzed is acquired, and the displacement is measured through the markers, using a dedicated tracking algorithm. the proposed method is presented using a harmonically excited fff 3d-printed flexible structure, equipped with multiple embedded printed markers, whose displacement is measured by an industrial camera. a comparison with a numerical simulation and an established experimental approach is finally provided for the validation of the results. human movement modelling, also referred to as motion capture, is a rapidly expanding field of interest for medical rehabilitation, sports training, and entertainment. motion capture devices are used to provide a virtual 3-dimensional reconstruction of human physical activities employing either optical or inertial sensors. using inertial measurement units and digital signal processing techniques offers a better alternative in terms of portability and immunity to visual perturbations when compared to conventional optical solutions. in the paper "low-cost real-time motion capturing system using inertial measurement units" [14], simona salicone et al. propose a cable-free, low-cost motion-capture solution based on inertial measurement units with a novel approach to calibration. the goal of the proposed solution is to bring motion capture to the fields that, because of cost problems, have not yet taken full benefit of such technology (e.g., fitness training centers).
according to this goal, the necessary requirement for the proposed system is to be low-cost; therefore, all the considerations and solutions provided in the work follow this main requirement. maximum-power extrapolation (mpe) techniques adopted for 4g and 5g signals are applied to systems using dynamic spectrum sharing (dss) signals generated by a base station and transferred to the measurement instruments through an air interface adapter in order to obtain a controlled environment. this allowed the analysis to focus on the effect of the frame structure on the mpe procedure, excluding the random effects associated with the fading phenomena affecting signals received in real environments. the analysis presented by sara adda et al. in the paper "experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb" [15] confirms that both the 4g mpe and the proposed 5g mpe procedure can be used for dss signals, provided that the correct number of subcarriers in the dss frame is considered. michela albano et al., in the paper entitled "x-rays investigations for the characterization of two 17th century brass instruments from nuremberg" [16], propose a multidisciplinary approach mainly based on non-invasive analytical techniques and including x-ray investigations (x-ray radiography, x-ray fluorescence and x-ray diffraction) for the study of two brass natural horns from the end of the 17th century, recently found in the castello sforzesco in milan (italy). these findings brought new information about this class of objects; indeed, even though the instruments were heavily damaged, their historical value is great.
the study proposed in the paper was aimed at: i) pointing out the executive techniques for archaeometric purposes; ii) characterizing the morphological and chemical features of the materials; iii) identifying and mapping the damage to the structure and the alterations of the surface. in the paper "non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis" [17], nina simbirtseva et al. propose the method of neutron resonance capture analysis (nrca) to determine the elemental and isotope compositions of objects non-destructively, which makes it a suitable measurement tool for artefact analysis without sampling. the method is currently being developed at the frank laboratory of neutron physics. nrca is based on the registration of neutron resonances in radiative capture and on the measurement of the yield of reaction products in these resonances. the potential of nrca at the intense resonance neutron source facility is demonstrated in the investigation of a kyathos from the necropolis volna 1 (6th-4th centuries bce) on the taman peninsula. in addition, x-ray fluorescence (xrf) analysis was applied to the same object. the elemental composition determined by nrca is in agreement with the xrf data. a power system in which the generation units, such as renewable energy sources and other types of generation equipment, are located near the loads, thereby reducing operation costs and losses and improving voltage levels, is referred to as 'distributed generation' (dg), and these generation units are named 'distributed energy resources'. however, dgs must be located appropriately to improve the power quality and minimize the power loss of the system.
the objective of the paper entitled "performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation" [18], by ahmed jassim ahmed et al., is to propose an approach for measuring the optimal size and location of dgs in a low-voltage microgrid using the autoadd algorithm. the algorithm is validated by testing it on the ieee 33-bus standard system and, compared with previous studies, it proved its efficiency and superiority over the other techniques. a significant improvement in voltage and a reduction in losses were observed when the dgs were placed at the sites selected by the algorithm. therefore, autoadd was used to find the optimal size and location of dgs in the distribution system; then, the possibility of isolating the low-voltage microgrid by integrating distributed generation units is discussed, and the results showed the feasibility of this scenario during faults and periods of energy intermittency. also in this issue, high-quality and heterogeneous papers are presented, confirming acta imeko as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. in particular, the technical note shows how acta imeko is the right place where different opinions and points of view can meet and be compared, stimulating a fruitful and constructive debate in the scientific community of measurement science. i hope you will enjoy your reading. francesco lamonaca editor in chief references [1] a. alsereidi, y. iwasaki, j. oh, v. vimolmongkolporn, f. kato, h. iwata, "experiment assisting system with local augmented body (easy-lab) in dual presence environment", acta imeko, vol. 11, no. 3, pp. 1-6. [2] s. olasz-szabó, i. harmati, "path planning for data collection robot in sensing field with obstacles", acta imeko, vol. 11, no. 3, pp. 1-6. [3] p. singh matharu, a.
ashok ghadge, y. almubarak, y. tadesse, "jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring", acta imeko, vol. 11, no. 3, pp. 1-7. [4] j. wolf rogers, k. alexander, "standards and affordances of 21st-century digital learning: using the experience application programming interface and the augmented reality learning experience model to track engagement in extended reality", acta imeko, vol. 11, no. 3, pp. 1-6. [5] e. e. cepolina, a. parmiggiani, c. canali, f. cannella, "disarmadillo: an open source, sustainable, robotic platform for humanitarian demining", acta imeko, vol. 11, no. 3, pp. 1-8. [6] i. zobayed, d. miles, y. tadesse, "a 3d-printed soft orthotic hand actuated with twisted and coiled polymer muscles triggered by electromyography signals", acta imeko, vol. 11, no. 3, pp. 1-8. [7] r. singh, s. mohapatra, p. singh matharu, y. tadesse, "twisted and coiled polymer muscle actuated soft 3d printed robotic hand with peltier cooler for drug delivery in medical management", acta imeko, vol. 11, no. 3, pp. 1-6. [8] z. kovarikova, f. duchon, a. babinec, d. labat, "digital tools as part of a robotic system for adaptive manipulation and welding of objects", acta imeko, vol. 11, no. 3, pp. 1-8. [9] a. v. geetha, t. mala, "arel – augmented reality–based enriched learning experience", acta imeko, vol. 11, no. 3, pp. 1-5. [10] a. quattrocchi, d. alizzio, l. capponi, t. tocci, r. marsili, g. rossi, s. pasinetti, p. chiariotti, a. annessi, p. castellini, m. martarelli, f. freni, a. di giacomo, r. montanini, "measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques", acta imeko, vol. 11, no. 3, pp. 1-8. [11] v. marcotuli, m. zago, a. p. moorhead, m. vespasiani, g. vespasiani, m. tarabini, "metrological characterization of instruments for body impedance analysis", acta imeko, vol. 11, no. 3, pp. 1-7. [12] s. moltó, m. a. sáenz-nuño, e. bernabeu, m. n.
medina, "uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration", acta imeko, vol. 11, no. 3, pp. 1-8. [13] l. capponi, t. tocci, g. tribbiani, m. palmieri, g. rossi, "frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers", acta imeko, vol. 11, no. 3, pp. 1-6. [14] s. salicone, s. corbellini, h. v. jetti, s. ronaghi, "low-cost real-time motion capturing system using inertial measurement units", acta imeko, vol. 11, no. 3, pp. 1-9. [15] s. adda, t. aureli, t. cassano, d. franci, m. d. migliore, n. pasquino, s. pavoncello, f. schettino, m. schirone, "experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb", acta imeko, vol. 11, no. 3, pp. 1-7. [16] m. albano, g. fiocco, d. comelli, m. licchelli, c. canevari, f. tasso, v. ricetti, p. cofrancesco, m. malagodi, "x-rays investigations for the characterization of two 17th century brass instruments from nuremberg", acta imeko, vol. 11, no. 3, pp. 1-7. [17] n. simbirtseva, p. v. sedyshev, s. mazhen, a. yergashov, a. yu. dmitriev, i. a. saprykina, r. a. mimokhod, "non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis", acta imeko, vol. 11, no. 3, pp. 1-6. [18] a. j. ahmed, m. h. alkhafaji, a. j. mahdi, "performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation", acta imeko, vol. 11, no. 3, pp. 1-8.
comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network acta imeko issn: 2221-870x december 2021, volume 10, number 4, 194 - 200 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 194 comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network damiano alizzio1, marco bonfanti2, guido garozzo3, fabio lo savio4, roberto montanini1, antonino quattrocchi1 1 dept. of engineering, university of messina, contrada di dio, 98166 messina, italy 2 dept. of electric, electronic, informatics engineering, univ. of catania, via s. sofia, 54, 95100 catania, italy 3 zerodivision systems s.r.l., piazza s. francesco n. 1, 56127 pisa, italy 4 dept. of civil engineering and architecture, university of catania, via s. sofia, 54, 95100 catania, italy section: research paper keywords: bulge-test; creep; 3d-dic; epipolar geometry; neural network citation: damiano alizzio, marco bonfanti, guido garozzo, fabio lo savio, roberto montanini, antonino quattrocchi, comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network, acta imeko, vol. 10, no. 4, article 30, december 2021, identifier: imeko-acta-10 (2021)-04-30 section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy received august 2, 2021; in final form december 4, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: fabio lo savio, e-mail: flosavio@diim.unict.it 1. introduction the mechanical characterization of elastomers requires knowledge of the numerous hyperelastic constants describing their highly non-linear mechanical behaviour [1]-[6].
nowadays, many different techniques are used to determine the stress-strain curve, from the most traditional uniaxial tensile-compressive tests, through indentation and equibiaxial tests, up to bulge tests. among these, the bulge test is a consolidated technique for the investigation of membranes subjected to an equibiaxial tension state [7], [8]. this method avoids the edge damage that commonly occurs when the specimen is stressed during other types of tests [9]. in bulge tests, a thin material sheet of uniform thickness is clamped between two circular flanges with cavities in the centre so that, through the insufflation of fluid inside the test chamber, the sheet deforms plastically, assuming a semi-spherical shape. since in isotropic materials the stress state is equibiaxial, the relation between the pressure-displacement and stress-strain curves is unique over the whole dome [10]. in light of the above, this technique, originally used for metallic materials, can also be extended to rubber-like materials, which typically show isotropic behaviour in the absence of any previous calendering process, which could generate a slight transverse isotropy along the calendering direction [11], [12]. under inflation, pressure and displacements are the testing parameters to be constantly monitored, these being the parameters necessary to determine the stress-strain curve. to achieve a faithful reconstruction of the dome, epipolar geometry [13]-[15] and 3d digital image correlation (dic) [16]-[18] techniques were implemented. some of the authors have previously presented a device with a mobile crosshead for subjecting an elastomeric membrane to the bulge test in force control (creep) [14], [19]. in the first of the two cited works [14], the authors described the 3d reconstruction technique, based on epipolar geometry, for a hyperelastic specimen subjected to the bulge test.
abstract: the mechanical behaviour of rubber-like materials can be investigated through numerous techniques that differ from each other in cost, execution time and the parameters described. the bulge test method proved helpful for hyperelastic membranes under a plane, equibiaxial stress state. in the present study, bulge tests in force control were carried out on sbr 20% cb-filled specimens. 3d reconstructions of the dome were achieved through two different stereoscopic techniques, epipolar geometry and digital image correlation. through a feed-forward neural network (ffnn), these reconstructions were compared with the measurements of a laser triangulation sensor taken as reference. the 3d-dic reconstruction was found to be more accurate: the bias errors of the 3d-dic and epipolar techniques with respect to the reference values, under creep conditions, were 0.53 mm and 0.87 mm, respectively. in the second one [19], the authors trained a feed-forward neural network (ffnn) on the data acquired using the previous technique, obtaining a predictive model that provides as output the height of the dome apex formed by the insufflated specimen. in the present paper, 5 specimens were bulge-tested in creep condition and, differently from previous works, a comparison between the stereoscopic reconstructions of the specimen dome based on epipolar geometry and on 3d-dic was carried out. this comparison was performed through ffnns independently trained on the results of the two reconstruction techniques. the values of the dome apex height provided by the two ffnn models were then compared with the value obtained using a laser triangulation measurement technique as reference. 2.
theoretical background in the bulge test technique, some restrictive assumptions are required both on the tested material, such as isotropy and incompressibility, and on the geometry of the inflated sample, such as a hemispherical shape and a thickness small compared to the curvature radius. under these assumptions, the stress state can be regarded as equibiaxial and plane and, according to the boyle-mariotte law (valid for thin-walled tanks), is defined by: $\sigma_c = \sigma_a = \frac{p \cdot R}{2 s}$ , (1) where $\sigma_c$ and $\sigma_a$ are, respectively, the circumferential and axial stress, $p$ is the working inflation pressure, $R$ is the curvature radius of the dome and $s$ is the thickness at the dome apex. at the dome apex each meridian represents a principal direction and, hence, on its surface all the principal strains are equal to each other ($\varepsilon_1 = \varepsilon_2 = \varepsilon_{eq}$). from the knowledge of the undeformed length $l_0$ and the deformed length $l_d$ of a membrane finite element, these strains are given by: $\varepsilon_1 = \varepsilon_2 = \ln(l_d / l_0)$ . (2) from the assumption of incompressibility of the material and in agreement with the well-known von mises equation [13]: $\varepsilon_3 = \ln(s / s_0)$ , (3) where $s_0$ is the undeformed specimen thickness. 3. 3d-reconstruction methods used the stereoscopic technique based on epipolar geometry is a 3d-reconstruction method that allows determining the whole dome by identifying, via two cameras, the shifting of a grid printed on the surface of the sample as the dome inflates under creep bulge testing [20]. the images acquired by the two cameras are processed through dedicated geometrical algorithms and different filters. the stereoscopic technique based on 3d-dic is an optical method adopted to measure full-field displacements and strains by applying cross-correlation to measure shifts in digital images [21]. this method is effective at mapping displacements and deformations thanks to the contrast, necessary to correlate the images, generated by the application of an airbrushed speckle pattern on the surface of the sample.
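as a numerical illustration of eqs. (1)-(3), the following python sketch computes the apex stress and strains; the input values are purely illustrative and not taken from the paper's experiments:

```python
import math

def bulge_stress(p, R, s):
    """equibiaxial stress at the dome apex, eq. (1): sigma_c = sigma_a = p*R/(2*s)."""
    return p * R / (2.0 * s)

def principal_strain(l_d, l_0):
    """in-plane principal strains at the apex, eq. (2): eps1 = eps2 = ln(l_d/l_0)."""
    return math.log(l_d / l_0)

def thickness_strain(s, s_0):
    """through-thickness strain, eq. (3): eps3 = ln(s/s_0); incompressibility
    implies eps1 + eps2 + eps3 = 0, hence s = s_0 * exp(-2*eps1)."""
    return math.log(s / s_0)

# illustrative values (assumed): p = 0.055 N/mm^2 (0.55 bar), R = 50 mm, s = 2.5 mm
sigma = bulge_stress(0.055, 50.0, 2.5)   # equibiaxial apex stress in N/mm^2
eps1 = principal_strain(11.0, 10.0)      # deformed vs undeformed element length
s_thin = 2.5 * math.exp(-2.0 * eps1)     # thinned apex thickness
eps3 = thickness_strain(s_thin, 2.5)     # equals -2*eps1 by incompressibility
```

the sketch only chains the three relations; in the actual procedure $l_d$, $R$ and $p$ come from the stereoscopic reconstruction and the pressure transducer.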
the multi-camera system uses a single global axis system. dedicated software, developed in the matlab® environment, integrates 2d-dic subset-based software with different camera calibration algorithms to reconstruct 3d surfaces from several acquisitions of stereo image pairs. furthermore, this software contains algorithms for computing and visualizing 3d displacements and strains. 4. experimental setup bulge tests in force control were performed with a home-made experimental setup already presented in previous works by some of the authors [14], [19], [22]. a pneumatic circuit with adjustable flow rate inflates a thin sample clamped between two flanges (figure 1d). the device is equipped with a sliding crossbar whose movements are proportional to the membrane inflation. the creep phenomenon was obtained by keeping the pressure constant within the bulge chamber by means of a pressure regulator (0.1 bar resolution). in the creep test, after a transient inflation lasting 1 s, the sample was subjected to a constant pressure of 0.55 bar long enough to achieve complete relaxation, ensuring at the same time that the strains fall within the linear and elastic field. the core of the stereoscopic acquisition setup (shown in figure 1a) is a sliding crossbar, employed to translate two fixed-focus cameras (imaging source dmk23g445 monochromatic, equipped with fujifilm hf35ha-1b lenses, with a frame rate of up to 30 fps) in the upward/downward direction so as to follow the dome apex displacement due to the inflation and, thus, detect the equibiaxial strain of the specimen. in this way, the dimensions of the captured images depend exclusively on the inflation of the dome and not on its approach to the camera lenses. the crossbar shifting is obtained by converting the rotary motion of a stepper motor into a vertical translation of the crossbar through a driven shaft and a timing belt (figure 1b).
the linear motion is controlled by an optical system rigidly connected to the crossbar and consisting of a laser diode (working as emitter) and a photodetector (hamamatsu si s5973-01 photodiode) placed laterally to the dome (figure 1c). when the laser beam is interrupted by the inflating dome, the system starts the crossbar shifting via a signal sent to a double h-bridge circuit. the shifting is stopped once the photodetector is hit again by the laser beam. the crossbar shifting takes place in steps of 0.125 mm, committing a focal-length error of 0.03 % and a negligible focusing error. the system accuracy matches the size of the captured pixels (37.7 μm). since the photodiode diameter is greater than that of the laser spot (2 mm vs 1 mm), the laser beam does not affect the spatial resolution of the vertical motion (0.2 mm). a laser triangulation sensor (optoncdt 1302-200, with a resolution of 0.2 % fso) was placed on top of the device frame (figure 1a) and a target lamina was fixed to the crossbar. these additions were made with the dual purpose of evaluating the uncertainty of the crossbar vertical shifting and of indirectly obtaining the measurement of the height of the dome (as shown in figure 2), taken as the reference value for both the epipolar geometry and 3d-dic techniques. in other terms, the bias errors of the stereoscopic reconstructions are calculated by comparing the laser measurement with the measurements from the epipolar geometry and 3d-dic apparatus. particular care was taken in synchronizing, in pseudo real-time, the acquired images with the data from the pressure transducer and the displacement sensor. for this purpose, a ni pxie-1073 chassis containing acquisition boards (pxie-6341 and pxi-8252), coupled to a pc in the ni labview® environment, was used. this software also controls the compressed-air supply and regulation system.
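the laser-gate stepping logic described above can be illustrated with a toy simulation; this is a minimal sketch of the described behaviour (beam interrupted → step up by 0.125 mm until the photodetector sees the beam again), not the actual control electronics:

```python
STEP_MM = 0.125  # crossbar step size stated in the text

def track(crossbar_h, apex_h, step=STEP_MM):
    """raise the crossbar one step at a time while the inflating dome
    (apex above the beam line) keeps the laser beam interrupted; stop
    as soon as the photodetector is hit again by the beam."""
    while apex_h > crossbar_h:   # beam blocked by the dome
        crossbar_h += step       # one stepper-motor step upward
    return crossbar_h

# simulate a slowly rising apex (toy inflation profile, mm);
# the crossbar always ends within one step of the apex height
crossbar = 0.0
for k in range(1, 200):
    apex = 0.05 * k
    crossbar = track(crossbar, apex)
```

the invariant of the loop mirrors the hardware behaviour: after each iteration the crossbar sits at the smallest multiple of 0.125 mm at or above the apex, so the tracking error never exceeds one step.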
additional software (camera calibration matlab® toolbox), using a 3d calibration target based on the pinhole model [23], was adopted to stereoscopically calibrate the cameras. to achieve the 3d reconstruction of the dome based on the dic technique, a gom aramis 2m lt optical system (figure 3), also consisting of a double camera but mounted on a tripod, replaced the cameras/sliding-crossbar device used for the epipolar geometry acquisition. in this case, care was taken during the calibration and focusing of the cameras. 5. test samples the material tested in this paper was sbr 20% carbon black-filled, an artificial elastomer widely used in several applications: from tires, through seals, up to shoe soles [24]-[26]. a set of 5 square specimens (180 × 180 mm²) was cut from a single 3 mm thick sheet of elastomer. each specimen was used for both stereoscopic techniques and thus presented two different patterns, one on each side, as shown in figure 4. on the first side (figure 4a), adopted for the epipolar reconstruction, a grid consisting of five concentric circles (hence named parallels), a small central circle whose centre corresponded to the top of the dome, and 73 equidistant rays (hence named meridians) radiating from the centre was silk-screened. on the second side (figure 4b), used to carry out the 3d-dic reconstruction, a pattern of random spots of white paint was airbrushed (with nozzle diameter ϕ = 0.18 mm). the mean diameter of the spots was found to be ϕ = 0.23 mm and their mean surface area equal to 0.042 mm². figure 1. (a) bulge test setup for the epipolar geometry technique; (b) timing belt system; (c) optical system regulating the vertical shifting; (d) cad of the bulge chamber cross-section (courtesy of authors [24]). figure 2. sketch of the laser sensor measurement (courtesy of authors [24]). figure 3.
bulge test setup for the 3d-dic technique (courtesy of authors [22]). 6. stereoscopic reconstructions and experimental tests in the epipolar technique, in order to acquire the whole grid, the camera separation must not exceed 10 % of the working distance. thus, adopting a distance of 40 mm between the two cameras, the distance from the top of the sample (u) was set at 400 mm. moreover, a focal length of 10,610 pixels, corresponding to 400 mm, and a minimum focusing range from 250 mm to ∞ were set. for accurate displacement measurements, 33 samples at a sample rate of 1 khz were acquired for each image. the acquisition time required is 1/30 s, corresponding to the camera frame rate. for each acquisition, the average of the 33 values, taken as the best value, and the standard deviation of the series were computed. since the phase shift of a single frame was 1/30 s, the resulting synchronization uncertainty can reasonably be estimated as half of it. due to the slowness of the creep of the specific material tested in the experiments, one acquisition per minute was considered sufficient to capture the relevant information on the phenomenon. some pre-processing stages were needed to improve the image sharpness, as in the example shown in figure 5: the image was first converted to grayscale (figure 5a) and then filtered with the convolution (figure 5b) and median (figure 5c) filters. the convolution filter was intended to refine the grid edges by recovering the light intensity lost due to the specimen strain. next, a smoothing filter (median filter) was applied in order to both reduce the noise introduced by the convolution filter and balance the image. in particular, the size of the median filter was 3 × 3 (kernel size) and the convolution filter chosen was a "sharpen filter", in order to obtain a sharper image allowing more accurate edge detection.
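a minimal numpy sketch of the described pre-processing chain (grayscale → 3 × 3 sharpen convolution → 3 × 3 median filter); the sharpen kernel coefficients are a common choice and an assumption here, since the paper does not list them:

```python
import numpy as np

# common 3x3 "sharpen" kernel (assumed; the paper does not give its coefficients)
SHARPEN = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])

def filter3x3(img, reducer):
    """apply a 3x3 neighbourhood operation with edge replication."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = reducer(padded[i:i + 3, j:j + 3])
    return out

def sharpen(img):
    # kernel is symmetric, so correlation and convolution coincide here
    return filter3x3(img, lambda w: float(np.sum(w * SHARPEN)))

def median3x3(img):
    return filter3x3(img, lambda w: float(np.median(w)))

# toy grayscale patch: sharpen the edges of a bright square, then smooth the noise
gray = np.array([[10., 10., 10., 10.],
                 [10., 50., 50., 10.],
                 [10., 50., 50., 10.],
                 [10., 10., 10., 10.]])
balanced = median3x3(sharpen(gray))
```

since the sharpen kernel sums to one, flat regions pass through unchanged and only edges are amplified, which is the behaviour the text describes for the grid contours.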
for the reconstruction, the origin of the reference system was set at the centre of the clamping flange, with the vertical z-axis passing through the midpoint between the focal centres of the cameras, the x-axis oriented to the right of the testing machine on the median plane and the y-axis perpendicular to the x- and z-axes (figure 6) [19]. the sample was oriented so that the first meridian lay along the x-axis. anticlockwise numeration was chosen for the 73 meridians. the dome 3d reconstruction was performed using a customized matlab® algorithm able to locate the markers (figure 7) [14] corresponding to the intersections between the 73 meridians and the 5 parallels of the screen-printed grid on the specimen. moreover, this algorithm identifies with red and blue markers the light transitions from dark to light and vice versa, respectively (figure 7a), selects the homothetic markers (figure 7b) and compares the distance between two homothetic markers at two consecutive stress states (figure 7c). the 3d reconstruction was also implemented through 3d-dic. the ccd sensors of the two cameras were of 1624 × 1236 pixels, allowing a spatial resolution of 0.092 mm/pixel. before acquiring, the optical setup was calibrated with respect to a measurement volume equal to 150 mm × 110 mm × 110 mm, taken as reference and vertically set on the centre of the flange (z-axis), in order to ensure that the space gradually occupied by the dome during creep expansion could be enclosed. to do that, a cp20 calibration table (175 × 140 mm²) was used, while the optics were adjusted to a depth of field equal to 5.6 mm. figure 4. bulge test sample: (a) side for epipolar reconstruction; (b) side for 3d-dic reconstruction. in this sample, holes for clamping are already made (courtesy of authors [22]). figure 5. image pre-processing stages: (a) grayscale image; (b) convolution-filtered image; (c) median-filtered image for recovering the attenuated-light areas encircled in red in (b) (courtesy of authors [14]). figure 6. cartesian reference system (courtesy of authors [19]). figure 7. (a) light transitions: dark to light (red point) and light to dark (blue point); (b) homothetic markers; (c) comparison of the distance between two homothetic markers at two successive stress states (courtesy of authors [14]). for post-processing needs, 35 stages in steps of 300 s were acquired during the test. a facet size of 13 × 13 pixels with a relative distance of 8 pixels was chosen for the computer processing of the dic meshes. each single creep bulge test lasted 180 minutes in order to ensure the complete relaxation of the material within the linear and elastic strain field. due to the temperature dependence of the tested elastomer, the temperature was constantly monitored and maintained at 20 °c ± 0.5 °c for the entire duration of the test. to use both stereoscopic methods it was necessary to analyse the two sides of each specimen individually. to avoid permanent deformation of the specimen between one test and another, the inflation pressure was kept constant at 0.55 bar, a value well below the limit pressure. however, the nature of filled hyperelastic materials is such that even specimens from the same sheet can have different mechanical characteristics, especially when analysed on opposite sides. therefore, based on the results obtained, it is legitimate to consider that the bias errors may be attributable to differences in mechanical behaviour between one side and the other of the specimen. 7. neural network architecture the neural network architecture used in this work was a ffnn (feed-forward neural network) composed of 4 fully connected layers [19], [27].
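a minimal numpy sketch of such a network's forward pass, assuming (as described in the following paragraphs) 2n + 1 inputs, two 30-neuron sigmoid hidden layers and n linear outputs; the weight initialisation is illustrative and training with the levenberg-marquardt optimizer is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_ffnn(n_points, hidden=30, seed=0):
    """random weights for a 4-layer fully connected network: 2n+1 inputs
    (the (x, y) coordinates of the n grid points plus the inflation
    pressure), two sigmoid hidden layers of 30 neurons, and n linear
    outputs (the height z of each point)."""
    rng = np.random.default_rng(seed)
    sizes = [2 * n_points + 1, hidden, hidden, n_points]
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """forward pass: sigmoid activations, except the linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = sigmoid(x)
    return x

# toy usage with a small n (the real grid has 73 meridians x 5 parallels
# of intersection points, giving far more inputs)
n = 5
params = init_ffnn(n)
features = np.concatenate([np.zeros(2 * n), [0.55]])  # (x, y) pairs + pressure
z = forward(params, features)
```

the linear output layer is what lets the network emit unbounded heights in millimetres, while the sigmoid hidden layers provide the non-linear mapping from the planar coordinates and pressure to the dome shape.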
The model had 2n + 1 inputs (the Cartesian coordinates (x, y) of the n points resulting from the intersection of the meridians with the parallels of the 3D image, plus the inflation pressure value) and n outputs: the relative height z of each point. The structural simplicity of this architecture (Figure 8) allowed a good quality of the outputs to be achieved without a large expenditure of computational power and within a relatively short training period. Each hidden layer, composed of 30 neurons, used a sigmoid activation function; only the last layer, the output layer, had a linear activation function. The model was trained with the dataset built using the Cartesian points (x, y, z) from the 3D reconstruction of the dome and the inflation pressure recorded during the creep bulge test, adopting the Levenberg-Marquardt algorithm as optimiser [28]. An MSE (mean squared error) loss function and an early-stopping technique were used in order to prevent over-fitting of the model.

8. Analysis of the results and conclusions

Figure 9 and Figure 10 show the typical reconstruction of the dome at the end of the test, computed from epipolar geometry and from the 3D-DIC algorithm, respectively. Figure 11a shows the plot of the inflating pressure as a function of time, highlighting how this parameter, except for the instants in which the pressure regulator acts, remains constant during the whole test, as desired. From the comparison between the strain-versus-time trends resulting from the two stereoscopic techniques (Figure 11b), as previously stated, it can be noted that the epipolar reconstruction can exploit a greater number of acquisitions than the DIC approach (180 vs. 35). Therefore, the timing resolution of the epipolar technique is better than that of the DIC one. However, the spatial resolution is better in the latter, minimising the error on the measurement of the dome apex height.
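The architecture described in section 7 can be sketched in a few lines of numpy; this is an illustrative forward pass only (reading the paper's "4 layers" as input, two hidden layers of 30 sigmoid neurons, and a linear output layer). The weights are randomly initialised here, the Levenberg-Marquardt training used by the authors is not reproduced, and the choice n = 365 (73 meridians × 5 parallels) is an assumption based on the grid described earlier, not a figure stated by the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_ffnn(n_points, hidden=30, rng=None):
    """Randomly initialised FFNN with layout (2n+1) -> 30 -> 30 -> n."""
    rng = np.random.default_rng(rng)
    sizes = [2 * n_points + 1, hidden, hidden, n_points]
    return [(rng.standard_normal((m, k)) * 0.1, np.zeros(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Sigmoid activation on hidden layers, linear on the output layer."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:   # hidden layers only
            x = sigmoid(x)
    return x                      # predicted relative heights z (n values)

# Assumed grid: n = 365 points (73 meridians x 5 parallels)
n = 365
net = make_ffnn(n, rng=0)
features = np.zeros(2 * n + 1)    # (x, y) of each point + inflation pressure
z = forward(net, features)
```

A real run would replace the zero feature vector with the measured (x, y) coordinates and pressure, and fit the weights to the reconstructed (x, y, z) dataset with MSE loss and early stopping, as the paper describes.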
In effect, while the 3D-DIC spatial resolution is based on a point cloud, the epipolar spatial resolution is based on a limited set of homothetic markers. A way to confirm the better quality of the 3D-DIC reconstruction compared with that obtained through epipolar geometry is based on the use of FFNN models. Thus, two independent models were obtained by training the FFNN on the acquired datasets: the first dataset consists of the acquisitions on the 5 specimens analysed with the epipolar technique, the second of those acquired on the same specimens with the 3D-DIC technique. These models take the datasets as input, i.e. the set of (x, y) coordinates of the acquired points, and give as output the height of the dome apex (z) for each acquisition. The output was used to estimate the bias error with respect to the reference laser value (h_laser) for the corresponding acquisition.

Figure 8. FFNN architecture.
Figure 9. Epipolar geometric reconstruction of the dome (courtesy of authors [22]).
Figure 10. 3D-DIC reconstruction of the dome.

Our results showed that the 3D-DIC technique leads to smaller bias errors (Table 1). With the notation of Figure 2, from the direct laser measurement (e), the working distance (u) being fixed at 400 mm (section 6), the indirect measurement of the dome apex height (h_laser) is geometrically given by:

h_laser = v − m − u − e ,    (4)

where m = 75 mm and v = 680 mm. As previously stated, the measurements provided by equation (4) were taken as references both for those obtained with the epipolar technique on one side of the specimen and for those obtained with the 3D-DIC technique on the other side.
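Equation (4) is simple enough to check numerically. A small helper with the fixed geometry m = 75 mm, v = 680 mm and u = 400 mm; note that the laser reading e used in the example is a hypothetical value for illustration, not one taken from the paper:

```python
def dome_apex_height(e_mm, u_mm=400.0, m_mm=75.0, v_mm=680.0):
    """Equation (4): indirect dome apex height h_laser from laser reading e."""
    return v_mm - m_mm - u_mm - e_mm

# With this geometry the relation reduces to h_laser = 205 mm - e.
# Hypothetical laser reading of 176.5 mm:
print(dome_apex_height(176.5))  # 28.5
```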
Table 1 shows the best values of the dome apex height and the standard deviations (SD) of the measurements from the laser sensor (one for each sample side) and from the two FFNN models after creep, where the coverage factor k for the uncertainty of each measurement was taken equal to 1. Table 1 also shows the bias error of each model with respect to its own reference value; each bias error is given by the absolute difference between the FFNN model value and the corresponding laser value. Figure 12 plots the Gaussian distributions summarised in Table 1. The small deviation observed between the values from the FFNN models relative to the DIC and epipolar reconstructions, shown in Figure 12a and Figure 12b, is likely due to variations in the hyperelastic behaviour of the specimens depending on the side analysed, and not to inaccuracy of the experimental setup. This is confirmed by the corresponding variation in the reference values provided by the laser sensor.

References
[1] M. Mooney, A theory of large elastic deformation, J Appl Phys, 11, 1940, 582-592. doi: 10.1063/1.1712836
[2] R. S. Rivlin, Large elastic deformations of isotropic materials IV. Further developments of the general theory, Phil Trans R Soc Lond A, 241, 1948, 379-397. doi: 10.1098/rsta.1948.0024
[3] L. R. G. Treloar, The Physics of Rubber Elasticity, Clarendon Press, Oxford, 1975.
[4] E. M. Arruda, M. C. Boyce, A three-dimensional constitutive model for the large stretch behavior of rubber elastic materials, J Mech Phys Solids, 41, 1993, 389-412. doi: 10.1016/0022-5096(93)90013-6
[5] O. H. Yeoh, Some forms of the strain energy function for rubber, Rubber Chem Technol, 66, 1993, 754-771. doi: 10.5254/1.3538343
[6] R. W. Ogden, Non-Linear Elastic Deformations, Dover Publications, Mineola, N.Y., 1997, ISBN 978-0-486-69648-5.
Z. Chenbing, W. Xinpeng, L. Xiyao, Z. Suoping, W. Haitao, A small buoy for flux measurement in air-sea boundary layer, in Proceedings of the ICEMI 2017.
doi: 10.1109/icemi.2017.8265999
[7] J. Y. Sheng, L. Y. Zhang, B. Li, G. F. Wang, X. Q. Feng, Bulge test method for measuring the hyperelastic parameters of soft membranes, Acta Mech 228 (2017) 4187-4197. doi: 10.1007/s00707-017-1945-x

Figure 11. (a) Inflating pressure plot. (b) Trends of the strains from the 3D-DIC and epipolar geometry techniques.
Figure 12. Comparison between the Gaussian distributions of the dome apex height after creep from (a) laser measurements and the epipolar FFNN model and (b) laser measurements and the 3D-DIC FFNN model.

Table 1. Comparison between the epipolar and DIC FFNN models with respect to their own laser measurements. Measurement units in mm.

                    h_laser (epip. side) ± SD   h_laser (DIC side) ± SD   h_epip ± SD   h_DIC ± SD   Bias_epip   Bias_DIC
Initial inflation   27.1 ± 0.8                  25.9 ± 0.8                28.5 ± 1.4    26.9 ± 1.6   1.4         1.0
Creep               29.2 ± 0.9                  28.0 ± 0.8                30.1 ± 1.7    28.5 ± 1.4   0.9         0.5

[8] M. Koç, E. Billur, O. N. Cora, An experimental study on the comparative assessment of hydraulic bulge test analysis methods, Mater. Des. 32 (2011) 272-281. doi: 10.1016/j.matdes.2010.05.057
[9] M. K. Small, W. D. Nix, Analysis of the accuracy of the bulge test in determining the mechanical properties of thin films, J Mater Res, 7, 1992, 1553-1563. doi: 10.1557/jmr.1992.1553
[10] Dimarn, C. Thanadngarn, V. Buakaew, Y. Neamsup, Mechanical properties testing of sheet metal by hydraulic bulge test, Proceedings Vol. 9234, International Conference on Experimental Mechanics 2013 and Twelfth Asian Conference on Experimental Mechanics, 2014. doi: 10.1117/12.2054257
[11] T.
Tsakalakos, The bulge test: a comparison of theory and experiment for isotropic and anisotropic films, Thin Solid Films, 75, 1981, 293-305. doi: 10.1016/0040-6090(81)90407-7
[12] J. Diani, M. Brieu, J. M. Vacherand, A. Rezgui, Directional model for isotropic and anisotropic hyperelastic rubber-like materials, Mech Mater, 36, 2004, 313-321. doi: 10.1016/s0167-6636(03)00025-5
[13] M. Sasso, G. Palmieri, G. Chiappini, D. Amodio, Characterization of hyperelastic rubber-like materials by biaxial and uniaxial stretching tests based on optical methods, Polym Test, 27, 2008, 995-1004. doi: 10.1016/j.polymertesting.2008.09.001
[14] M. Calì, F. Lo Savio, Accurate 3D reconstruction of a rubber membrane inflated during a bulge test to evaluate anisotropy, Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, Springer, 2017, 1221-1231. doi: 10.1007/978-3-319-45781-9_122
[15] M. Sigvant, K. Mattiasson, H. Vegter, P. Thilderkvist, A viscous pressure bulge test for the determination of a plastic hardening curve and equibiaxial material data, Int J Mater Form, 2009, 2, 235-242. doi: 10.1007/s12289-009-0407-y
[16] L. Vitu, N. Laforge, P. Malécot, N. Boudeau, S. Manov, M. Milesi, Characterization of zinc alloy by sheet bulging test with analytical models and digital image correlation, AIP Conference Proceedings, 2018, Proceedings of the 21st International ESAFORM Conference on Material Forming, Palermo, Italy. doi: 10.1063/1.5035022
[17] C. Feichter, Z. Major, R. W. Lang, Methods for measuring biaxial deformation on rubber and polypropylene specimens, in: Gdoutos E. E. (ed.), Experimental Analysis of Nano and Engineering Materials and Structures, 2007, 273-274. doi: 10.1007/978-1-4020-6239-1_135
[18] J. Neggers, J. P. M. Hoefnagels, F. Hild, S. Roux, M. G. D. Geers, Global digital image correlation for pressure-deflected membranes, in G. A. Shaw, B. C. Prorok, & L.-V. A.
Starman (eds.), Proceedings of the 2012 Annual Conference on Experimental and Applied Mechanics, 6, 2012, 135-140. New York: Springer. doi: 10.1007/978-1-4614-4436-7_20
[19] F. Lo Savio, G. Capizzi, G. La Rosa, G. Lo Sciuto, Creep assessment in hyperelastic material by a 3D neural network reconstructor using bulge testing, Polymer Testing 63 (2017) 65-72. doi: 10.1016/j.polymertesting.2017.08.009
[20] T. Moons, L. Van Gool, M. Vergauwen, 3D reconstruction from multiple images, part 1: principles, Foundations and Trends® in Computer Graphics and Vision, vol. 4, no. 4 (2008) 287-398. doi: 10.1561/0600000007
[21] J. Li, Z. Miao, X. Liu, Y. Wan, 3D reconstruction based on stereovision and texture mapping, in Paparoditis N., Pierrot-Deseilligny M., Mallet C., Tournaire O. (eds), IAPRS, vol. XXXVIII, part 3B, Saint-Mandé, France, 2010.
[22] D. Alizzio, F. Lo Savio, M. Bonfanti, G. Garozzo, A new approach based on neural network for a 3D reconstruction of the dome of a bulge tested specimen, CEUR Workshop Proceedings ICYRIME 2020, 2768 (2020), pp. 54-58. Online [accessed 22 December 2021] http://hdl.handle.net/20.500.11769/496378
[23] Z. Zhang, A flexible new technique for camera calibration, IEEE T Pattern Anal, 22, 2000, pp. 1330-1334. doi: 10.1109/34.888718
[24] F. Lo Savio, G. La Rosa, M. Bonfanti, A new theoretical-experimental model deriving from the contactless measurement of the thickness of bulge-tested elastomeric samples, Polymer Testing 87 (2020), 106548. doi: 10.1016/j.polymertesting.2020.106548
[25] F. Lo Savio, M. Bonfanti, G. M. Grasso, D. Alizzio, An experimental apparatus to evaluate the non-linearity of the acoustoelastic effect in rubber-like materials, Polymer Testing 80 (2019), 106133. doi: 10.1016/j.polymertesting.2019.106133
[26] F. Lo Savio, M. Bonfanti, A novel device for measuring the ultrasonic wave velocity and the thickness of hyperelastic materials under quasi-static deformations, Polymer Testing 74 (2019), pp. 235-244.
doi: 10.1016/j.polymertesting.2019.01.005
[27] L. W. Peng, S. M. Shamsuddin, 3D object reconstruction and representation using neural networks, Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia 2004, Singapore, pp. 139-147. doi: 10.1145/988834.988859
[28] J. J. Moré, The Levenberg-Marquardt algorithm: implementation and theory, Numerical Analysis, LNM 630 (2006), pp. 105-116. doi: 10.1007/bfb0067700

Development of a method for the determination of the pressure balance piston fall rate
ACTA IMEKO
June 2014, Volume 3, Number 2, 44-47
www.imeko.org

Lovorka Grgec Bermanec 1, Davor Zvizdic 1, Vedran Simunovic 2
1 Faculty of Mechanical Engineering and Naval Architecture, Laboratory for Process Measurement, 10000 Zagreb, Croatia
2 Faculty of Mechanical Engineering and Naval Architecture, Laboratory for Length Measurement, 10000 Zagreb, Croatia

Section: Research paper
Keywords: pressure balance; fall rate measurement; Sobel filter
Citation: Lovorka
Grgec Bermanec, Marin Martinaga, Davor Zvizdic, Vedran Simunovic, Development of a method for the determination of the pressure balance piston fall rate, Acta IMEKO, vol. 3, no. 2, article 11, June 2014, identifier: IMEKO-ACTA-03 (2014)-02-11
Editor: Paolo Carbone, University of Perugia
Received April 15th, 2013; in final form June 22nd, 2013; published June 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: (none reported)
Corresponding author: Lovorka G. Bermanec, e-mail: lovorka.grgec@fsb.hr

1. Introduction

The determination of the pressure balance piston fall rate is important for several reasons. As an internal measure for quality assurance it indicates deformation or changes in the effective area [1], and it is used in the "cross-float" calibration of other pressure balances, where the fall rate obtained when the two balances are connected is compared with the natural fall rate. If the fall rates differ, small masses can be added to or subtracted from one of the pressure balances, and the measurements are repeated until the fall rates agree [2]. For the periodical determination of the LPM standard unit fall rates and for internal quality assurance it was necessary to develop a simple, efficient, repeatable and sufficiently precise method. Since there is no standard procedure for this measurement, there was no limitation in selecting the equipment. The piston rate of fall is usually determined with laser sensors or expensive optics. Further equipment that was also taken into consideration in this work comprises eddy current sensors and different cameras. A simple camera was chosen, and its measurement possibilities were analysed regarding accuracy, accessibility and price.

2.
Measurement method and calculation procedure

Measurements were performed with an amateur camera (a Nikon digital camera) equipped with appropriate lenses. A plane parallel gauge block with a thickness of 1.5 mm was used to relate relative motion to real displacement in millimetres. Before the measurement, while the piston was in the stand-up position, a snapshot of the standard gauge block was taken. The pictures were analysed using MATLAB software, which has built-in, predefined functions for various filters. In this measurement a Sobel filter was used; this filter is often applied for edge detection. Edge detection makes it possible to follow the relative movement of the pressure balance edges through consecutive pictures. After application of the Sobel filter, a simple method for transforming real thickness into pixel numbers was applied. From the number of pixels, the movement between pictures can be calculated and converted into millimetres. Two different results were obtained: for two consecutive measurements on the same gauge, the thickness of the standard gauge block was found to be 16 pixels and 15 pixels, respectively. This shows that a resolution of the described equipment of at least 0.1 mm was achieved. The interval between two consecutive pictures was somewhat longer than one minute, so an adequate number of pictures can be acquired to achieve good accuracy.

Abstract
This paper describes a laboratory method for the determination of the pressure balance piston fall rate using a simple camera-based optical system with internally developed software. Measurements were carried out on three standard piston/cylinder units in the Croatian national pressure laboratory (LPM) using gas and oil as transmitting medium. Measurement equipment, procedure and fall rate results for three sets of measurements are given, as well as an evaluation of the measurement uncertainty. Results were compared with other relevant measurements.

acta imeko | www.imeko.org june 2014 | volume 3 | number 2 | 44
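The edge-detection and pixel-to-millimetre conversion described in this section can be sketched with plain numpy; this is an illustrative sketch only (the function names and the synthetic 8 × 8 step image are mine, not from the paper), using the 1.5 mm gauge block imaged as 16 pixels and the 3 s picture interval used in the measurements:

```python
import numpy as np

# Vertical Sobel kernel: responds to horizontal edges (piston edge moving in y)
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_y(img):
    """Vertical Sobel gradient via direct 2-D sliding window (no SciPy)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * SOBEL_Y)
    return out

def mm_per_pixel(block_mm, block_px):
    """Scale factor from the gauge-block reference snapshot."""
    return block_mm / block_px

def fall_rate_mm_per_min(displacement_px, scale_mm_px, dt_s=3.0):
    """Convert per-frame pixel displacement into a fall rate in mm/min."""
    return displacement_px * scale_mm_px * 60.0 / dt_s

# 1.5 mm gauge block imaged as 16 pixels -> ~0.094 mm/pixel
scale = mm_per_pixel(1.5, 16)

# The edge of a synthetic dark/light step sits where the gradient peaks
img = np.zeros((8, 8))
img[4:, :] = 1.0
edge_row = np.argmax(np.abs(sobel_y(img)), axis=0)
```

With this scale, a one-pixel drop between frames 3 s apart corresponds to roughly 1.9 mm/min, which is the order of magnitude of the fall rates discussed below.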
For the analysis of each measurement result, picture intervals of 3 seconds were used, which means 20 measurement points per minute. The initial concept was to take 60 measurement points per minute in order to follow the movement of the pressure balance piston precisely; due to computer and camera limitations, however, the smallest possible interval between pictures with this equipment was 3 seconds. To avoid any contact with the camera, all adjustment parameters and the start of the photographing process were controlled by a computer connected to the camera. Every setting was adjusted through appropriate software that allowed camera control via a cable. In this way a fixed position of the camera was assured, which is critical for the applied measurement method. After all the photographs had been taken and passed through the Sobel filter (Figure 1), a series of 20 pictures for each measurement was obtained, with information about the relative movement in pixels. To compensate for accidental movement of the camera or imperfections of the edges visible in the pictures, the x-axis was kept constant. In this way, possible distortions of the pictures are constant over the whole y-axis movement, avoiding errors. The relative motion in pixels along the y-axis was converted into mm for every two consecutive pictures. Measurements were performed on three different effective areas of the piston/cylinder units, including oil and gas pressure balances (Figure 2). The oil pressure balance was designed by Budenberg, with a double piston for 600 bar and 60 bar loads (Figure 3 and Figure 4). The gas pressure balance used in this work was a DHI PG7601 with a 3 bar load. The DHI standard pressure balance is equipped with an internal fall rate sensor, so we had the opportunity to compare the results obtained with the proposed method against those obtained from the DHI pressure balance. The fall rate of the oil unit was compared with the last calibration certificates from the Physikalisch-Technische Bundesanstalt (PTB).

3.
Fall rate results and measurement uncertainty evaluation

In this section the results for the three standard piston/cylinder units, as well as the measurement uncertainty evaluation, are presented. The fall rate measurement uncertainty, u_f, was evaluated as a type B uncertainty [3], taking into account the gauge block uncertainty, the camera resolution and the time measurement as the major influence quantities:

u_f = √(u_g² + u_r² + u_t²) ,    (1)

where:
u_g - uncertainty of the plane parallel gauge block
u_r - uncertainty due to resolution
u_t - uncertainty due to the time measurement

Figure 1. Computer image of the pressure balance and the plane parallel gauge block with the Sobel filter applied.
Figure 2. Determination of the oil piston fall rate at 600 bar load.
Figure 3. Start of the measurement (first photograph) of the Budenberg pressure balance with 600 bar load.
Figure 4. Start of the measurement (computer image) of the Budenberg pressure balance with 600 bar load after application of the Sobel filter.

3.1. Oil operated system up to 600 bar

The measurements performed on the Budenberg standard pressure balance with 600 bar load showed a high rate of fall, as expected; this was clearly visible without any equipment. The results can be compared with those given in the calibration certificate obtained from the PTB, which were (3.0 ± 0.5) mm/min. The results for the first LPM unit are given in Table 1. From these results it can be seen that the fall rate is too large for a pressure balance classified in the accuracy class of 0.02: the maximum piston fall rate defined in [1] is 1.5 mm/min. The uncertainty estimation is given in Table 2 only for the first piston/cylinder unit, although it was calculated for each unit separately with a different sensitivity coefficient for the time measurement.

3.2.
Oil operated system up to 60 bar

The second measurement was performed on the same Budenberg oil unit, but using the low pressure range up to a 60 bar maximum load. The results from the PTB for this unit were (0.26 ± 0.10) mm/min. The results obtained from the picture analysis are shown in Table 3; good agreement between the results of PTB and LPM can be observed.

Table 1. Determination of the oil piston fall rate at 600 bar load.
Table 3. Determination of the oil piston fall rate at 60 bar load.
Table 4. Determination of the DHI PG7601 gas piston fall rate at 3 bar load.

Table 2. Fall rate uncertainty evaluation.

Influence quantity           Uncertainty of the     Factor   Sensitivity    Contribution to the
                             influence quantity              coefficient    standard uncertainty
Plane parallel gauge block   0.1 µm                 √3       1              0.06 µm
Resolution                   0.1 mm                 √3       1              57.8 µm
Time                         0.5 s                  √3       0.06 mm/s      17.3 µm
Fall rate uncertainty u_f                                                   60 µm
Expanded fall rate measurement uncertainty (k = 2), U_f = 2·u_f             0.12 mm

3.3. Gas operated standard system up to 3 bar

The third set of measurements was performed on the gas operated DHI pressure balance with a maximum load of 3 bar. This unit is equipped with an internal fall rate sensor, so all the results were compared directly. The results obtained after the picture analyses are shown in Table 4. In this measurement, the relative movement was 0.094 mm for one pixel, and the internal fall rate sensor has a precision of 0.1 mm; this prevented a direct comparison. As can be seen from the results in Table 4, the pressure balance fall rate is 0.56 mm/min, and the internal fall rate sensor changed its value from 0.5 mm to 0.6 mm after one minute and three seconds. The maximum piston fall rate for gas operated systems according to the OIML document is 1 mm/min.
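The type B budget of equation (1) and Table 2 can be reproduced numerically; a minimal sketch, assuming a rectangular distribution (divisor √3, the "factor" column of Table 2) for all three contributions, with everything converted to micrometres:

```python
from math import sqrt

def std_u(half_width, divisor=sqrt(3.0)):
    """Type B standard uncertainty for a rectangular distribution."""
    return half_width / divisor

# Contributions from Table 2, all in micrometres
u_g = std_u(0.1)                  # gauge block: 0.1 um
u_r = std_u(0.1e3)                # resolution: 0.1 mm -> 100 um
u_t = std_u(0.5) * 0.06e3         # time: 0.5 s, sensitivity 0.06 mm/s -> um

u_f = sqrt(u_g**2 + u_r**2 + u_t**2)   # combined, equation (1): ~60 um
U_f = 2 * u_f                          # expanded (k = 2): ~0.12 mm
```

The resolution term dominates the budget by a wide margin, which is consistent with the 0.1 mm per-pixel resolution being the limiting factor of the method.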
Comparing the results in all three cases with the results from the calibration certificates, as well as with the internal fall rate sensor of the DHI pressure balance, it can be concluded that the new method is sufficiently accurate for further development.

4. Conclusions

An internal laboratory method for the determination of the fall rate was developed at the LPM, with a target uncertainty of 0.1 mm/min, using a camera-based optical system. The advantages of the proposed method are its simple and cheap measurement equipment. The measurement results obtained with the proposed method show good agreement with other relevant measurements. Its disadvantages lie in the choice of lenses. Further development of the method focuses on the automation of the measurements.

References
[1] OIML Regulation R110, edition 1994 (E), Pressure balances, Paris: Organisation Internationale de Métrologie Légale, 1994.
[2] Dadson R. S., Lewis S. L., Peggs G. N., The Pressure Balance: Theory and Practice, ed. 1, HMSO, London, 1982.
[3] ISO, Guide to the Expression of Uncertainty in Measurement, Geneva: ISO, 1995.

AREL - Augmented Reality-based Enriched Learning experience
ACTA IMEKO
ISSN: 2221-870X
September 2022, Volume 11, Number 3, 1-5
acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 1

A. V. Geetha 1, T. Mala 2
1 Research Scholar, Department of Information Science and Technology, College of Engineering, Anna University, India
2 Associate Professor, Department of Information Science and Technology, College of Engineering, Anna University, India

Section: Research paper
Keywords: augmented reality; learning technologies; education; Vuforia
Citation: A. V. Geetha, T. Mala, AREL - Augmented Reality-based Enriched Learning experience, Acta IMEKO, vol. 11, no.
3, article 12, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-12
Section Editor: Zafar Taqvi, USA
Received March 30, 2022; in final form July 30, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: A. V. Geetha, e-mail: geethu15@gmail.com

1. Introduction

Within a few months, the pandemic of coronavirus illness 2019 (COVID-19), caused by the novel virus SARS-CoV-2, forced enormous changes in the way businesses and other sectors operate. According to the World Economic Forum, 1.2 billion children in 186 countries were affected by school closures as of March 2021 [1]. Moreover, new waves of cases in several regions of the world impacted the return towards normalcy; herd immunity and vaccines provide only temporary relief to regions affected by the new virus strains. Thus, online learning has evolved into a viable alternative to traditional classroom-based learning, with instruction delivered remotely on digital platforms. Even before the COVID era, there was a steady increase in the growth rate and adoption of technology in education: according to GlobeNewswire, it is estimated that the online education market will reach $350 billion by 2025. In addition, in response to COVID-19, several online education platforms, such as DingTalk, have scaled their cloud services by more than 100,000 servers [2]. Augmented and virtual reality (AR/VR), an emerging technology trend, can improve the online learning experience by increasing engagement and retention. VR headsets such as Google Cardboard (GC) have made the technology accessible to most of the world's population.
Abstract
In the present era, teaching occurs either on a chalkboard or through a PowerPoint presentation projected on the wall. Traditional teaching methods such as blackboards and PowerPoint presentations are being phased out in favour of enriched learning experiences provided by emerging EdTech. With the closure of schools due to COVID-19, the demand for online educational platforms has also increased. Furthermore, some of the recent trends in EdTech include personalised learning, gamification and immersive learning with extended reality (XR) technologies. Due to its immersive experience, XR is a pioneering technology in education, with multiple benefits including greater motivation, a positive attitude toward learning, concrete learning of abstract concepts, and so on. Existing augmented reality (AR) based education applications often rely on unimodal input, such as a marker-based trigger, to launch the educational content. Hence, this work proposes a multi-modal interface that enables content delivery through both marker-based and speech-recognition-based triggers. Additionally, the proposed work is designed as a mobile-based AR platform with regional language support to increase the ubiquitous accessibility of the AR content. Thus, the proposed mobile AR-based enriched learning (AREL) platform provides a multimodal, mobile-based educational AR platform for primary students. Based on the feedback received after usage, it is observed that AREL improves the learning experience of the students.

acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 2

VR- and AR-based online learning platforms offer experiential learning, where students learn through experience rather than through traditional methods such as rote learning. Some of the benefits of experiential learning are accelerated learning, engagement and easier understanding of complex concepts. Traditional educational methods are progressively becoming digital due to technological advancements. Mobile augmented reality (AR) is the superimposition of virtual objects over reality. AR is widely used in many fields such as manufacturing, robotics, behavioural treatment, aircraft engineering design and so on [3]. These technologies belong to the spectrum of extended reality (XR), which refers to the spectrum of real-to-virtual environments. XR has gradually evolved in the realm of education, revolutionising pedagogical practices as well as students' educational experiences by facilitating the understanding of complicated aspects of education through the visual depiction of images based on real-world data [4]. Specifically, AR and VR have been extensively used in many educational applications. For example, in [5] a solar system mobile app is designed to test the knowledge retention of college students. In [6], a collaborative augmented reality system is used to deliver geometrical concepts in mathematics, which are abstract and can be easily illustrated using an AR platform; as a result, AR aids in the comprehension of challenging concepts of the learning module. In [7], a gesture-based AR system is presented to aid in understanding the anatomical structures of the human body. [8] focuses on the concept of comic-based education through markerless AR for improving the metacognitive abilities of students. Thus, there is a variety of approaches for designing educational apps based on AR and VR, such as head-mounted displays, markerless systems and gesture-based systems. In addition, mobile AR systems are effective because they allow portability and easy access. Therefore, the proposed AREL system is a mobile AR-based learning platform in which students can scan the contents of their books to discover videos that appear magically over the pages, transforming a plain textbook into a book with dynamic information.
Furthermore, AREL delivers the contents in the regional language of the students to improve engagement with the app. AREL is made up of a collection of modules, such as the speech recognition system and the image tracking and registration module, that take advantage of mobile sensors and computational power. The application is developed using the Unity engine and the Vuforia SDK. The mobile application interfaces with the Vuforia cloud target recognition system via a client-server architecture. To grab students' attention and increase their learning experience, the content in the book is enriched with augmented graphics, animations and other edutainment features. The mobile camera is linked to the scene's virtual camera as soon as the program is launched. Once the target image is recognised with the help of the Vuforia image database, using pattern recognition algorithms, the corresponding output view is rendered on the display unit. As a result, the output view consists of virtual objects laid over the real-time objects. The triggered audio output explains the concepts that are scanned, which in turn improves the learning experience. The learning module also includes multiple-choice questions on the contents taught, to assess the understanding of the learned material. Based on the feedback received after usage, it is observed that AREL positively improves the learning experience.

2. Methodology

AREL is designed as a multi-modal AR interface that delivers mobile-based learning to students. Based on a survey of the literature related to AR-based educational platform design, it is observed that AR-based learning platforms improve engagement and the comprehension of difficult concepts. AR also provides an experiential learning experience, rather than traditional methods such as rote learning or instructor-led learning. AREL is designed as a mobile-based AR system to increase the accessibility of AR-based learning for students.
thus, arel complements and improves the online learning solutions or protocols developed during the covid era.
2.1. development model
arel is developed using the rapid application development (rad) model, as it accelerates system development. the rad model is appropriate when the product development time is short and the project requires high component reusability and modularity. figure 1 illustrates the rad model followed in this work. the rad model involves four development phases, briefly explained as follows:
• requirement gathering: in this phase, the objectives and requirements for the product are gathered based on a technical review. thus, this phase aids in the understanding of the project goals and expectations.
• user description: to design a component, the developer collects the description of the component design from its user. based on this description, a prototype of the application is developed, which is then reviewed, and the design is updated.
• implementation: in this phase, the developer implements the requirements and performs testing on the product. for a typical ar application, this phase involves ui interface design, creation of 3d objects, and coding and testing of the product.
• evaluation: once the product is completely developed and tested, it is evaluated to check whether the user expectations are met. once the product is successfully evaluated, the project reaches the users.
2.2. software used
• unity: unity is a game development engine used to create games for 3d environments such as vr and ar. unity supports scripting through c# [9]. applications developed using unity can be exported to platforms such as ios and android, or desktop platforms like windows. unity provides a comprehensive framework for adding interactive animations, audio, and physics-based logical simulations for natural and close-to-real interactions. therefore, arel is designed using the unity engine.
• vuforia software development kit (sdk): vuforia is an sdk supported by unity that enables the creation of ar applications for mobile [10]. it uses computer vision-based technologies to track image targets, object targets, or area targets for marker-based ar applications. upon recognition, the virtual object is placed relative to the marker and the virtual camera position. vuforia is integrated into the unity engine for developing the ar concepts of arel.
figure 1. rad development model.
• google speech-to-text api: the google speech-to-text api [11] enables the integration of speech recognition into a variety of applications. upon receiving a voice audio sample, the service returns a transcript of the audio. it uses sophisticated deep learning models, ranging from long short-term memory recurrent neural networks to sophisticated speech recognition algorithms, to perform accurate recognition.
3. augmented reality based enriched learning
the objectives of the arel system are as follows:
• to design a mobile-based ar system which increases the accessibility of ar-based learning for students
• to design an ar platform that can act as a teaching aid for the students
• to complement and improve the online experience through ar
• to provide a multi-modal interface and regional language support.
the objectives are achieved in arel through its multi-modal content delivery methods. arel consists of two modes of content delivery: a) image target-based content delivery and b) speech-to-text based content delivery. this is illustrated in figure 2, where the application receives input from the camera and microphone to deliver the content via ar.
3.1. image target-based content delivery
the ar interface of the system is developed using the vuforia sdk and uses its image target database system for processing the image targets.
the image targets from the children's textbook are created and processed in the target database system of vuforia. they are then integrated with the application through the unity engine. upon scanning these image targets, the virtual camera performs image recognition based on the features available in the target database. once a matching image target is found, the relevant content is displayed as ar content. figure 3 illustrates the image target-based content delivery. the output view consists of video content or 3d objects (with audio description) laid out over the real-time objects.
3.2. speech-to-text based content delivery
arel also supports speech-to-text based content delivery. the audio samples received from the microphone are preprocessed using noise cancellation, and the speech input is sent to the speech-to-text service. speech processing involves several steps, including analysis, feature extraction, modelling, and testing. the feature extraction process extracts unique features of the audio using the mel frequency cepstral coefficients (mfcc) technique. upon recognizing the sample, the speech input is converted to text. if the text matches any speech command, the appropriate ar content is displayed. the detailed steps for this content delivery are as follows:
1) record a short audio clip from the user's microphone
2) convert the audio into wav format
3) upload the file to the google server
4) once the uploaded file is processed, receive the output as a json file
5) process the json file; if the text matches a command, the corresponding number is displayed.
figure 4 depicts the working of the speech-to-text based content delivery. algorithm 1 depicts the process involved in the speech-to-text based content delivery.
algorithm 1: speech-to-text based content delivery
input: microphone audio (a)
output: ar content based on speech
1: C := command_words
2: w := convert_audio_to_wav(a)
3: text_from_speech := speech_to_text_api(w)
4: if text_from_speech ∈ C then
5:   command := text_from_speech ∩ C
6:   load_content(command)
7: end if
4. results and discussion
arel provides multi-modal audio-visual ar-based learning with regional language support. to evaluate the usability and the learning experience of the arel system, an observation study of the prototype system was conducted in a primary school in chennai, india. the participants of this study range from 5 to 8 years of age. the students were instructed on how to use the app over their physical book. during the experiment, the children were asked to try both the speech-based mode of learning and the image target-based mode. screenshots of the image target-based content delivery are shown in figure 5. screenshots of the speech-based content delivery are illustrated in figure 6. during the analysis of the arel experiment, the children were tested on the concepts presented. it is observed that after the usage of arel, the children improved and answered correctly.
figure 2. architecture of the arel system.
figure 3. image target-based content delivery.
after the experiment, the parents of the children were asked to provide feedback on usability, learning retention, learning engagement, and overall experience. the results of the survey are aggregated and tabulated, as shown in figure 7, where 1 represents the lowest rating, such as an unusable app, poor learning retention, or improper learning engagement. an overall experience of 10 represents a user-friendly design with high scores on the learning-related parameters. the average usability rating of the app is 7, which represents the user-friendliness score of the application.
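algorithm 1 can be sketched in python. the speech-to-text step is injected as a function argument so the command-matching logic stays testable offline; the response shape assumed below (the typical google cloud speech-to-text json layout) and the command words are assumptions for the sketch, not taken from the paper:

```python
def extract_transcript(response):
    """pull the best transcript out of a speech-to-text json response.

    assumes the common {"results": [{"alternatives": [{"transcript": ...}]}]}
    shape of the google cloud speech-to-text api; returns "" if empty.
    """
    results = response.get("results", [])
    if not results:
        return ""
    return results[0]["alternatives"][0]["transcript"]

def deliver_content(audio_wav, speech_to_text, command_words, load_content):
    """mirror of algorithm 1: transcribe audio and trigger matching ar content."""
    transcript = extract_transcript(speech_to_text(audio_wav))
    matched = set(transcript.lower().split()) & command_words  # text_from_speech ∩ C
    for command in sorted(matched):
        load_content(command)
    return sorted(matched)
```

in the real app, speech_to_text would call the cloud api and load_content would render the ar scene; here both are stand-ins so the intersection logic of lines 4-6 can be exercised directly.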
as the application supports both voice interaction and image-based interaction, it aids the children in better exploring their book. the children were excited to see the virtual content appearing in real time over their book. according to the results of the experiment, the multi-modal user interface with image-target and voice-based interactions, as well as the augmented reality display integrating real and virtual items, functions as a natural immersive experience for children. as the children tried out the various interactions with the same learnable content, learning retention improved, as indicated by the tabulated score in figure 7. the children expressed a strong willingness to explore the application, indicating that it might be used as a fun and engaging learning tool. this is also indicated by an average learning engagement score of 8.33 from the survey. the overall experience score of the mobile ar platform is 8, which indicates that the learning experience and usability experience of the children are positive and improved.
5. conclusions
the development and evaluation of a mobile ar-based enriched learning experience for learning language and math are reported in this work. one of the benefits of an ar learning experience over a standard book is that other intriguing aspects like animation, virtual objects, sound, and video may be included while the physical book is still present. the findings of the study suggest that the existence of such aspects during the learning process generates excitement, learning engagement, and enjoyment. the findings are supported by the answers to our survey questions to the parents of the children. the findings also show that the multi-modal interface of real and virtual things provides a natural immersive experience as well as an engaging and exciting learning tool for this age range. however, after repeated usage of the same book, children may become bored with arel if they can guess what things will appear.
therefore, as part of future work, including a surprise aspect in the application could make it more enjoyable and engaging. while each image target-based marker has multiple types of visual content that could be presented, randomising the presentation of such content could surprise the child and can be included as a future enhancement. furthermore, learning analytics of user engagement and learning retention can be utilised to evaluate the user experience, and personalised content for each student will be applied in future work.
figure 4. speech-to-text based content delivery.
figure 5. screenshots from image target-based content delivery.
figure 6. screenshots from speech-to-text based content delivery.
figure 7. feedback for arel.
references
[1] the rise of online learning during the covid-19 pandemic | world economic forum. online [accessed 16 august 2022] https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/
[2] online education market study 2019 | world market projected. online [accessed 16 august 2022] https://www.globenewswire.com/news-release/2019/12/17/1961785/0/en/online-education-market-study-2019-world-market-projected-to-reach-350-billion-by-2025-dominated-by-the-united-states-and-china.html
[3] r. azuma, y. baillot, r. behringer, s. feiner, s.
julier, b. macintyre, recent advances in augmented reality, ieee comput. graph. appl., vol. 21, no. 6, nov. 2001, pp. 34-47. doi: 10.1109/38.963459
[4] s. alvarado, w. gonzalez, t. guarda, augmented reality 'another level of education', iber. conf. inf. syst. technol. (cisti), vol. 2018-june, 2018, pp. 1-5. doi: 10.23919/cisti.2018.8399331
[5] k. t. huang, c. ball, j. francis, r. ratan, j. boumis, j. fordham, augmented versus virtual reality in education: an exploratory study examining science knowledge retention when using augmented reality/virtual reality mobile applications, cyberpsychology, behav. soc. netw., vol. 22, no. 2, feb. 2019, pp. 105-110. doi: 10.1089/cyber.2018.0150
[6] h. kaufmann, d. schmalstieg, mathematics and geometry education with collaborative augmented reality, computers & graphics, vol. 27, no. 3, 2003, pp. 339-345. doi: 10.1016/s0097-8493(03)00028-1
[7] f. bernard, c. gallet, h. d. fournier, l. laccoureye, p. h. roche, l. troude, toward the development of 3-dimensional virtual reality video tutorials in the french neurosurgical residency program. example of the combined petrosal approach in the french college of neurosurgery, neurochirurgie, vol. 65, no. 4, aug. 2019, pp. 152-157. doi: 10.1016/j.neuchi.2019.04.004
[8] a. m. nidhom, a. a. smaragdina, k. n. gres dyah, b. n. r. p. andika, c. p. setiadi, j. m. yunos, markerless augmented reality (mar) through learning comics to improve student metacognitive ability, iceeie 2019 int. conf. electr. electron. inf. eng., oct. 2019, pp. 201-205. doi: 10.1109/iceeie47180.2019.8981411
[9] unity real-time development platform | 3d, 2d vr & ar engine. online [accessed 16 august 2022] https://unity.com/
[10] vuforia developer portal. online [accessed 16 august 2022] https://developer.vuforia.com/
[11] quickstart: transcribe speech to text by using client libraries | cloud speech-to-text documentation | google cloud.
online [accessed 16 august 2022] https://cloud.google.com/speech-to-text/docs/transcribe-client-libraries#before-you-begin
a case study on providing fair and metrologically traceable data sets
acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, pp. 1-6
tanja dorst1, maximilian gruber2, anupam p. vedurmudi2, daniel hutzschenreuter3, sascha eichstädt2, andreas schütze1
1 lab for measurement technology, saarland university, campus a5 1, 66123 saarbrücken, germany
2 physikalisch-technische bundesanstalt, abbestraße 2-12, 10587 berlin, germany
3 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
section: research paper
keywords: data set; fair digital objects; traceability; digital si; research data management
citation: tanja dorst, maximilian gruber, anupam p. vedurmudi, daniel hutzschenreuter, sascha eichstädt, andreas schütze, a case study on providing fair and metrologically traceable data sets, acta imeko, vol. 12, no.
1, article 5, march 2023, identifier: imeko-acta-12 (2023)-01-05
section editor: daniel hutzschenreuter, ptb, germany
received november 17, 2022; in final form february 14, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: research for this paper has received funding within the projects 17ind12 met4fof and 17ind02 smartcom from the empir program co-financed by the participating states and from the european union's horizon 2020 research and innovation program and from the federal ministry of education and research (bmbf) project famous (01is18078).
corresponding author: tanja dorst, e-mail: t.dorst@lmt.uni-saarland.de
1. introduction
in 2016, the fair principles (and their 15 subprinciples), which provide guidelines to improve the findability, accessibility, interoperability, and reusability of digital resources such as code, data sets, research objects, and workflows, were published [1]. providing data in a fair way contributes to the data curation process, which makes it possible to maintain a high value of data over extended periods of time [2]. given the widespread and increasing advancement of digitalization, the main focus of the guideline is on machine-readability, i.e., the ability of computers to find, access, (inter)operate with, and reuse data with minimal or no human intervention. the fair principles only describe attributes and behaviors; however, they do not provide strict rules to achieve the desired fairness for digital resources, i.e., they only serve as a framework for the sustainability of data. fair and metrologically sound data are a basis for the exchange of digital measurement values in research and industry [3].
thousands of petabytes of data collected every year cannot be used to their full potential, as the data are often not fair-compliant [4]. apart from the pure fair framework, the interoperability of measurement data requires an unambiguous machine-readable provision of its metrological properties. essential metadata are units of measurement, types of physical quantities, and, where appropriate, traceability to measurement standards in the form of measurement uncertainty provided by calibration. in this contribution, an existing data set of a lifetime test of an electromechanical cylinder (emc) is restructured and extended with metadata in a manner that conforms to the fair principles and addresses metrological properties by application of the d-si metadata model [5].
abstract: in recent years, data science and engineering have faced many challenges concerning the increasing amount of data. in order to ensure findability, accessibility, interoperability, and reusability (fairness) of digital resources, digital objects as a synthesis of data and metadata with persistent and unique identifiers should be used. in this context, the fair data principles formulate requirements that research data and, ideally, also industrial data should fulfill to make full use of them, particularly when machine learning or other data-driven methods are under consideration. in this contribution, the process of providing scientific data of an industrial testbed in a traceable and fair manner is documented as an example.
2. use case
the test bed used (see figure 1) consists of an emc as device under test (dut) and a pneumatic cylinder, which simulates an axial load on the dut [6]. typically, more than 500,000 working cycles of duration 2.8 s are executed, each consisting of a forward stroke, a waiting time, and a return stroke.
data are recorded with the help of two different data acquisition (daq) systems. the zema daq acquires data from eleven different sensors during each cycle: three motor current sensors, one microphone, three accelerometers (at the plain bearing, the ball bearing, and the piston rod), and four process sensors (axial force, velocity, pneumatic pressure, and active current of the emc motor) with different sampling rates are used in the test bed. in addition, the smartup unit (suu) (see figure 1) continuously records global navigation satellite system (gnss) timestamped data with three micro-electro-mechanical systems (mems) sensors [7]: a 9-axis inertial measurement unit, a 3-axis accelerometer (for comparison reasons [8]), and a combined pressure and temperature sensor. the only link between the two data acquisition systems is a trigger signal of the zema daq, indicating the start of a working cycle. this trigger signal is recorded with the suu to enable the alignment of both data sets. the aligned data set used in this publication was acquired during a lifetime test of an emc executed in april 2021, which lasted approx. 16.5 days. the data format of this aligned data set is suitable for the use of an automated toolbox for statistical machine learning [9]. it consists of numerical measurement values which can be associated with a corresponding unit of the "système international d'unités" (si) [10].
3. achieving fair data
the creation of a fair data set for the given use case can be broken down into different aspects, each with a specific contribution to one or more of the fair principles. this is also summarized by the blue (contributes) or light grey (does not contribute) symbols next to the headings of the following sections. the justification of this summary is given by the detailed evaluation provided in table 1 and further discussed in section 4.
3.1.
open and known formats
the use of common, well-described, and open (source) data formats provides the foundation of a fair data set [11]. because the selected use case consists of large amounts of numerical data, the hierarchical data format version 5 (hdf5) is chosen. it supports the structuring of numerical data in a filesystem-like hierarchy and allows annotations, which align metadata with a description of the numerical data, to be added to every node in this hierarchy. the annotation possibilities of hdf5 are utilized by storing relatively short strings in the data-interchange format javascript object notation (json), which represent dictionary entries of key-value pairs corresponding to the specific node. the structure of the corresponding hdf5 file is shown in figure 2. it integrates two main parts corresponding to the two different measurement systems, each consisting of groups representing sensors and/or physical quantities. as the zema daq is based on sensors that each measure only a single physical quantity, no further hdf5 group is needed in this case. in contrast, the suu sensors each measure more than one physical quantity; thus, a further hdf5 group is inserted, which has the name of the corresponding sensor (bma_280 in this example). open and known formats do not contribute to the findability of the data set, as this choice only concerns the structure of the data, not its "external visibility".
3.2. semantic descriptors
semantically expressive concepts used to describe the metadata are an important step towards not only machine-readable but machine-interpretable data. to our knowledge, no single metadata scheme provides all the necessary concepts required for the present use case. therefore, different ontologies and knowledge representations are used to describe specific aspects of the data set:
figure 1. hdf5 file (grey) structure containing groups (green), data sets (orange) and associated metadata (blue).
figure 2.
smartup unit (small picture) used together with the zema daq for data acquisition of the testbed for an emc lifetime test (big picture).
• digital system of units (d-si) [5],
• dublin core (dc) [12],
• quantity, unit, dimension and type (qudt) [13],
• resource description framework (rdf) [14], and
• sensor, observation, sample, and actuator (sosa) [15].
by using existing and semantically enriched knowledge representations as a basis, researchers, organizations, institutions, machines, or algorithms are enabled to understand the data set without the expert knowledge of its initial creator. once the meaning of the data is clear, the selection of applications for analysis and processing can become more informed. semantic descriptors do not contribute to the findability of the data set, as they are only used inside the data set to describe the different data, hence not contributing to the "external visibility".
3.3. top level metadata
one of the most important steps preceding the (re)use of appropriate data is to find them. with the help of machine-readable top level metadata, both humans and computers can easily find the digital resource. the top level metadata are split into four groups with several subproperties. where appropriate, identifiers from the dublin core ontology [12] are used and specified by the dc: prefix. in the group "project", general information is stored about the project in which the data set was generated. the creators of the data set are listed in the group "person". in the "publication" group, the doi and the license can be found, among other information. to assign the data set to a unique emc test, the "experiment" group provides information about the specific test.
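the annotation idea from the sections above (short json strings attached to nodes of the hierarchy) can be sketched without an hdf5 dependency by keeping a dict of node paths to json-encoded metadata; in the real data set these strings are stored as hdf5 attributes. the keys follow the ontologies listed above, but the node path and concrete values below are illustrative assumptions, not taken from the published data set:

```python
import json

# node path -> json string, mimicking hdf5 attribute annotation
annotations = {}

def annotate(node_path, metadata):
    """attach a json-serialised metadata dict to a node of the hierarchy."""
    annotations[node_path] = json.dumps(metadata)

# illustrative values only; the unit string follows the d-si notation style
annotate(
    "/suu/bma_280/acceleration",
    {
        "si:unit": "\\metre\\per\\second\\tothe{2}",
        "sosa:madebysensor": "bma_280",
        "qudt:standarduncertainty": "absolute",
    },
)

# any consumer can recover the machine-readable description of the node
meta = json.loads(annotations["/suu/bma_280/acceleration"])
```

because the stored value is plain json, the same round trip works whether the string lives in a python dict, an hdf5 attribute, or any other key-value store.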
the following list is not complete but provides suggestions for the properties of the four different groups as used in the emc data set:
• project: fulltitle, acronym, websitelink, fundingsource, fundingadministrator, acknowledgementtext, funding programme, fundingnumber
• person: dc:author, e-mail, affiliation
• publication: dc:identifier, dc:license, dc:title, dc:type, dc:description, dc:subject, dc:sizeorduration, dc:issued, dc:bibliographiccitation
• experiment: date, dut, identifier, label
top level metadata in json are added, as shown in listing 1. in this contribution, only the most important top level metadata are shown. the complete list of all top level metadata for this emc data set is included in the published data set itself [16]. top level metadata are primarily used to find and categorize the data set on a rough level.
3.4. publication
in general, several aspects need to be considered: 1) choosing a suitable publishing license and an online access repository; 2) making the data set findable under a persistent identifier; and 3) providing example code to access and reuse the data set.
3.4.1. publishing license
the selection of a suitable publishing license enables the reuse of existing data not only by the creators but also by other researchers. in general, the license defines who can reuse the data for which purposes and how the usage should be reported. guidance for such a decision is given, e.g., in [17]. for the emc data set, a creative commons attribution 4.0 international (cc-by-4.0) license is chosen to maximize reusability, as it places no restrictions on any entities using the data as long as the original creators are credited.
3.4.2. online access repository
the finalized data set is published at the online service zenodo [18], which specializes in open science. other platforms with a similar scope also exist, e.g., the "open access repository of the physikalisch-technische bundesanstalt" (ptb-oar) [19].
only publication in suitable archives with searchable metadata ensures that others are able to find and access the created data set.
3.4.3. persistent identifier
to make the data set addressable under a globally unique and persistent identifier, a digital object identifier (doi) is generated within the zenodo service [18]. the doi is a standard (iso 26324:2012) widely used in the scientific community to resolve an identifier to the current storage (web)site [20]. even if the data are no longer available online, the doi still resolves to some information about the data set and its repository.
3.4.4. example access code
alongside the publication at an online service, basic scripts for the emc data set in python and matlab® are provided to facilitate the file opening.
3.5. quantity, unit and sensor metadata
at its core, the data set provides numerical measurement data from different sensors. to foster a clear understanding of the numerical data, it is required to add metadata with suitable machine-readable descriptions of the underlying quantities, units, and sensors. each group of actual measurement data (the "leaf-folders" of figure 1) is associated with its own specific metadata. in listing 2, the json representation of this metadata is shown for two exemplary quantities of this data set. according to the recommendations from the d-si metadata model, si:unit introduces a mandatory machine-readable definition of the unit of measurement based on the si system. elements from the qudt ontology add information and semantics describing the measured data (e.g., qudt:value), and
listing 1: json code for the most important top level metadata.
"project": {
    "fulltitle": "metrology for the factory of the future",
    "funding programme": "empir",
    "fundingnumber": "17ind12"
},
"person": {
    "dc:author": ["tanja dorst", "maximilian gruber", "anupam prasad vedurmudi"],
    "e-mail": ["t.dorst@zema.de", "maximilian.gruber@ptb.de", "anupam.vedurmudi@ptb.de"],
    "affiliation": ["zema ggmbh", "physikalisch-technische bundesanstalt", "physikalisch-technische bundesanstalt"]
},
"publication": {
    "dc:identifier": "10.5281/zenodo.5185953",
    "dc:license": "creative commons attribution 4.0 international (cc-by-4.0)",
    "dc:title": "sensor data set of one electromechanical cylinder at zema testbed (zema daq and smart-up unit)",
    "dc:subject": ["measurement uncertainty", "sensor network", "mems"],
    "dc:sizeorduration": "24 sensors, 4776 cycles and 2000 datapoints each"
},
"experiment": {
    "date": "2021-03-29/2021-04-15",
    "dut": "festo esbf cylinder",
    "identifier": "axis11"
}
the type of associated measurement uncertainty (qudt:standarduncertainty). sosa provides a relation to the sensors creating the data (sosa:madebysensor). quantity, unit, and sensor metadata do not contribute to the findability and accessibility of the data set, as this choice only concerns the "internal visibility".
3.6. traceable measurement data
metrological traceability is defined as the relation of a measurement value to a reference through a chain of calibrations [21]. in practice, this is achieved by quantifying the measurement uncertainty of this measurement value. the actual numerical measurement data for a single quantity are provided by two multidimensional arrays of equal dimensions. one array, qudt:value, stores the measured values in the unit specified by the metadata. another array, qudt:standarduncertainty, stores the standard uncertainty of the measured values in the same unit.
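the paired value/uncertainty layout can be sketched with numpy; the numbers below are illustrative, and the propagation shown (mean of uncorrelated values, gum-style) is just one example of what the paired arrays enable, not a step taken from the paper:

```python
import numpy as np

# illustrative paired arrays for one quantity (unit given by the metadata)
qudt_value = np.array([9.79, 9.82, 9.81])
qudt_standard_uncertainty = np.array([0.02, 0.02, 0.02])

# both arrays must share the same dimensions, as in the data set
assert qudt_value.shape == qudt_standard_uncertainty.shape

# example use: mean value with uncorrelated gum-style uncertainty propagation
mean = qudt_value.mean()
u_mean = np.sqrt(np.sum(qudt_standard_uncertainty**2)) / qudt_value.size
```

keeping values and standard uncertainties in lockstep arrays means every downstream computation can propagate uncertainty element by element instead of discarding it.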
the uncertainty information can be derived from datasheets or from calibra